Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan
The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigated alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted for test and time point. In total, 88 laboratories participated in quality control at up to 13 time points, typically using 37 to 54 histology samples per run. In meta-analysis across all time points, no laboratory had sensitivity or specificity below 80%. Current methods, which present sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference from the reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences from the reference standard identified by Generalized Estimating Equation (GEE) modeling also had reduced performance in meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program is well designed, and with this modeling approach it has sufficient precision to measure performance at each time point and to allow laboratories with significantly lower performance to be targeted for advice.
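As a rough illustration of the repeated-measurements idea, the sketch below fits a binomial GEE in Python (statsmodels) with laboratory as the cluster and test type and run as adjustment covariates; the data, effect sizes, and covariate coding are invented stand-ins, not the program's.

```python
# Hedged sketch (not the program's code): GEE with a binomial family models
# agreement with the reference standard over repeated runs; laboratory is the
# cluster, and runs within a laboratory share a working correlation.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
rows = []
for lab in range(10):
    lab_shift = rng.normal(0.0, 0.4)            # lab-specific performance
    for run in range(6):
        for test in ("ER", "PR"):
            p = 1.0 / (1.0 + np.exp(-(2.2 + lab_shift)))
            for agree in rng.binomial(1, p, size=40):
                rows.append(dict(lab=lab, run=run, test=test, agree=agree))
df = pd.DataFrame(rows)

model = sm.GEE.from_formula(
    "agree ~ C(lab) + C(test) + C(run)",
    groups="lab", data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),    # runs correlated within a lab
)
print(model.fit().summary().tables[1])
```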
Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease
Jie, Biao; Liu, Mingxia; Liu, Jun
2016-01-01
Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. In practice, multiple time-points of data are often available in brain imaging applications, which can be used by longitudinal analysis methods to better uncover disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aimed at longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function: a fused smoothness term, which requires that the differences between two successive weight vectors from adjacent time-points be small, and an output smoothness term, which requires that the differences between the outputs of two successive models from adjacent time-points also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on the ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method achieves improved regression performance and also helps discover disease-related biomarkers. PMID:27093313
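The objective described above can be written down compactly; the sketch below evaluates one plausible formulation in numpy, with a squared-error loss, an L2,1 group term across time points, and quadratic fused/output smoothness penalties. The exact penalty forms and the weights (lam_group, lam_fuse, lam_out) are assumptions, not the paper's.

```python
# Minimal numpy sketch of a temporally-constrained group sparse objective.
import numpy as np

def tgsl_objective(Ws, Xs, ys, lam_group=0.1, lam_fuse=0.1, lam_out=0.1):
    """Ws: (T, d) weights, one row per time point.
    Xs: list of T (n, d) feature matrices; ys: list of T (n,) targets."""
    T, d = Ws.shape
    loss = sum(np.sum((Xs[t] @ Ws[t] - ys[t]) ** 2) for t in range(T))
    # group term: L2,1 norm grouping each region's weights across time points
    group = np.sum(np.sqrt(np.sum(Ws ** 2, axis=0)))
    # fused smoothness: successive weight vectors should stay close
    fuse = sum(np.sum((Ws[t + 1] - Ws[t]) ** 2) for t in range(T - 1))
    # output smoothness: successive model outputs should stay close
    out = sum(np.sum((Xs[t + 1] @ Ws[t + 1] - Xs[t] @ Ws[t]) ** 2)
              for t in range(T - 1))
    return loss + lam_group * group + lam_fuse * fuse + lam_out * out

rng = np.random.default_rng(1)
T, n, d = 4, 30, 50
Xs = [rng.normal(size=(n, d)) for _ in range(T)]
ys = [rng.normal(size=n) for _ in range(T)]
print(tgsl_objective(np.zeros((T, d)), Xs, ys))
```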
Zhou, Nan; Guo, Tingting; Zheng, Huanhuan; Pan, Xia; Chu, Chen; Dou, Xin; Li, Ming; Liu, Song; Zhu, Lijing; Liu, Baorui; Chen, Weibo; He, Jian; Yan, Jing; Zhou, Zhengyang; Yang, Xiaofeng
2017-01-01
We investigated apparent diffusion coefficient (ADC) histogram analysis to evaluate radiation-induced parotid damage and predict xerostomia degrees in nasopharyngeal carcinoma (NPC) patients receiving radiotherapy. The imaging of bilateral parotid glands in NPC patients was conducted 2 weeks before radiotherapy (time point 1), one month after radiotherapy (time point 2), and four months after radiotherapy (time point 3). From time point 1 to 2, parotid volume, skewness, and kurtosis decreased (P < 0.001, P = 0.001, and P < 0.001, respectively), but all other ADC histogram parameters increased (all P < 0.001, except P = 0.006 for standard deviation [SD]). From time point 2 to 3, parotid volume continued to decrease (P = 0.022), and SD, 75th, and 90th percentiles continued to increase (P = 0.024, 0.010, and 0.006, respectively). Early change rates of parotid ADCmean, ADCmin, kurtosis, and 25th, 50th, 75th, and 90th percentiles (from time point 1 to 2) correlated with the late parotid atrophy rate (from time point 1 to 3) (all P < 0.05). Multiple linear regression analysis revealed correlations among parotid volume, time point, and ADC histogram parameters. Early mean change rates of bilateral parotid SD and ADCmax could predict late xerostomia degrees at seven months after radiotherapy (three months after time point 3), with AUCs of 0.781 and 0.818 (P = 0.014 and 0.005, respectively). ADC histogram parameters were reproducible (intraclass correlation coefficient, 0.830-0.999). ADC histogram analysis could be used to evaluate radiation-induced parotid damage noninvasively and to predict late xerostomia degrees of NPC patients treated with radiotherapy. PMID:29050274
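For readers who want to reproduce this style of analysis, the snippet below computes the named histogram parameters and the early-change-rate quantity from a vector of voxel ADC values; the values are synthetic and the gamma-distribution shape is an arbitrary stand-in for a real parotid ADC map.

```python
# Illustrative computation of ADC histogram parameters (synthetic voxel values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
adc = rng.gamma(shape=9.0, scale=1.2e-4, size=5000)  # synthetic ADC values, mm^2/s

params = {
    "ADCmean": adc.mean(), "ADCmin": adc.min(), "ADCmax": adc.max(),
    "SD": adc.std(ddof=1),
    "skewness": stats.skew(adc), "kurtosis": stats.kurtosis(adc),
    **{f"p{q}": np.percentile(adc, q) for q in (25, 50, 75, 90)},
}

def change_rate(v1, v2):
    # early change rate between time point 1 and 2, as used for the correlations
    return (v2 - v1) / v1

for k, v in params.items():
    print(f"{k}: {v:.3e}")
```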
Silveira, Hevely Saray Lima; Simões-Zenari, Marcia; Kulcsar, Marco Aurélio; Cernea, Claudio Roberto; Nemr, Kátia
2017-10-27
The supracricoid partial laryngectomy allows the preservation of laryngeal functions with good local cancer control. The aim was to assess laryngeal configuration and voice analysis data following the performance of a combination of two vocal exercises, the prolonged /b/ vocal exercise combined with the vowel /e/ using chest and arm pushing, with different durations, among individuals who have undergone supracricoid laryngectomy. Eleven patients who had undergone supracricoid partial laryngectomy with cricohyoidoepiglottopexy (CHEP) were evaluated using voice recording. Four judges separately performed an auditory-perceptual analysis of the voices, presented in random order. To assess intrajudge reliability, 70% of the voice samples were repeated. The intraclass correlation coefficient was used to analyze the reliability of the judges. For each judge's ratings, the comparison between baseline (time point 0), after the first series of exercises (time point 1), after the second series (time point 2), after the third series (time point 3), after the fourth series (time point 4), and after the fifth and final series (time point 5) was made using the Friedman test with a significance level of 5%. The data on laryngeal configuration were subjected to descriptive analysis. The evaluation considered the ratings of judge 1, who had the greatest reliability. There was an improvement in the overall grade of vocal deviation, roughness, and breathiness from time point 4 (T4). The prolonged /b/ vocal exercise, combined with the vowel /e/ using chest- and arm-pushing exercises, was associated with an improvement in the overall grade of vocal deviation, roughness, and breathiness starting at time point 4 among patients who had undergone supracricoid laryngectomy with CHEP reconstruction. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
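The repeated-measures comparison described (six rating time points per patient, 5% significance) maps onto a standard Friedman test; a minimal sketch with invented ratings:

```python
# Minimal sketch: Friedman test across the six time points (T0..T5) for one
# perceptual scale, using invented 0-3 ratings for 11 patients.
from scipy.stats import friedmanchisquare

ratings = {
    "T0": [3, 2, 3, 3, 2, 3, 3, 2, 3, 3, 2],
    "T1": [3, 2, 3, 2, 2, 3, 3, 2, 3, 2, 2],
    "T2": [2, 2, 3, 2, 2, 2, 3, 2, 2, 2, 2],
    "T3": [2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1],
    "T4": [1, 1, 2, 1, 1, 2, 2, 1, 1, 2, 1],
    "T5": [1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1],
}
stat, p = friedmanchisquare(*ratings.values())
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}  (reject at 5%: {p < 0.05})")
```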
Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej
2015-01-01
The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined the flexion angles of the knee and hip joints using a computer program. With the real-time observation method, the judges gave a total of 5.8 error points, with an arithmetic mean of 0.16 points, for the flexion of the knee joints; with the frame-by-frame video analysis method, the total amounted to 8.6 error points and the mean value to 0.24 error points. For the excessive flexion of the hip joints, the sum of the error values was 2.2 error points and the arithmetic mean 0.06 error points during real-time observation, whereas the sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean 0.30 error points. Error values obtained through frame-by-frame video analysis of movement technique were thus higher than those obtained through real-time observation. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Both real-time observation and high-speed video analysis performed without determining exact joint angles were found to be insufficient tools for improving the quality of judging.
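The joint-flexion measurement the judges performed with the computer program amounts to an angle between two segment vectors at the joint; a small sketch (coordinates invented, and the 180°-extended convention is an assumption):

```python
# Angle at a joint from 2D landmark coordinates (hypothetical points).
import numpy as np

def flexion_angle(a, b, c):
    """Angle at joint b given 2D points a-b-c, in degrees
    (180 deg = fully extended, smaller = more flexed)."""
    v1, v2 = np.asarray(a) - b, np.asarray(c) - b
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

hip, knee, ankle = (0.0, 1.0), (0.05, 0.5), (0.0, 0.0)
print(f"knee angle: {flexion_angle(hip, knee, ankle):.1f} deg")
```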
Tanaka, Yuji; Yamashita, Takako; Nagoshi, Masayasu
2017-04-01
Hydrocarbon contamination introduced during point, line, and map analyses in field emission electron probe microanalysis (FE-EPMA) was investigated to enable reliable quantitative analysis of trace amounts of carbon in steels. The increase in contamination on pure iron in point analysis is proportional to the number of iterations of beam irradiation, but not to the accumulated irradiation time. A combination of a longer dwell time and a single measurement with a liquid-nitrogen (LN2) trap as an anti-contamination device (ACD) is sufficient for quantitative point analysis. However, in line and map analyses, contamination increases with irradiation time in addition to the number of iterations, even when the LN2 trap and a plasma cleaner are used as ACDs. Thus, a shorter dwell time and a single measurement are preferred for line and map analyses, although it is difficult to eliminate the influence of contamination. While ring-like contamination around the irradiation point grows during electron-beam irradiation, contamination at the irradiation point increases during the blanking time after irradiation. This can explain the increase in contamination in iterative point analysis as well as in line and map analyses. Among the ACDs tested in this study, specimen heating at 373 K has a significant contamination-inhibiting effect. This technique makes it possible to obtain line and map analysis data with minimal influence of contamination. The FE-EPMA data are presented and discussed in terms of the contamination-formation mechanisms and the preferred experimental conditions for the quantification of trace carbon in steels. © The Author 2016. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved.
A patient-specific segmentation framework for longitudinal MR images of traumatic brain injury
NASA Astrophysics Data System (ADS)
Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido
2012-02-01
Traumatic brain injury (TBI) is a major cause of death and disability worldwide. Robust, reproducible segmentations of MR images with TBI are crucial for quantitative analysis of recovery and treatment efficacy. However, this is a significant challenge due to severe anatomy changes caused by edema (swelling), bleeding, tissue deformation, skull fracture, and other effects related to head injury. In this paper, we introduce a multi-modal image segmentation framework for longitudinal TBI images. The framework is initialized through manual input of primary lesion sites at each time point, which are then refined by a joint approach composed of Bayesian segmentation and construction of a personalized atlas. The personalized atlas construction estimates the average of the posteriors of the Bayesian segmentation at each time point and warps the average back to each time point to provide the updated priors for Bayesian segmentation. The difference between our approach and segmenting longitudinal images independently is that we use the information from all time points to improve the segmentations. Given a manual initialization, our framework automatically segments healthy structures (white matter, grey matter, cerebrospinal fluid) as well as different lesions such as hemorrhagic lesions and edema. Our framework can handle different sets of modalities at each time point, which provides flexibility in analyzing clinical scans. We show results on three subjects with acute baseline scans and chronic follow-up scans. The results demonstrate that joint analysis of all time points yields improved segmentation compared to independent analysis of the two time points.
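The joint prior-averaging idea can be shown in miniature. The toy below strips out registration/warping and the multi-modal machinery and keeps only the loop: Gaussian class likelihoods per time point, posteriors averaged across time points into a shared "atlas" prior that feeds back into each time point's Bayesian update. All parameters are invented.

```python
# Toy 1-D illustration of the joint idea (no warping, two tissue classes).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
truth = rng.integers(0, 2, size=200)                              # shared anatomy
imgs = [truth * 2.0 + rng.normal(0, 0.8, 200) for _ in range(2)]  # 2 time points
means, sds = np.array([0.0, 2.0]), np.array([0.8, 0.8])
prior = np.full((200, 2), 0.5)

for _ in range(10):
    posteriors = []
    for img in imgs:
        like = norm.pdf(img[:, None], means, sds)
        post = prior * like
        post /= post.sum(axis=1, keepdims=True)   # Bayesian update per time point
        posteriors.append(post)
    prior = np.mean(posteriors, axis=0)           # personalized "atlas" prior

labels = prior.argmax(axis=1)
print("agreement with truth:", (labels == truth).mean())
```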
Performance analysis of a dual-tree algorithm for computing spatial distance histograms
Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni
2011-01-01
Many scientific and engineering fields produce large volumes of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges on database system design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytic, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversal strategy to process point-to-point distances in the visited tree nodes in batches, and thus require less time than the brute-force approach in which all pairwise distances must be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
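The batch-processing idea is easy to demonstrate outside a real tree. In the sketch below, a flat grid stands in for the tree's leaf nodes: whenever a cell pair's minimum and maximum possible separations fall in the same histogram bucket, the whole count product is added at once; otherwise the code falls back to pairwise distances. Cell size, bucket width, and data are arbitrary choices.

```python
# Grid-based sketch of the batch idea behind dual-tree SDH computation.
import numpy as np
from itertools import combinations_with_replacement

def sdh(points, bucket_w, cell_w, extent):
    nb = int(np.ceil(extent * np.sqrt(2) / bucket_w))
    hist = np.zeros(nb)
    idx = np.floor(points / cell_w).astype(int)      # assign points to cells
    cells = {}
    for p, ij in zip(points, map(tuple, idx)):
        cells.setdefault(ij, []).append(p)
    cells = {k: np.array(v) for k, v in cells.items()}
    diag = cell_w * np.sqrt(2)
    for (c1, p1), (c2, p2) in combinations_with_replacement(cells.items(), 2):
        center_d = cell_w * np.hypot(c1[0] - c2[0], c1[1] - c2[1])
        dmin, dmax = max(center_d - diag, 0.0), center_d + diag
        if c1 != c2 and int(dmin // bucket_w) == int(dmax // bucket_w):
            hist[int(dmin // bucket_w)] += len(p1) * len(p2)  # resolved in batch
        else:                                                 # pairwise fallback
            d = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=2)
            d = d[np.triu_indices(len(p1), 1)] if c1 == c2 else d.ravel()
            np.add.at(hist, (d // bucket_w).astype(int), 1)
    return hist

rng = np.random.default_rng(4)
pts = rng.uniform(0, 10.0, size=(500, 2))
print(sdh(pts, bucket_w=1.0, cell_w=0.5, extent=10.0)[:5])
```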
Analysis of Deep Seafloor Arrivals Observed on NPAL04
2012-12-03
transmission station to the scattering point (black line) to compute the time spent on the PE-predicted path to the scattering point. This time would … arrives at the OBSs at times corresponding to caustics of the PE-predicted time fronts, there are large-amplitude, late arrivals that occur between … caustics and even after the PE-predicted coda. Similar analysis was done for T500 to T2300 with similar results and is discussed in Section 4 of …
ERIC Educational Resources Information Center
de Rooij, Mark; Heiser, Willem J.
2005-01-01
Although RC(M)-association models have become a generally useful tool for the analysis of cross-classified data, the graphical representation resulting from such an analysis can at times be misleading. The relationships present between row category points and column category points cannot be interpreted by inter-point distances but only through…
Venter, Anre; Maxwell, Scott E; Bolig, Erika
2002-06-01
Adding a pretest as a covariate to a randomized posttest-only design increases statistical power, as does the addition of intermediate time points to a randomized pretest-posttest design. Although typically 5 waves of data are required in this instance to produce meaningful gains in power, a 3-wave intensive design allows the evaluation of the straight-line growth model and may reduce the effect of missing data. The authors identify the statistically most powerful method of data analysis in the 3-wave intensive design. If straight-line growth is assumed, the pretest-posttest slope must assume fairly extreme values for the intermediate time point to increase power beyond the standard analysis of covariance on the posttest with the pretest as covariate, ignoring the intermediate time point.
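A quick Monte Carlo sketch of the design question (with invented effect sizes and correlations, not the authors' values) compares rejection rates for ANCOVA on the posttest with pretest covariate, with and without the intermediate measurement as an additional covariate:

```python
# Hedged power sketch for a 3-wave design; effect sizes and noise are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

def simulate(n=60, effect=0.5, nsim=300):
    hits = {"ancova": 0, "ancova_mid": 0}
    for _ in range(nsim):
        g = rng.integers(0, 2, n)                 # randomized groups
        subj = rng.normal(0, 1, n)                # subject intercepts
        pre = subj + rng.normal(0, 0.5, n)
        mid = subj + 0.5 * effect * g + rng.normal(0, 0.5, n)
        post = subj + effect * g + rng.normal(0, 0.5, n)
        df = pd.DataFrame(dict(g=g, pre=pre, mid=mid, post=post))
        hits["ancova"] += smf.ols("post ~ pre + g", df).fit().pvalues["g"] < 0.05
        hits["ancova_mid"] += smf.ols("post ~ pre + mid + g", df).fit().pvalues["g"] < 0.05
    return {k: v / nsim for k, v in hits.items()}

print(simulate())
```

Note that conditioning on the intermediate wave, which is itself affected by treatment, is exactly why the paper finds gains only for fairly extreme pretest-posttest slopes.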
Geolocation and Pointing Accuracy Analysis for the WindSat Sensor
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.
2006-01-01
Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time-tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets, resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.
Ball, Kevin A; Best, Russell J; Wrigley, Tim V
2003-07-01
In this study, we examined the relationships between body sway, aim point fluctuation and performance in rifle shooting on an inter- and intra-individual basis. Six elite shooters performed 20 shots under competition conditions. For each shot, body sway parameters and four aim point fluctuation parameters were quantified for the time periods 5 s to shot, 3 s to shot and 1 s to shot. Three parameters were used to indicate performance. An AMTI LG6-4 force plate was used to measure body sway parameters, while a SCATT shooting analysis system was used to measure aim point fluctuation and shooting performance. Multiple regression analysis indicated that body sway was related to performance for four shooters. Also, body sway was related to aim point fluctuation for all shooters. These relationships were specific to the individual, with the strength of association, parameters of importance and time period of importance different for different shooters. Correlation analysis of significant regressions indicated that, as body sway increased, performance decreased and aim point fluctuation increased for most relationships. We conclude that body sway and aim point fluctuation are important in elite rifle shooting and performance errors are highly individual-specific at this standard. Individual analysis should be a priority when examining elite sports performance.
Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model.
Musekiwa, Alfred; Manda, Samuel O M; Mwambi, Henry G; Chen, Ding-Geng
2016-01-01
Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes, contrasting different covariance structures for the dependence between effect sizes both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and illustrate the methods with a practical example: a meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18, and 24 months post-randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results.
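A minimal numeric sketch of the multivariate pooling step: effect sizes at four time points per study are combined by GLS under an assumed AR(1)-like within-study correlation. The study count matches the example (17 trials) but all values are synthetic, and the paper's between-study covariance layer is omitted.

```python
# Fixed-effect multivariate meta-analysis sketch: pool effects at 4 time points.
import numpy as np

rng = np.random.default_rng(6)
T, n_studies, rho = 4, 17, 0.6
true = np.array([0.30, 0.25, 0.20, 0.15])          # effects at 6/12/18/24 months

XtSiX = np.zeros((T, T)); XtSiy = np.zeros(T)
for _ in range(n_studies):
    se = rng.uniform(0.05, 0.15, T)
    corr = rho ** np.abs(np.subtract.outer(range(T), range(T)))  # AR(1)-like
    S = np.outer(se, se) * corr                    # within-study covariance
    y = rng.multivariate_normal(true, S)
    Sinv = np.linalg.inv(S)
    XtSiX += Sinv                                  # design X = I_T per study
    XtSiy += Sinv @ y

beta = np.linalg.solve(XtSiX, XtSiy)
se_beta = np.sqrt(np.diag(np.linalg.inv(XtSiX)))
for t, (b, s) in enumerate(zip(beta, se_beta)):
    print(f"time {t+1}: pooled effect {b:.3f} (SE {s:.3f})")
```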
Queueing analysis of a canonical model of real-time multiprocessors
NASA Technical Reports Server (NTRS)
Krishna, C. M.; Shin, K. G.
1983-01-01
A logical classification of multiprocessor structures from the point of view of control applications is presented, along with a computation of the response-time distribution for a canonical model of a real-time multiprocessor. The multiprocessor is approximated by a blocking model. Two separate models are derived: one from the system's point of view, and the other from the point of view of an incoming task.
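As a hedged illustration of what a blocking approximation buys: if tasks that find all m processors busy are lost rather than queued, the blocking probability follows the classical Erlang-B recursion. This is a generic stand-in, not necessarily the paper's exact model.

```python
# Erlang-B recursion: probability an arriving task finds all m servers busy,
# given offered load a = lambda/mu.
def erlang_b(m: int, a: float) -> float:
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

for m in (2, 4, 8):
    print(f"m = {m}: blocking probability = {erlang_b(m, a=3.0):.4f}")
```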
Craig, Darren G; Kitto, Laura; Zafar, Sara; Reid, Thomas W D J; Martin, Kirsty G; Davidson, Janice S; Hayes, Peter C; Simpson, Kenneth J
2014-09-01
The innate immune system is profoundly dysregulated in paracetamol (acetaminophen)-induced liver injury. The neutrophil-lymphocyte ratio (NLR) is a simple bedside index with prognostic value in a number of inflammatory conditions. To evaluate the prognostic accuracy of the NLR in patients with significant liver injury following single time-point and staggered paracetamol overdoses. Time-course analysis of 100 single time-point and 50 staggered paracetamol overdoses admitted to a tertiary liver centre. Timed laboratory samples were correlated with time elapsed after overdose or admission, respectively, and the NLR was calculated. A total of 49/100 single time-point patients developed hepatic encephalopathy (HE). Median NLRs were higher at both 72 (P=0.0047) and 96 h after overdose (P=0.0041) in single time-point patients who died or were transplanted. Maximum NLR values by 96 h were associated with increasing HE grade (P=0.0005). An NLR of more than 16.7 during the first 96 h following overdose was independently associated with the development of HE [odds ratio 5.65 (95% confidence interval 1.67-19.13), P=0.005]. Maximum NLR values by 96 h were strongly associated with the requirement for intracranial pressure monitoring (P<0.0001), renal replacement therapy (P=0.0002) and inotropic support (P=0.0005). In contrast, in the staggered overdose cohort, the NLR was not associated with adverse outcomes or death/transplantation either at admission or subsequently. The NLR is a simple test which is strongly associated with adverse outcomes following single time-point, but not staggered, paracetamol overdoses. Future studies should assess the value of incorporating the NLR into existing prognostic and triage indices of single time-point paracetamol overdose.
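The index itself is a one-line computation from the differential blood count, checked here against the 16.7 cut-off reported above (counts invented):

```python
# NLR = absolute neutrophil count / absolute lymphocyte count (both 10^9/L).
def nlr(neutrophils: float, lymphocytes: float) -> float:
    return neutrophils / lymphocytes

for n, l in [(12.4, 0.6), (6.2, 1.8)]:
    r = nlr(n, l)
    flag = "high risk (>16.7)" if r > 16.7 else "below threshold"
    print(f"NLR = {r:.1f} -> {flag}")
```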
Sedentary Behaviour Profiling of Office Workers: A Sensitivity Analysis of Sedentary Cut-Points
Boerema, Simone T.; Essink, Gerard B.; Tönis, Thijs M.; van Velsen, Lex; Hermens, Hermie J.
2015-01-01
Measuring sedentary behaviour and physical activity with wearable sensors provides detailed information on activity patterns and can serve health interventions. At the basis of activity analysis stands the ability to distinguish sedentary from active time. As there is no consensus regarding the optimal cut-point for classifying sedentary behaviour, we studied the consequences of using different cut-points for this type of analysis. We conducted a battery of sitting and walking activities with 14 office workers, wearing the Promove 3D activity sensor to determine the optimal cut-point (in counts per minute (m·s⁻²)) for classifying sedentary behaviour. Then, 27 office workers wore the sensor for five days. We evaluated the sensitivity of five sedentary-pattern measures for various sedentary cut-points and found an optimal cut-point for sedentary behaviour of 1660 × 10⁻³ m·s⁻². Total sedentary time was not sensitive to cut-point changes within ±10% of this optimal cut-point; other sedentary-pattern measures were not sensitive to changes within the ±20% interval. The results from studies analyzing sedentary patterns using different cut-points can be compared within these boundaries. Furthermore, commercial, hip-worn activity trackers can implement feedback and interventions on sedentary behaviour patterns using these cut-points. PMID:26712758
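The sensitivity check reads directly as code: sweep cut-points around the optimum and watch how total sedentary time responds. Counts below are synthetic draws, not Promove 3D output.

```python
# Cut-point sensitivity sketch with synthetic per-minute activity values.
import numpy as np

rng = np.random.default_rng(7)
counts = np.concatenate([rng.gamma(2.0, 400e-3, 3000),    # sitting-like minutes
                         rng.gamma(9.0, 600e-3, 1500)])   # walking-like minutes
optimum = 1660e-3                                         # value from the text
for scale in (0.8, 0.9, 1.0, 1.1, 1.2):
    cp = optimum * scale
    sed_minutes = int((counts < cp).sum())
    print(f"cut-point {cp:.3f} m/s^2 -> total sedentary time {sed_minutes} min")
```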
Geo-Cultural Analysis Tool (trademark) (GCAT)
2008-03-10
Patel, Nitesh V; Sundararajan, Sri; Keller, Irwin; Danish, Shabbar
2018-01-01
Objective: Magnetic resonance (MR)-guided stereotactic laser amygdalohippocampectomy is a minimally invasive procedure for the treatment of refractory epilepsy in patients with mesial temporal sclerosis. Limited data exist on post-ablation volumetric trends associated with the procedure. Methods: 10 patients with mesial temporal sclerosis underwent MR-guided stereotactic laser amygdalohippocampectomy. Three independent raters computed ablation volumes at the following time points: pre-ablation (PreA), immediate post-ablation (IPA), 24 hours post-ablation (24PA), first follow-up post-ablation (FPA), and greater than three months follow-up post-ablation (>3MPA), using OsiriX DICOM Viewer (Pixmeo, Bernex, Switzerland). Statistical trends in post-ablation volumes were determined across the time points. Results: MR-guided stereotactic laser amygdalohippocampectomy produces a rapid rise and distinct peak in post-ablation volume immediately following the procedure. IPA volumes are significantly higher than those at all other time points. Comparing individual time points within each rater's dataset (intra-rater), a significant difference was seen between the IPA time point and all others. There was no statistical difference between the 24PA, FPA, and >3MPA time points. A correlation analysis demonstrated the strongest correlations at the 24PA (r=0.97), FPA (r=0.95), and >3MPA time points (r=0.99), with a weaker correlation at IPA (r=0.92). Conclusion: MR-guided stereotactic laser amygdalohippocampectomy produces a maximal increase in post-ablation volume immediately following the procedure, which decreases and stabilizes by 24 hours post-procedure and beyond three months of follow-up. Based on the correlation analysis, the lower inter-rater reliability at the IPA time point suggests it may be less accurate to assess volume at this time point. We recommend that post-ablation volume assessments be made at least 24 hours after stereotactic laser ablation of the amygdalohippocampal complex (SLAH).
Jastrzębski, Zbigniew; Kiszałkiewicz, Justyna; Brzeziański, Michał; Pastuszak-Lewandoska, Dorota; Radzimińki, Łukasz; Brzeziańska-Lasota, Ewa; Jegier, Anna
2017-01-01
Recent studies have shown that, depending on the type of training and its duration, the expression levels of selected circulating myomiRNAs (c-miR-27a,b, c-miR-29a,b,c, c-miR-133a) differ and correlate with physiological indicators of adaptation to physical activity. The aim was to analyse the expression of selected classes of miRNAs in soccer players during different periods of their training cycle. The study involved 22 soccer players aged 17-18 years. The multi-stage 20-m shuttle run test was used to estimate VO2 max among the soccer players. Serum samples were collected at baseline (time point I), after one week (time point II), and after 2 months of training (time point III). The analysis of the relative quantification (RQ) level of three exosomal myomiRNAs, c-miR-27b, c-miR-29a, and c-miR-133, was performed by quantitative polymerase chain reaction (qPCR) at the three time points – before the training, after 1 week of training, and after the completion of two months of competition-season training. The expression analysis showed low expression levels (relative to references) of all evaluated myomiRNAs before the training cycle. Analysis performed after a week of the training cycle and after completion of the entire training cycle showed elevated expression of all tested myomiRNAs. Statistical analysis revealed significant differences between the first and second time points for c-miR-27b and c-miR-29a; between the first and third time points for c-miR-27b and c-miR-29a; and between the second and third time points for c-miR-27b. Statistical analysis also showed a positive correlation between the levels of c-miR-29a and VO2 max. Two months of training affected the expression of c-miR-27b and c-miR-29a in soccer players. The increased expression of c-miR-27b and c-miR-29a with training could indicate their probable role in the adaptation processes that take place in the muscular system. Possibly, the expression of c-miR-29a will be found to be involved in cardiorespiratory fitness in future research. PMID:29472735
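Relative quantification by qPCR is conventionally computed with the 2^(-ΔΔCt) method; assuming that is what underlies the RQ levels here (the abstract does not spell it out), a sketch with invented Ct values:

```python
# 2^(-ddCt) relative quantification of a target miRNA against a reference gene,
# comparing a later time point with baseline (Ct values invented).
def relative_quantification(ct_target_t, ct_ref_t, ct_target_0, ct_ref_0):
    dct_t = ct_target_t - ct_ref_t        # normalize to reference at time t
    dct_0 = ct_target_0 - ct_ref_0        # normalize to reference at baseline
    return 2.0 ** -(dct_t - dct_0)        # RQ relative to baseline

print(f"RQ c-miR-29a = {relative_quantification(24.1, 18.0, 26.0, 18.2):.2f}")
```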
A method for analyzing temporal patterns of variability of a time series from Poincare plots.
Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E
2012-07-01
The Poincaré plot is a popular two-dimensional, time series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variabilities. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding a richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis and can be applied more generally, including Poincaré plots with multiple clusters, and more consistently than the conventional measures and can address questions regarding potential structure underlying the variability of a data set.
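A reduced sketch of the lagged-plot ingredient: build (x_n, x_{n+k}) pairs for several delays k and summarize each with the standard SD1/SD2 descriptors. The temporal weighting that distinguishes TPV proper is not reproduced here.

```python
# Lagged Poincare descriptors for an R-R interval series (synthetic data).
import numpy as np

def sd1_sd2(x, lag=1):
    a, b = x[:-lag], x[lag:]
    sd1 = np.std((b - a) / np.sqrt(2), ddof=1)   # spread across identity line
    sd2 = np.std((b + a) / np.sqrt(2), ddof=1)   # spread along identity line
    return sd1, sd2

rng = np.random.default_rng(8)
rr = 800 + 50 * np.sin(np.arange(600) / 10) + rng.normal(0, 20, 600)  # ms
for k in (1, 2, 5, 10):
    sd1, sd2 = sd1_sd2(rr, k)
    print(f"lag {k:2d}: SD1 = {sd1:5.1f} ms, SD2 = {sd2:5.1f} ms")
```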
Analysis of Giardin expression during encystation of Giardia lamblia
USDA-ARS?s Scientific Manuscript database
The present study analyzed giardin transcription in trophozoites and cysts during encystation of Giardia lamblia. Encystment was induced using standard methods, and the number of trophozoites and cysts were counted at various time-points during encystation. At all time points, RNA from both stages...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hofschen, S.; Wolff, I.
1996-08-01
Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using the Fourier transform. The introduced method of time-series analysis to extract propagation and attenuation constants reduces the required computation time drastically. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. Therefore, it is possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as they are used in MMIC technology. The simulation results are compared with measurements and show good agreement.
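The time-series alternative to Fourier analysis can be illustrated with a two-tap linear predictor (Prony-style): a short record of a damped sinusoid determines a pole whose magnitude and angle give the damping and oscillation constants. The sample step and constants below are invented, and this is a generic sketch rather than the paper's estimator.

```python
# Prony-style extraction of damping/oscillation constants from a short record.
import numpy as np

dt, alpha, omega = 1e-12, 2e9, 2 * np.pi * 30e9     # assumed step and truth
n = np.arange(200)
s = np.exp(-alpha * n * dt) * np.cos(omega * n * dt)

# least-squares linear prediction: s[k] ~ a1*s[k-1] + a2*s[k-2]
A = np.column_stack([s[1:-1], s[:-2]])
a1, a2 = np.linalg.lstsq(A, s[2:], rcond=None)[0]
pole = np.roots([1.0, -a1, -a2])[0]                 # one of a conjugate pair
alpha_est = -np.log(np.abs(pole)) / dt
omega_est = abs(np.angle(pole)) / dt
print(f"alpha: {alpha_est:.3e} (true {alpha:.1e}), f: {omega_est/2/np.pi:.3e} Hz")
```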
Dual keel Space Station payload pointing system design and analysis feasibility study
NASA Technical Reports Server (NTRS)
Smagala, Tom; Class, Brian F.; Bauer, Frank H.; Lebair, Deborah A.
1988-01-01
A Space Station attached Payload Pointing System (PPS) has been designed and analyzed. The PPS is responsible for maintaining fixed payload pointing in the presence of disturbances applied to the Space Station. The payload considered in this analysis is the Solar Optical Telescope. System performance is evaluated via digital time simulations by applying various disturbance forces to the Space Station. The PPS meets the Space Station articulated pointing requirement for all disturbances except Shuttle docking and some centrifuge cases.
Effects of directional uncertainty on visually-guided joystick pointing.
Berryhill, Marian; Kveraga, Kestutis; Hughes, Howard C
2005-02-01
Reaction times generally follow the predictions of Hick's law as stimulus-response uncertainty increases, although notable exceptions include the oculomotor system. Saccadic and smooth pursuit eye movement reaction times are independent of stimulus-response uncertainty. Previous research showed that joystick pointing to targets, a motor analog of saccadic eye movements, is only modestly affected by increased stimulus-response uncertainty; however, a no-uncertainty condition (simple reaction time to 1 possible target) was not included. Here, we re-evaluate manual joystick pointing including a no-uncertainty condition. Analysis indicated simple joystick pointing reaction times were significantly faster than choice reaction times. Choice reaction times (2, 4, or 8 possible target locations) only slightly increased as the number of possible targets increased. These data suggest that, as with joystick tracking (a motor analog of smooth pursuit eye movements), joystick pointing is more closely approximated by a simple/choice step function than the log function predicted by Hick's law.
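Testing Hick's law on such data is a one-line fit of RT = a + b·log2(N+1); with the step-like pattern reported for joystick pointing, the fit leaves large residuals (RT values below are invented for illustration):

```python
# Fit mean reaction times to Hick's law and inspect the residuals.
import numpy as np

n_targets = np.array([1, 2, 4, 8])
rt_ms = np.array([310, 420, 435, 450])     # fast simple RT, then flat choice RTs

b, a = np.polyfit(np.log2(n_targets + 1), rt_ms, 1)
pred = a + b * np.log2(n_targets + 1)
print(f"RT ~ {a:.0f} + {b:.0f} * log2(N+1)")
print("residuals (ms):", np.round(rt_ms - pred, 1))  # large -> step-like, not log
```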
An analysis of neural receptive field plasticity by point process adaptive filtering
Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor
2001-01-01
Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
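A minimal instance of the idea, reduced from place fields to a single drifting Poisson rate: the parameter is stepped along the instantaneous gradient of the point-process log likelihood at each bin. Bin width and learning rate here are assumptions.

```python
# Adaptive tracking of a drifting Poisson rate via instantaneous steepest
# descent on the point-process log likelihood (lambda = exp(theta)).
import numpy as np

rng = np.random.default_rng(9)
dt, eps, n = 0.001, 0.05, 60000
true_rate = 10 + 8 * np.sin(np.arange(n) * dt * 0.5)    # slowly drifting, Hz
spikes = rng.random(n) < true_rate * dt                 # Bernoulli approximation

theta = np.log(5.0)
est = np.empty(n)
for k in range(n):
    lam = np.exp(theta)
    # gradient of dN*log(lam*dt) - lam*dt with respect to theta
    theta += eps * (spikes[k] - lam * dt)
    est[k] = np.exp(theta)

print(f"true rate at end: {true_rate[-1]:.1f} Hz; tracked estimate: {est[-1]:.1f} Hz")
```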
Crossfit analysis: a novel method to characterize the dynamics of induced plant responses.
Jansen, Jeroen J; van Dam, Nicole M; Hoefsloot, Huub C J; Smilde, Age K
2009-12-16
Many plant species show induced responses that protect them against exogenous attacks. These responses involve the production of many different bioactive compounds. Plant species belonging to the Brassicaceae family produce defensive glucosinolates, which may greatly influence their favorable nutritional properties for humans. Each responding compound may have its own dynamic profile and metabolic relationships with other compounds. The chemical background of the induced response is therefore highly complex and may therefore not reveal all the properties of the response in any single model. This study therefore aims to describe the dynamics of the glucosinolate response, measured at three time points after induction in a feral Brassica, by a three-faceted approach, based on Principal Component Analysis. First the large-scale aspects of the response are described in a 'global model' and then each time-point in the experiment is individually described in 'local models' that focus on phenomena that occur at specific moments in time. Although each local model describes the variation among the plants at one time-point as well as possible, the response dynamics are lost. Therefore a novel method called the 'Crossfit' is described that links the local models of different time-points to each other. Each element of the described analysis approach reveals different aspects of the response. The crossfit shows that smaller dynamic changes may occur in the response that are overlooked by global models, as illustrated by the analysis of a metabolic profiling dataset of the same samples.
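A loose sketch of the local-models ingredient (not the published Crossfit algorithm): fit a PCA per time point, then relate adjacent time points by projecting one time point's samples onto its neighbour's loadings and comparing retained variance. Data are synthetic.

```python
# Local PCA models per time point, linked by cross-projection (illustrative).
import numpy as np

rng = np.random.default_rng(10)
T, n, p = 3, 40, 12
data = [rng.normal(size=(n, p)) + t * 0.5 for t in range(T)]   # synthetic plants

def pca_loadings(X, k=2):
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k].T                                            # (p, k) loadings

loadings = [pca_loadings(X) for X in data]
for t in range(T - 1):
    Xc = data[t + 1] - data[t + 1].mean(axis=0)
    cross_scores = Xc @ loadings[t]        # next time point under earlier model
    own_scores = Xc @ loadings[t + 1]      # next time point under its own model
    retained = np.var(cross_scores) / np.var(own_scores)
    print(f"t{t+1}->t{t+2}: variance kept under earlier model = {retained:.2f}")
```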
Estimating animal resource selection from telemetry data using point process models
Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.
2013-01-01
To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.
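A hedged sketch of the point-process approach: grid the study area, count locations per cell, and fit an inhomogeneous Poisson intensity with a habitat covariate via a Poisson GLM. The covariate surface, coefficients, and counts are all synthetic.

```python
# Inhomogeneous Poisson point process fit via a gridded Poisson GLM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
g = 25                                                  # g x g grid, unit square
xc, yc = np.meshgrid((np.arange(g) + .5) / g, (np.arange(g) + .5) / g)
habitat = np.exp(-((xc - .6) ** 2 + (yc - .4) ** 2) / .05)   # covariate surface
lam = np.exp(-3 + 2.0 * habitat) * 2000 / g ** 2             # intensity x area
counts = rng.poisson(lam).ravel()

X = sm.add_constant(habitat.ravel())
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(f"selection coefficient for habitat: {fit.params[1]:.2f} (true 2.0)")
```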
Role of Timing in Assessment of Nerve Regeneration
Brenner, Michael J.; Moradzadeh, Arash; Myckatyn, Terence M.; Tung, Thomas H. H.; Mendez, Allen B.; Hunter, Daniel A.; Mackinnon, Susan E.
2014-01-01
Small animal models are indispensable for research on nerve injury and reconstruction, but their superlative regenerative potential may confound experimental interpretation. This study investigated time-dependent neuroregenerative phenomena in rodents. Forty-six Lewis rats were randomized to three nerve allograft groups treated with 2 mg/(kg·day) tacrolimus, 5 mg/(kg·day) cyclosporine A, or placebo injection. Nerves were subjected to histomorphometric and walking track analysis at serial time points. Tacrolimus increased fiber density, percent neural tissue, and nerve fiber count and accelerated functional recovery at 40 days, but these differences were undetectable by 70 days. Serial walking track analysis showed a similar pattern of recovery. A 'blow-through' effect is observed in rodents, whereby an advancing nerve front overcomes an experimental defect given sufficient time, rendering experimental groups indistinguishable at late time points. Selection of validated time points and corroboration in higher animal models are essential prerequisites for the clinical application of basic research on nerve regeneration. PMID:18381659
Lie Symmetry Analysis and Explicit Solutions of the Time Fractional Fifth-Order KdV Equation
Wang, Gang wei; Xu, Tian zhou; Feng, Tao
2014-01-01
In this paper, using the Lie group analysis method, we study the invariance properties of the time-fractional fifth-order KdV equation. A systematic procedure for deriving Lie point symmetries of the time-fractional fifth-order KdV equation is performed. In the sense of point symmetry, all of the vector fields and the symmetry reductions of the fractional fifth-order KdV equation are obtained. Finally, by virtue of the sub-equation method, some exact solutions to the fractional fifth-order KdV equation are provided. PMID:24523885
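As a hedged illustration of what such a point symmetry looks like (derived here by dimensional analysis for a representative equation, not quoted from the paper, which treats its own form):

```latex
% Representative time-fractional fifth-order KdV and a scaling point symmetry:
% under x -> \lambda x, t -> \lambda^{5/\alpha} t, u -> \lambda^{-4} u,
% every term scales by \lambda^{-9}.
\frac{\partial^{\alpha} u}{\partial t^{\alpha}} + u\,u_x + u_{xxxxx} = 0,
\qquad 0 < \alpha \le 1,
\qquad
X = x\,\partial_x + \frac{5t}{\alpha}\,\partial_t - 4u\,\partial_u .
```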
Twisted Gastrulation as a BMP Modulator during Mammary Gland Development and Tumorigenesis
2014-05-01
present at the onset of puberty, roughly 6 weeks of age, but not at later time points we began our Q-PCR analysis at this time point. Analyzing … rudimentary ductal tree that during puberty, pregnancy and lactation undergoes complex morphological changes (Hens and Wysolmerski, 2005; Hovey and Trott …) proliferates and elongates into the developing fat pad forming a rudimentary tree. Development is arrested at this point until puberty. At puberty, terminal …
ERIC Educational Resources Information Center
Finch, W. Holmes; Shim, Sungok Serena
2018-01-01
Collection and analysis of longitudinal data is an important tool in understanding growth and development over time in a whole range of human endeavors. Ideally, researchers working in the longitudinal framework are able to collect data at more than two points in time, as this will provide them with the potential for a deeper understanding of the…
New clinical insights for transiently evoked otoacoustic emission protocols.
Hatzopoulos, Stavros; Grzanka, Antoni; Martini, Alessandro; Konopka, Wieslaw
2009-08-01
The objective of the study was to optimize the area of time-frequency analysis and then investigate any stable patterns in the time-frequency structure of otoacoustic emissions in a population of 152 healthy adults sampled over one year. TEOAE recordings were collected from 302 ears in subjects presenting normal hearing and normal impedance values. The responses were analyzed by the Wigner-Ville distribution (WVD). The TF region of analysis was optimized by examining the energy content of various rectangular and triangular TF regions. The TEOAE components from the initial recordings and the recordings made 12 months later were compared in the optimized TF region. The best region for TF analysis was identified with base point 1 at 2.24 ms and 2466 Hz, base point 2 at 6.72 ms and 2466 Hz, and the top point at 2.24 ms and 5250 Hz. Correlation indices from the optimized TF region were higher than the traditional indices in the selected time window, and the difference was statistically significant. An analysis of the TF data within a 12-month period indicated an 85% TEOAE component similarity in 90% of the tested subjects.
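Scoring a candidate triangular TF region like the one above is mostly bookkeeping: build a mask over the time-frequency grid from the three vertices and integrate the map's energy inside it. The TF map below is synthetic, not a real WVD.

```python
# Energy inside a triangular time-frequency region (vertices from the text).
import numpy as np
from matplotlib.path import Path

t = np.linspace(0, 10e-3, 200)                     # s
f = np.linspace(0, 6000, 150)                      # Hz
tt, ff = np.meshgrid(t, f)
tf_energy = np.exp(-((tt - 4e-3) / 2e-3) ** 2 - ((ff - 3000) / 1500) ** 2)

tri = Path([(2.24e-3, 2466), (6.72e-3, 2466), (2.24e-3, 5250)])  # base1, base2, top
mask = tri.contains_points(np.column_stack([tt.ravel(), ff.ravel()]))
ratio = tf_energy.ravel()[mask].sum() / tf_energy.sum()
print(f"fraction of TF energy inside the triangular region: {ratio:.2f}")
```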
Tipping point analysis of ocean acoustic noise
NASA Astrophysics Data System (ADS)
Livina, Valerie N.; Brouwer, Albert; Harris, Peter; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen
2018-02-01
We apply tipping point analysis to a large record of ocean acoustic data to identify the main components of the acoustic dynamical system and study possible bifurcations and transitions of the system. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the data using time-series techniques. We analyse long-term and seasonal trends, system states and acoustic fluctuations to reconstruct a one-dimensional stochastic equation to approximate the acoustic dynamical system. We apply potential analysis to acoustic fluctuations and detect several changes in the system states in the past 14 years. These are most likely caused by climatic phenomena. We analyse trends in sound pressure level within different frequency bands and hypothesize a possible anthropogenic impact on the acoustic environment. The tipping point analysis framework provides insight into the structure of the acoustic data and helps identify its dynamic phenomena, correctly reproducing the probability distribution and scaling properties (power-law correlations) of the time series.
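A minimal version of the potential-analysis step: estimate U(x) = -log p(x) from a histogram of the fluctuations and count its local minima, each well being a candidate system state (toy bimodal data; in practice U is smoothed before counting wells).

```python
# Potential analysis sketch: wells of U(x) = -log p(x) as candidate states.
import numpy as np

rng = np.random.default_rng(12)
x = np.concatenate([rng.normal(-1.0, 0.3, 4000),
                    rng.normal(1.2, 0.4, 2500)])        # bimodal toy fluctuations

hist, edges = np.histogram(x, bins=60, density=True)
centers = (edges[:-1] + edges[1:]) / 2
valid = hist > 0
U, c = -np.log(hist[valid]), centers[valid]

wells = [c[i] for i in range(1, len(U) - 1)
         if U[i] < U[i - 1] and U[i] < U[i + 1]]        # local minima of U
print("detected state locations:", np.round(wells, 2))
```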
Chhabra, Anmol; Quinn, Andrea; Ries, Amanda
2018-01-01
Accurate history collection is integral to medication reconciliation. Studies support pharmacy involvement in the process, but assessment of global time spent is limited. The authors hypothesized that the location of a medication-focused interview would impact time spent. The objective was to compare time spent by pharmacists and nurses based on the location of a medication-focused interview. Time spent by the interviewing pharmacist, admitting nurse, and centralized pharmacist verifying admission orders was collected. Patient groups were based on whether the interview was conducted in the emergency department (ED) or on a medical floor. The primary end point was a composite of the 3 time points. Secondary end points were individual time components and the number and types of transcription discrepancies identified during medical floor interviews. Pharmacists and nurses spent an average of ten fewer minutes per ED patient versus a medical floor patient (P = .028). Secondary end points were not statistically significant. Transcription discrepancies were identified at a rate of 1 in 4 medications. Post hoc analysis revealed that the time spent by pharmacists and nurses was 2.4 minutes shorter per medication when the interview occurred in the ED (P < .001). The primary outcome was statistically and clinically significant. Limitations included the inability to blind and the lack of a cost-saving analysis. Pharmacist involvement in ED medication reconciliation leads to time savings during the admission process.
Alkhawaldeh, Khaled; Biersack, Hans-J; Henke, Anna; Ezziddin, Samer
2011-06-01
The aim of this study was to assess the utility of dual-time-point F-18 fluorodeoxyglucose positron emission tomography (F-18 FDG PET) in differentiating benign from malignant pleural disease in patients with non-small-cell lung cancer. A total of 61 patients with non-small-cell lung cancer and pleural effusion were included in this retrospective study. All patients had whole-body FDG PET/CT imaging at 60 ± 10 minutes post-FDG injection, whereas 31 patients had delayed second-time-point imaging of the chest repeated at 90 ± 10 minutes. Maximum standardized uptake values (SUV(max)) and the average percent change in SUV(max) (%SUV) between time point 1 and time point 2 were calculated. Malignancy was defined using the following criteria: (1) visual assessment using a 3-point grading scale; (2) SUV(max) ≥2.4; (3) %SUV ≥ +9; and (4) SUV(max) ≥2.4 and/or %SUV ≥ +9. Analysis of variance and receiver operating characteristic analysis were used for statistical analysis. P < 0.05 was considered significant. Follow-up revealed 29 patients with malignant pleural disease and 31 patients with benign pleural effusion. The average SUV(max) in malignant effusions was 6.5 ± 4 versus 2.2 ± 0.9 in benign effusions (P < 0.0001). The average %SUV in malignant effusions was +13 ± 10 versus -8 ± 11 in benign effusions (P < 0.0004). Sensitivity, specificity, and accuracy for the 4 criteria were as follows: (1) 86%, 72%, and 79%; (2) 93%, 72%, and 82%; (3) 67%, 94%, and 81%; (4) 100%, 94%, and 97%. Dual-time-point F-18 FDG PET can improve the diagnostic accuracy in differentiating benign from malignant pleural disease, with high sensitivity and good specificity.
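Criterion (4) transcribes directly into code; the retention-index computation and thresholds follow the numbers above, while the example SUV pairs are invented:

```python
# Dual-time-point criterion (4): malignant if SUVmax >= 2.4 and/or %SUV >= +9.
def pct_suv_change(suv1: float, suv2: float) -> float:
    return 100.0 * (suv2 - suv1) / suv1

def malignant(suv1: float, suv2: float) -> bool:
    return suv1 >= 2.4 or pct_suv_change(suv1, suv2) >= 9.0

for s1, s2 in [(1.8, 1.7), (2.0, 2.3), (3.5, 4.2)]:
    label = "malignant" if malignant(s1, s2) else "benign"
    print(f"SUV 60min = {s1}, SUV 90min = {s2} -> {label}")
```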
Local Stability of AIDS Epidemic Model Through Treatment and Vertical Transmission with Time Delay
NASA Astrophysics Data System (ADS)
Novi W, Cascarilla; Lestari, Dwi
2016-02-01
This study aims to explain the stability of a model of the spread of AIDS that incorporates treatment and vertical transmission. A person infected with HIV takes time to develop AIDS; because this progression is delayed, the model obtained is a model with time delay. The model takes the form of a nonlinear differential equation with time delay, SIPTA (susceptible-infected-pre AIDS-treatment-AIDS). Analysis of the SIPTA model yields a disease-free equilibrium point and an endemic equilibrium point. The disease-free equilibrium point, with and without time delay, is locally asymptotically stable if the basic reproduction number is less than one. The endemic equilibrium point is locally asymptotically stable if the time delay is less than the critical value of the delay, unstable if the time delay is greater than the critical value, and a bifurcation occurs if the time delay equals the critical value.
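The delay-induced stability switch is easy to see in a generic delay equation. The sketch below integrates Hutchinson's delayed logistic (not the SIPTA system) with a simple Euler scheme: its equilibrium is stable for delays below the critical value π/(2r) and oscillates beyond it, mirroring the behaviour described above.

```python
# Stability switch at a critical delay, shown on the delayed logistic equation
# x'(t) = r x(t) (1 - x(t - tau)); equilibrium x = 1 is stable iff r*tau < pi/2.
import numpy as np

def simulate(r=1.0, tau=1.0, dt=0.01, t_end=80.0):
    lag = int(round(tau / dt))
    x = np.ones(int(t_end / dt) + lag) * 0.5          # history: x = 0.5 for t <= 0
    for k in range(lag, len(x) - 1):
        x[k + 1] = x[k] + dt * r * x[k] * (1.0 - x[k - lag])  # Euler step
    return x[lag:]

crit = np.pi / 2                                      # critical delay for r = 1
for tau in (0.5 * crit, 1.2 * crit):
    tail = simulate(tau=tau)[-2000:]
    print(f"tau = {tau:.2f}: amplitude of late oscillation = {np.ptp(tail):.3f}")
```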
Alignment of time-resolved data from high throughput experiments.
Abidi, Nada; Franke, Raimo; Findeisen, Peter; Klawonn, Frank
2016-12-01
To better understand the dynamics of the underlying processes in cells, it is necessary to take measurements over a time course. Modern high-throughput technologies are often used for this purpose to measure the behavior of cell products like metabolites, peptides, proteins, miRNA or mRNA at different points in time. Compared to classical time series, the number of time points is usually very limited and the measurements are taken at irregular time intervals. The main reasons for this are the cost of the experiments and the fact that the dynamic behavior usually shows a strong reaction and fast changes shortly after a stimulus and then slowly converges to a certain stable state. Another reason might simply be missing values. It is common to repeat the experiments and to have replicates in order to carry out a more reliable analysis. The ideal assumptions that the initial stimulus really started at exactly the same time for all replicates and that the replicates are perfectly synchronized are seldom satisfied. Therefore, there is a need to adjust or align the time-resolved data before further analysis is carried out. Dynamic time warping (DTW) is considered one of the common alignment techniques for time series data with equidistant time points. In this paper, we modified the DTW algorithm so that it can align sequences with measurements at different, non-equidistant time points with large gaps in between. This type of data is usually known as time-resolved data, characterized by irregular time intervals between measurements as well as non-identical time points for different replicates. The new algorithm can easily be used to align time-resolved data from high-throughput experiments and to overcome existing problems such as time scarcity and noise in the measurements. We propose a modified version of DTW that adapts to the requirements imposed by time-resolved data through the use of monotone cubic interpolation splines. Our approach provides a nonlinear alignment of two sequences that need have neither equidistant time points nor measurements at identical time points. The proposed method is evaluated with artificial as well as real data. The software is available as an R package tra (Time-Resolved data Alignment), which is freely available at: http://public.ostfalia.de/klawonn/tra.zip .
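The two named ingredients are available as standard tools, so a rough sketch is possible: monotone cubic (PCHIP) interpolation puts two replicates with non-identical, irregular time points on a common grid, and a plain DTW then aligns them. The tra package's actual algorithm differs in detail; the data below are invented.

```python
# PCHIP resampling of irregular time-resolved replicates, then plain DTW.
import numpy as np
from scipy.interpolate import PchipInterpolator

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf); D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i-1] - b[j-1]) + min(D[i-1, j], D[i, j-1], D[i-1, j-1])
    return D[n, m]

t1 = np.array([0, 0.5, 1, 2, 4, 8, 24.0])        # hours, replicate 1
y1 = np.array([0, 2.5, 3.8, 3.0, 2.0, 1.2, 1.0])
t2 = np.array([0, 1, 1.5, 3, 6, 12, 24.0])       # hours, replicate 2
y2 = np.array([0, 3.6, 3.9, 2.6, 1.6, 1.1, 1.0])

grid = np.linspace(0, 24, 97)                    # common 15-minute grid
g1 = PchipInterpolator(t1, y1)(grid)
g2 = PchipInterpolator(t2, y2)(grid)
print(f"DTW distance after PCHIP resampling: {dtw(g1, g2):.2f}")
```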
Huan, Jinliang; Wang, Lishan; Xing, Li; Qin, Xianju; Feng, Lingbin; Pan, Xiaofeng; Zhu, Ling
2014-01-01
Estrogens are known to regulate the proliferation of breast cancer cells and to alter their cytoarchitectural and phenotypic properties, but the gene networks and pathways by which estrogenic hormones regulate these events are only partially understood. We used global gene expression profiling by Affymetrix GeneChip microarray analysis, with KEGG pathway enrichment, PPI network construction, module analysis, and text mining methods to identify patterns and time courses of genes that are either stimulated or inhibited by estradiol (E2) in estrogen receptor (ER)-positive MCF-7 human breast cancer cells. Of the genes queried on the Affymetrix Human Genome U133 Plus 2.0 microarray, we identified 628 (12 h), 852 (24 h), and 880 (48 h) differentially expressed genes (DEGs) that showed a robust pattern of regulation by E2. From pathway enrichment analysis, we identified the changes in metabolic pathways in E2-treated samples at each time point. At the 12 h time point, the changes in metabolic pathways were mainly focused on pathways in cancer, focal adhesion, and the chemokine signaling pathway. At the 24 h time point, the changes were mainly enriched in neuroactive ligand-receptor interaction, cytokine-cytokine receptor interaction, and the calcium signaling pathway. At the 48 h time point, the significant pathways were pathways in cancer, regulation of the actin cytoskeleton, cell adhesion molecules (CAMs), axon guidance, and the ErbB signaling pathway. Of interest, our PPI network and module analyses found that E2 treatment induced enhancement of PRSS23 at all three time points and that PRSS23 occupied the central position of each module. Text mining results showed that the important DEGs are related to signaling pathways, such as the ErbB pathway (AREG), the Wnt pathway (NDP), the MAPK pathway (NTRK3, TH), the IP3 pathway (TRA@), and some transcription factors (TCF4, MAF). Our studies highlight the diverse gene networks and metabolic and cell-regulatory pathways through which E2 operates to achieve its widespread effects on breast cancer cells. © 2013 Elsevier B.V. All rights reserved.
Jones, Matthew; Lewis, Sarah; Parrott, Steve; Wormall, Stephen; Coleman, Tim
2016-06-01
In pregnant smoking cessation trial participants, to estimate (1) among women abstinent at the end of pregnancy, the proportion who re-start smoking at time-points afterwards (primary analysis) and (2) among all trial participants, the proportion smoking at the end of pregnancy and at selected time-points during the postpartum period (secondary analysis). Trials were identified from two Cochrane reviews plus searches of Medline and EMBASE. Twenty-seven trials were included. The included trials were randomized or quasi-randomized trials of within-pregnancy cessation interventions given to smokers who reported abstinence both at the end of pregnancy and at one or more defined time-points after birth. Outcomes were biochemically validated and self-reported continuous abstinence from smoking and 7-day point prevalence abstinence. The primary random-effects meta-analysis used longitudinal data to estimate mean pooled proportions of re-starting smoking; a secondary analysis used cross-sectional data to estimate the mean proportions smoking at different postpartum time-points. Subgroup analyses were performed on biochemically validated abstinence. The pooled mean proportion re-starting at 6 months postpartum was 43% [95% confidence interval (CI) = 16-72%, I² = 96.7%] (11 trials, 571 abstinent women). The pooled mean proportion smoking at the end of pregnancy was 87% (95% CI = 84-90%, I² = 93.2%) and 94% (95% CI = 92-96%, I² = 88%) at 6 months postpartum (23 trials, 9262 trial participants). Findings were similar when using biochemically validated abstinence. In clinical trials of smoking cessation interventions during pregnancy only 13% are abstinent at term. Of these, 43% re-start by 6 months postpartum. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
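The pooled proportions above come from a random-effects meta-analysis. As a hedged illustration of the general technique (a DerSimonian-Laird pooling of logit-transformed proportions, not necessarily the authors' exact model; the study counts below are made up):

```python
import numpy as np

def pooled_proportion_dl(events, totals):
    """Random-effects (DerSimonian-Laird) pooled proportion on the logit
    scale; returns the pooled proportion and its 95% CI, back-transformed."""
    events = np.asarray(events, float)
    totals = np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1 - p))                  # logit-transformed proportions
    v = 1 / events + 1 / (totals - events)   # approximate within-study variance
    w = 1 / v
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)          # Cochran's Q heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)                    # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    expit = lambda z: 1 / (1 + np.exp(-z))
    return expit(mu), expit(mu - 1.96 * se), expit(mu + 1.96 * se)

# Hypothetical counts of women re-starting smoking by 6 months postpartum
print(pooled_proportion_dl([20, 35, 12], [60, 70, 40]))
```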
Analysis of EUVE Experiment Results
NASA Technical Reports Server (NTRS)
Horan, Stephen
1996-01-01
A series of tests to validate an antenna pointing concept for spin-stabilized satellites using a data relay satellite are described. These tests show that proper antenna pointing on an inertially-stabilized spacecraft can lead to significant access time through the relay satellite even without active antenna pointing. We summarize the test results, the simulations to model the effects of antenna pattern and space loss, and the expected contact times. We also show how antenna beam width affects the results.
Approximation methods for combined thermal/structural design
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Shore, C. P.
1979-01-01
Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
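For reference, the two commonly used first-order expansions discussed above can be sketched as follows (with x^0 the current design and all derivatives evaluated there); the reciprocal form follows from linearizing in the variables y_i = 1/x_i:

```latex
% Direct first-order Taylor expansion about the current design x^0
f_D(x) \approx f(x^0) + \sum_i \frac{\partial f}{\partial x_i}\Big|_{x^0} \left(x_i - x_i^0\right)
% Reciprocal expansion, obtained by linearizing in y_i = 1/x_i
f_R(x) \approx f(x^0) + \sum_i \frac{\partial f}{\partial x_i}\Big|_{x^0} \frac{x_i^0}{x_i}\left(x_i - x_i^0\right)
```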
Cheung, N Wt; Cheung, Y W; Chen, X
2016-06-01
To examine the effects of a permissive attitude towards regular and occasional drug use, life satisfaction, self-esteem, depression, and other psychosocial variables on the drug use of psychoactive drug users. Psychosocial factors that might affect a permissive attitude towards regular/occasional drug use and life satisfaction were further explored. We analysed data from a sample of psychoactive drug users from a longitudinal survey of psychoactive drug abusers in Hong Kong who were interviewed at 6 time points at 6-month intervals between January 2009 and December 2011. Data from the second to the sixth time points were stacked into an individual time point structure. Random-effects probit regression analysis was performed to estimate the relative contribution of the independent variables to the binary dependent variable of drug use in the last 30 days. A permissive attitude towards drug use, life satisfaction, and depression at the concurrent time point, and self-esteem at the previous time point, had direct effects on drug use in the last 30 days. Interestingly, permissiveness to occasional drug use was a stronger predictor of drug use than permissiveness to regular drug use. These 2 permissive attitude variables were affected by the belief that doing extreme things shows the vitality of young people (at the concurrent time point), life satisfaction (at the concurrent time point), and self-esteem (at the concurrent and previous time points). Life satisfaction was affected by a sense of uncertainty about the future (at the concurrent time point), self-esteem (at the concurrent time point), depression (at both the concurrent and previous time points), and being stricken by stressful events (at the previous time point). A number of psychosocial factors could affect the continuation or discontinuation of drug use, as well as the permissive attitude towards regular and occasional drug use, and life satisfaction. Implications of the findings for prevention and intervention work targeted at psychoactive drug users are discussed.
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method
Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu
2016-01-01
A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis. PMID:28029121
Effect of processing conditions on oil point pressure of moringa oleifera seed.
Aviara, N A; Musa, W B; Owolarafe, O K; Ogunsina, B S; Oluwole, F A
2015-07-01
Seed oil expression is an important economic venture in rural Nigeria. The traditional techniques for carrying out the operation are not only energy sapping and time consuming but also wasteful. In order to reduce the tedium involved in the expression of oil from Moringa oleifera seed and to develop efficient equipment for carrying out the operation, the oil point pressure of the seed was determined under different processing conditions using a laboratory press. The processing conditions employed were moisture content (4.78, 6.00, 8.00 and 10.00 % wet basis), heating temperature (50, 70, 85 and 100 °C) and heating time (15, 20, 25 and 30 min). Results showed that the oil point pressure increased with increase in seed moisture content, but decreased with increase in heating temperature and heating time within the above ranges. The highest oil point pressure of 1.1239 MPa was obtained at the processing conditions of 10.00 % moisture content, 50 °C heating temperature and 15 min heating time. The lowest oil point pressure obtained was 0.3164 MPa, and it occurred at a moisture content of 4.78 %, heating temperature of 100 °C and heating time of 30 min. Analysis of variance (ANOVA) showed that all the processing variables and their interactions had a significant effect on the oil point pressure of Moringa oleifera seed at the 1 % level of significance. This was further demonstrated using response surface methodology (RSM). Tukey's test and Duncan's multiple range analysis successfully separated the means, and a multiple regression equation was used to express the relationship between the oil point pressure of Moringa oleifera seed and its moisture content, processing temperature, heating time and their interactions. The model yielded coefficients that enabled the oil point pressure of the seed to be predicted with a very high coefficient of determination.
GPS Position Time Series @ JPL
NASA Technical Reports Server (NTRS)
Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen
2013-01-01
Different flavors of GPS time series analysis at JPL - all use the same GPS Precise Point Positioning Analysis raw time series; variations in time series analysis/post-processing are driven by different users.
• JPL Global Time Series/Velocities - researchers studying the reference frame, combining with VLBI/SLR/DORIS
• JPL/SOPAC Combined Time Series/Velocities - crustal deformation for tectonic, volcanic, and ground water studies
• ARIA Time Series/Coseismic Data Products - hazard monitoring and response focused
• ARIA data system designed to integrate GPS and InSAR - GPS tropospheric delay used for correcting InSAR; Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR; Zhen Liu is talking tomorrow on InSAR time series analysis
Dynamic analysis of suspension cable based on vector form intrinsic finite element method
NASA Astrophysics Data System (ADS)
Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun
2017-10-01
A vector finite element method is presented for the dynamic analysis of cable structures, based on the vector form intrinsic finite element (VFIFE) method and the mechanical properties of suspension cables. First, the suspension cable is discretized into elements by space points, and the mass and external forces of the suspension cable are lumped at these space points. The structural form of the cable is described by the positions of the space points at different times. The equations of motion for the space points are established according to Newton's second law. Then, the element internal forces between the space points are derived from the flexible truss structure. Finally, the motion equations of the space points are solved by the central difference method with a reasonable time integration step. The tangential tension of the bearing rope in a test ropeway with moving concentrated loads is calculated and compared with experimental data. The results show that the tangential tension of a suspension cable with moving loads is consistent with the experimental data. The method has high computational precision and meets the requirements of engineering application.
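The central difference scheme referred to above advances each lumped point mass directly from Newton's second law. A minimal sketch (with a made-up single-spring example, not the paper's cable model):

```python
import numpy as np

def central_difference(x0, v0, force, mass, dt, n_steps):
    """Explicit central-difference time integration of lumped 'space points':
    x_{n+1} = 2 x_n - x_{n-1} + dt^2 * F(x_n, t_n) / m."""
    # Standard starting procedure: fictitious step before t = 0
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * force(x0, 0.0) / mass
    x_curr = x0.copy()
    history = [x0.copy()]
    for n in range(n_steps):
        a = force(x_curr, n * dt) / mass
        x_next = 2.0 * x_curr - x_prev + dt**2 * a
        x_prev, x_curr = x_curr, x_next
        history.append(x_curr.copy())
    return np.array(history)

# Toy example: a single point mass on a linear spring (F = -k x)
k, m = 4.0, 1.0
x0 = np.array([1.0])
v0 = np.array([0.0])
traj = central_difference(x0, v0, lambda x, t: -k * x, m, dt=0.01, n_steps=1000)
print(traj[-1])   # oscillates with angular frequency sqrt(k/m) = 2
```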
Common pitfalls in statistical analysis: The perils of multiple testing
Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc
2016-01-01
Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
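A minimal sketch of one standard remedy, the Holm step-down adjustment, which controls the family-wise error rate across multiple tests (the p-values below are made up):

```python
import numpy as np

def holm_bonferroni(pvals, alpha=0.05):
    """Holm step-down correction: controls the family-wise error rate
    across multiple tests (time-points, subgroups, end-points)."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):   # compare to alpha/m, alpha/(m-1), ...
            reject[idx] = True
        else:
            break                          # stop at the first non-significant test
    return reject

# Five end-points tested on the same dataset
print(holm_bonferroni([0.001, 0.04, 0.03, 0.20, 0.008]))
# -> [ True False False False  True ]
```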
Applied Time Domain Stability Margin Assessment for Nonlinear Time-Varying Systems
NASA Technical Reports Server (NTRS)
Kiefer, J. M.; Johnson, M. D.; Wall, J. H.; Dominguez, A.
2016-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for evaluating the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation. This technique was implemented by using the Stability Aerospace Vehicle Analysis Tool (SAVANT) computer simulation to evaluate the stability of the SLS system with the Adaptive Augmenting Control (AAC) active and inactive along its ascent trajectory. The gains for which the vehicle maintains apparent time-domain stability define the gain margins, and the time delay similarly defines the phase margin. This method of extracting the control stability margins from the time-domain simulation is relatively straightforward and the resultant margins can be compared to the linearized system results. The sections herein describe the techniques employed to extract the time-domain margins, compare the results between these nonlinear and the linear methods, and provide explanations for observed discrepancies. The SLS ascent trajectory was simulated with SAVANT and the classical linear stability margins were evaluated at one-second intervals. The linear analysis was performed with the AAC algorithm disabled to attain baseline stability margins. At each time point, the system was linearized about the current operating point using Simulink's built-in solver. Each linearized system in time was evaluated for its rigid-body gain margin (high frequency gain margin), rigid-body phase margin, and aero gain margin (low frequency gain margin) for each control axis. Using the stability margins derived from the baseline linearization approach, the time domain derived stability margins were determined by executing time domain simulations in which axis-specific incremental gain and phase adjustments were made to the nominal system about the expected neutral stability point at specific flight times. The baseline stability margin time histories were used to shift the system gain to various values around the zero margin point such that a precise amount of expected gain margin was maintained throughout flight. When assessing the gain margins, the gain was applied starting at the time point under consideration, thereafter following the variation in the margin found in the linear analysis. When assessing the rigid-body phase margin, a constant time delay was applied to the system starting at the time point under consideration. If the baseline stability margins were correctly determined via the linear analysis, the time domain simulation results should contain unstable behavior at certain gain and phase values. Examples will be shown from repeated simulations with variable added gain and phase lag. Faithfulness of the margins calculated from the linear analysis to the nonlinear system will be demonstrated.
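As an illustration of the incremental-adjustment idea (a hedged sketch, not the SAVANT tooling; `simulate` is a hypothetical wrapper around a nonlinear time-domain simulation that returns an attitude-error time history):

```python
import numpy as np

def time_domain_gain_margin(simulate, t_insert, g_lo=0.0, g_hi=20.0, tol=0.05):
    """Bisect on the added loop gain (in dB) inserted at time t_insert until
    the simulated response transitions from bounded to divergent; the
    threshold gain is the time-domain gain margin."""
    def unstable(gain_db):
        err = simulate(gain_db, t_insert)         # hypothetical simulation call
        head = np.sqrt(np.mean(err[: len(err) // 10] ** 2))
        tail = np.sqrt(np.mean(err[-(len(err) // 10):] ** 2))
        return tail > 10.0 * head                 # crude divergence test

    while g_hi - g_lo > tol:
        mid = 0.5 * (g_lo + g_hi)
        if unstable(mid):
            g_hi = mid    # instability -> margin is below this gain
        else:
            g_lo = mid    # still stable -> margin is above this gain
    return 0.5 * (g_lo + g_hi)
```

The same bisection applies to the phase margin by replacing the added gain with a constant time delay inserted at the flight time under consideration.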
Investigating the Accuracy of Point Clouds Generated for Rock Surfaces
NASA Astrophysics Data System (ADS)
Seker, D. Z.; Incekara, A. H.
2016-12-01
Point clouds produced by means of different techniques are widely used to model rocks and to obtain properties of rock surfaces such as roughness, volume and area. These point clouds can be generated by laser scanning and close range photogrammetry techniques. Laser scanning is the most common method of producing point clouds. In this method, the laser scanner device produces a 3D point cloud at regular intervals. In close range photogrammetry, a point cloud can be produced with the help of photographs taken under appropriate conditions, depending on developing hardware and software technology. Much photogrammetric software, open source or not, currently supports the generation of point clouds. The two methods are close to each other in terms of accuracy: sufficient accuracy in the mm and cm range can be obtained with a qualified digital camera or laser scanner. In both methods, field work is completed in less time than with conventional techniques. In close range photogrammetry, any part of a rock surface can be completely represented owing to overlapping oblique photographs. In contrast to the proximity of the data, the two methods are quite different in terms of cost. In this study, whether point clouds produced from photographs can be used instead of point clouds produced by a laser scanner device is investigated. For this purpose, rock surfaces with complex and irregular shapes located on the İstanbul Technical University Ayazaga Campus were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on a part of a 30 m x 10 m rock surface. 2D and 3D analyses were performed for several regions selected from the point clouds of the surface models; the 2D analysis is area-based and the 3D analysis is volume-based. The analyses showed that the point clouds from the two methods are similar and can be used as alternatives to each other. This proves that point clouds produced from photographs, which are economical and can be generated in less time, can be used in several studies instead of point clouds produced by a laser scanner.
Spatial Correlation of Solar-Wind Turbulence from Two-Point Measurements
NASA Technical Reports Server (NTRS)
Matthaeus, W. H.; Milano, L. J.; Dasso, S.; Weygand, J. M.; Smith, C. W.; Kivelson, M. G.
2005-01-01
Interplanetary turbulence, the best studied case of low frequency plasma turbulence, is the only directly quantified instance of astrophysical turbulence. Here, magnetic field correlation analysis, using for the first time only proper two-point, single time measurements, provides a key step in unraveling the space-time structure of interplanetary turbulence. Simultaneous magnetic field data from the Wind, ACE, and Cluster spacecraft are analyzed to determine the correlation (outer) scale, and the Taylor microscale near Earth's orbit.
Covic, Tanya; Pallant, Julie F; Conaghan, Philip G; Tennant, Alan
2007-01-01
Background: The aim of this study was to test the internal validity of the total Center for Epidemiologic Studies-Depression (CES-D) scale using Rasch analysis in a rheumatoid arthritis (RA) population. Methods: CES-D was administered to 157 patients with RA over three time points within a 12 month period. Rasch analysis was applied using RUMM2020 software to assess the overall fit of the model, the response scale used, individual item fit, differential item functioning (DIF) and person separation. Results: Pooled data across three time points was shown to fit the Rasch model with removal of seven items from the original 20-item CES-D scale. It was necessary to rescore the response format from four to three categories in order to improve the scale's fit. Two items demonstrated some DIF for age and gender but were retained within the 13-item CES-D scale. A new cut point for depression score of 9 was found to correspond to the original cut point score of 16 in the full CES-D scale. Conclusion: This Rasch analysis of the CES-D in a longstanding RA cohort resulted in the construction of a modified 13-item scale with good internal validity. Further validation of the modified scale is recommended, particularly in relation to the new cut point for depression. PMID:17629902
Portable Dew Point Mass Spectrometry System for Real-Time Gas and Moisture Analysis
NASA Technical Reports Server (NTRS)
Arkin, C.; Gillespie, Stacey; Ratzel, Christopher
2010-01-01
A portable instrument incorporates both mass spectrometry and dew point measurement to provide real-time, quantitative gas measurements of helium, nitrogen, oxygen, argon, and carbon dioxide, along with real-time, quantitative moisture analysis. The Portable Dew Point Mass Spectrometry (PDP-MS) system comprises a single quadrupole mass spectrometer and a high vacuum system consisting of a turbopump and a diaphragm-backing pump. A capacitive membrane dew point sensor was placed upstream of the MS, but still within the pressure-flow control pneumatic region. Pressure-flow control was achieved with an upstream precision metering valve, a capacitance diaphragm gauge, and a downstream mass flow controller. User configurable LabVIEW software was developed to provide real-time concentration data for the MS, dew point monitor, and sample delivery system pressure control, pressure and flow monitoring, and recording. The system has been designed to include in situ, NIST-traceable calibration. Certain sample tubing retains sufficient water that even if the sample is dry, the sample tube will desorb water to an amount resulting in moisture concentration errors up to 500 ppm for as long as 10 minutes. It was determined that Bev-A-Line IV was the best sample line to use. As a result of this issue, it is prudent to add a high-level humidity sensor to PDP-MS so such events can be prevented in the future.
ERIC Educational Resources Information Center
Thum, Yeow Meng; Bhattacharya, Suman Kumar
To better describe individual behavior within a system, this paper uses a sample of longitudinal test scores from a large urban school system to consider hierarchical Bayes estimation of a multilevel linear regression model in which each individual regression slope of test score on time switches at some unknown point in time, "kj."…
NASA Astrophysics Data System (ADS)
Huynh, Benjamin Q.; Antropova, Natasha; Giger, Maryellen L.
2017-03-01
DCE-MRI datasets have a temporal aspect to them, resulting in multiple regions of interest (ROIs) per subject, based on contrast time points. It is unclear how the different contrast time points vary in terms of usefulness for computer-aided diagnosis tasks in conjunction with deep learning methods. We thus sought to compare the different DCE-MRI contrast time points with regard to how well their extracted features predict response to neoadjuvant chemotherapy within a deep convolutional neural network. Our dataset consisted of 561 ROIs from 64 subjects. Each subject was categorized as a non-responder or responder, determined by recurrence-free survival. First, features were extracted from each ROI using a convolutional neural network (CNN) pre-trained on non-medical images. Linear discriminant analysis classifiers were then trained on varying subsets of these features, based on their contrast time points of origin. Leave-one-out cross validation (by subject) was used to assess performance in the task of estimating probability of response to therapy, with area under the ROC curve (AUC) as the metric. The classifier trained on features from strictly the pre-contrast time point performed the best, with an AUC of 0.85 (SD = 0.033). The remaining classifiers resulted in AUCs ranging from 0.71 (SD = 0.028) to 0.82 (SD = 0.027). Overall, we found the pre-contrast time point to be the most effective at predicting response to therapy and that including additional contrast time points moderately reduces variance.
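A hedged sketch of the evaluation loop described (leave-one-subject-out cross-validation of an LDA classifier on pre-extracted CNN features); the arrays below are random placeholders standing in for the study's data, so the printed AUC is chance-level:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score

# Hypothetical inputs: one pre-extracted CNN feature vector per ROI,
# a binary responder label per ROI, and the subject each ROI belongs to.
rng = np.random.default_rng(0)
X = rng.normal(size=(561, 512))          # stand-in for pooled CNN features
y = rng.integers(0, 2, size=561)         # responder / non-responder labels
groups = rng.integers(0, 64, size=561)   # subject ID for each ROI

# Leave-one-subject-out cross-validation of an LDA classifier
scores, labels = [], []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf = LinearDiscriminantAnalysis().fit(X[train], y[train])
    scores.extend(clf.decision_function(X[test]))
    labels.extend(y[test])
print("AUC:", roc_auc_score(labels, scores))
```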
Registered Replication Report: Rand, Greene, and Nowak (2012)
Bouwmeester, S.; Verkoeijen, P. P. J. L.; Aczel, B.; Barbosa, F.; Bègue, L.; Brañas-Garza, P.; Chmura, T. G. H.; Cornelissen, G.; Døssing, F. S.; Espín, A. M.; Evans, A. M.; Ferreira-Santos, F.; Fiedler, S.; Flegr, J.; Ghaffari, M.; Glöckner, A.; Goeschl, T.; Guo, L.; Hauser, O. P.; Hernan-Gonzalez, R.; Herrero, A.; Horne, Z.; Houdek, P.; Johannesson, M.; Koppel, L.; Kujal, P.; Laine, T.; Lohse, J.; Martins, E. C.; Mauro, C.; Mischkowski, D.; Mukherjee, S.; Myrseth, K. O. R.; Navarro-Martínez, D.; Neal, T. M. S.; Novakova, J.; Pagà, R.; Paiva, T. O.; Palfi, B.; Piovesan, M.; Rahal, R.-M.; Salomon, E.; Srinivasan, N.; Srivastava, A.; Szaszi, B.; Szollosi, A.; Thor, K. Ø.; Tinghög, G.; Trueblood, J. S.; Van Bavel, J. J.; van ‘t Veer, A. E.; Västfjäll, D.; Warner, M.; Wengström, E.; Wills, J.; Wollbrant, C. E.
2017-01-01
In an anonymous 4-person economic game, participants contributed more money to a common project (i.e., cooperated) when required to decide quickly than when forced to delay their decision (Rand, Greene & Nowak, 2012), a pattern consistent with the social heuristics hypothesis proposed by Rand and colleagues. The results of studies using time pressure have been mixed, with some replication attempts observing similar patterns (e.g., Rand et al., 2014) and others observing null effects (e.g., Tinghög et al., 2013; Verkoeijen & Bouwmeester, 2014). This Registered Replication Report (RRR) assessed the size and variability of the effect of time pressure on cooperative decisions by combining 21 separate, preregistered replications of the critical conditions from Study 7 of the original article (Rand et al., 2012). The primary planned analysis used data from all participants who were randomly assigned to conditions and who met the protocol inclusion criteria (an intent-to-treat approach that included the 65.9% of participants in the time-pressure condition and 7.5% in the forced-delay condition who did not adhere to the time constraints), and we observed a difference in contributions of −0.37 percentage points compared with an 8.6 percentage point difference calculated from the original data. Analyzing the data as the original article did, including data only for participants who complied with the time constraints, the RRR observed a 10.37 percentage point difference in contributions compared with a 15.31 percentage point difference in the original study. In combination, the results of the intent-to-treat analysis and the compliant-only analysis are consistent with the presence of selection biases and the absence of a causal effect of time pressure on cooperation. PMID:28475467
2015-08-01
optimized space-time interpolation method. A tangible geospatial modeling system was further developed to support the analysis of changing elevation surfaces.
A Multivariate Model for the Meta-Analysis of Study Level Survival Data at Multiple Times
ERIC Educational Resources Information Center
Jackson, Dan; Rollins, Katie; Coughlin, Patrick
2014-01-01
Motivated by our meta-analytic dataset involving survival rates after treatment for critical leg ischemia, we develop and apply a new multivariate model for the meta-analysis of study level survival data at multiple times. Our data set involves 50 studies that provide mortality rates at up to seven time points, which we model simultaneously, and…
Doherty, Cailbhe; Bleakley, Chris; Hertel, Jay; Caulfield, Brian; Ryan, John; Delahunt, Eamonn
2016-04-01
Impairments in motor control may predicate the paradigm of chronic ankle instability (CAI) that can develop in the year after an acute lateral ankle sprain (LAS) injury. No prospective analysis is currently available identifying the mechanisms by which these impairments develop and contribute to long-term outcome after LAS. To identify the motor control deficits predicating CAI outcome after a first-time LAS injury. Cohort study (diagnosis); Level of evidence, 2. Eighty-two individuals were recruited after sustaining a first-time LAS injury. Several biomechanical analyses were performed for these individuals, who completed 5 movement tasks at 3 time points: (1) 2 weeks, (2) 6 months, and (3) 12 months after LAS occurrence. A logistic regression analysis of several "salient" biomechanical parameters identified from the movement tasks, in addition to scores from the Cumberland Ankle Instability Tool and the Foot and Ankle Ability Measure (FAAM) recorded at the 2-week and 6-month time points, were used as predictors of 12-month outcome. At the 2-week time point, an inability to complete 2 of the movement tasks (a single-leg drop landing and a drop vertical jump) was predictive of CAI outcome and correctly classified 67.6% of cases (sensitivity, 83%; specificity, 55%; P = .004). At the 6-month time point, several deficits exhibited by the CAI group during 1 of the movement tasks (reach distances and sagittal plane joint positions at the hip, knee and ankle during the posterior reach directions of the Star Excursion Balance Test) and their scores on the activities of daily living subscale of the FAAM were predictive of outcome and correctly classified 84.8% of cases (sensitivity, 75%; specificity, 91%; P < .001). An inability to complete jumping and landing tasks within 2 weeks of a first-time LAS and poorer dynamic postural control and lower self-reported function 6 months after a first-time LAS were predictive of eventual CAI outcome. © 2016 The Author(s).
2009-04-01
equation. The Podoll and Parish low temperature measured vapor pressure data (-35 and -25 °C) were included in our analysis. Penski summarized the existing literature data for GB in his 1994 data review and analysis.6 He did not include the 0 °C Podoll and Parish measured vapor pressure data point (35.9 Pa) in his analysis because the error associated with this point was up to 10 times greater than the other values. He did not include the -10 °C
About plasma points' generation in Z-pinch
NASA Astrophysics Data System (ADS)
Afonin, V. I.; Potapov, A. V.; Lazarchuk, V. P.; Murugov, V. M.; Senik, A. V.
1997-05-01
Streak tube study results (in the visible and x-ray ranges) of the dynamics of a fast Z-pinch formed at the explosion of a metal wire in the diode of a high-current generator are presented. The amplitude of the current in the load reached ~180 kA with a rise time of ~50 ns. Analysis of the results points to the capability of controlling the hot plasma point generation process in the Z-pinch.
Efficient Analysis of Mass Spectrometry Data Using the Isotope Wavelet
NASA Astrophysics Data System (ADS)
Hussong, Rene; Tholey, Andreas; Hildebrandt, Andreas
2007-09-01
Mass spectrometry (MS) has become today's de-facto standard for high-throughput analysis in proteomics research. Its applications range from toxicity analysis to MS-based diagnostics. Often, the time spent on the MS experiment itself is significantly less than the time necessary to interpret the measured signals, since the amount of data can easily exceed several gigabytes. In addition, automated analysis is hampered by baseline artifacts, chemical as well as electrical noise, and an irregular spacing of data points. Thus, filtering techniques originating from signal and image analysis are commonly employed to address these problems. Unfortunately, smoothing, base-line reduction, and in particular a resampling of data points can affect important characteristics of the experimental signal. To overcome these problems, we propose a new family of wavelet functions based on the isotope wavelet, which is hand-tailored for the analysis of mass spectrometry data. The resulting technique is theoretically well-founded and compares very well with standard peak picking tools, since it is highly robust against noise spoiling the data, but at the same time sufficiently sensitive to detect even low-abundant peptides.
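For flavor, a hedged sketch of generic wavelet-based peak picking using SciPy's find_peaks_cwt (which correlates the signal with a standard Ricker wavelet at several widths, not the isotope wavelet proposed here), on a synthetic noisy spectrum:

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic spectrum: three Gaussian peaks on a noisy, drifting baseline
mz = np.linspace(1000, 1010, 2000)
signal = sum(h * np.exp(-0.5 * ((mz - c) / 0.02) ** 2)
             for c, h in [(1002.0, 1.0), (1002.5, 0.8), (1005.0, 0.5)])
signal += 0.1 * np.sin(mz) + np.random.default_rng(1).normal(0, 0.03, mz.size)

# Correlating with wavelets of several widths suppresses noise and the
# slowly varying baseline without an explicit smoothing/baseline step
peak_idx = find_peaks_cwt(signal, widths=np.arange(2, 12))
print(mz[peak_idx])   # approximate m/z locations of the detected peaks
```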
NASA Astrophysics Data System (ADS)
Nassiri, Isar; Lombardo, Rosario; Lauria, Mario; Morine, Melissa J.; Moyseos, Petros; Varma, Vijayalakshmi; Nolen, Greg T.; Knox, Bridgett; Sloper, Daniel; Kaput, Jim; Priami, Corrado
2016-07-01
The investigation of the complex processes involved in cellular differentiation must be based on unbiased, high throughput data processing methods to identify relevant biological pathways. A number of bioinformatics tools are available that can generate lists of pathways ranked by statistical significance (i.e. by p-value), while ideally it would be desirable to functionally score the pathways relative to each other or to other interacting parts of the system or process. We describe a new computational method (Network Activity Score Finder - NASFinder) to identify tissue-specific, omics-determined sub-networks and the connections with their upstream regulator receptors to obtain a systems view of the differentiation of human adipocytes. Adipogenesis of human SBGS pre-adipocyte cells in vitro was monitored with a transcriptomic data set comprising six time points (0, 6, 48, 96, 192, 384 hours). To elucidate the mechanisms of adipogenesis, NASFinder was used to perform time-point analysis by comparing each time point against the control (0 h) and time-lapse analysis by comparing each time point with the previous one. NASFinder identified the coordinated activity of seemingly unrelated processes between each comparison, providing the first systems view of adipogenesis in culture. NASFinder has been implemented into a web-based, freely available resource associated with novel, easy to read visualization of omics data sets and network modules.
Statistical aspects of point count sampling
Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
Recurrence Density Enhanced Complex Networks for Nonlinear Time Series Analysis
NASA Astrophysics Data System (ADS)
Costa, Diego G. De B.; Reis, Barbara M. Da F.; Zou, Yong; Quiles, Marcos G.; Macau, Elbert E. N.
We introduce a new method, entitled Recurrence Density Enhanced Complex Network (RDE-CN), to properly analyze nonlinear time series. Our method first transforms a recurrence plot into a figure with a reduced number of points that preserves the main and fundamental recurrence properties of the original plot. This resulting figure is then reinterpreted as a complex network, which is further characterized by network statistical measures. We illustrate the computational power of the RDE-CN approach on time series from both the logistic map and experimental fluid flows, which show that our method distinguishes different dynamics as well as traditional recurrence analysis. Therefore, the proposed methodology characterizes the recurrence matrix adequately, while using a reduced set of points from the original recurrence plots.
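The recurrence plot that RDE-CN starts from is straightforward to compute. A minimal sketch for a logistic-map series (the threshold eps = 0.1 is an arbitrary choice for illustration; the density-based point reduction and network step are not shown):

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix R[i, j] = 1 when |x_i - x_j| < eps."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix
    return (d < eps).astype(int)

# Time series from the logistic map in its chaotic regime (r = 4)
n, r = 500, 4.0
x = np.empty(n)
x[0] = 0.4
for i in range(1, n):
    x[i] = r * x[i - 1] * (1 - x[i - 1])

R = recurrence_matrix(x, eps=0.1)
print("recurrence density:", R.mean())
```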
Júnez-Ferreira, H E; Herrera, G S
2013-04-01
This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method that selects the space-time point that minimizes a function of the variance, in each step, is used. We demonstrate the methodology applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer with the objective of selecting from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information of 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that from the existing monitoring program that consists of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
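A hedged sketch of the sequential step described, in which each iteration picks the candidate space-time measurement whose Kalman update most reduces the total estimation variance (the covariance and candidate set below are toy stand-ins for the geostatistically derived space-time covariance):

```python
import numpy as np

def sequential_selection(P, H, r, n_pick):
    """Greedy space-time design: repeatedly choose the candidate measurement
    (row h of H) whose Kalman update most reduces trace(P); P is the prior
    space-time covariance, r the measurement-error variance."""
    chosen = []
    for _ in range(n_pick):
        best, best_trace, P_best = None, np.inf, None
        for i in range(H.shape[0]):
            if i in chosen:
                continue
            h = H[i]
            K = P @ h / (h @ P @ h + r)          # Kalman gain for candidate i
            P_new = P - np.outer(K, h @ P)       # updated covariance
            if np.trace(P_new) < best_trace:
                best, best_trace, P_best = i, np.trace(P_new), P_new
        chosen.append(best)
        P = P_best
    return chosen, P

# Toy prior covariance over 6 hypothetical space-time monitoring points
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
P0 = A @ A.T + 6 * np.eye(6)
H = np.eye(6)                 # each candidate observes one point directly
print(sequential_selection(P0, H, r=0.5, n_pick=3)[0])
```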
Muguruma, Masako; Nishimura, Jihei; Jin, Meilan; Kashida, Yoko; Moto, Mitsuyoshi; Takahashi, Miwa; Yokouchi, Yusuke; Mitsumori, Kunitoshi
2006-12-07
Piperonyl butoxide (PBO), alpha-[2-(2-butoxyethoxy)ethoxy]-4,5-methylene-dioxy-2-propyltoluene, is widely used as a synergist for pyrethrins. In order to clarify the possible mechanism of the non-genotoxic hepatocarcinogenesis induced by PBO, molecular pathological analyses consisting of low-density microarray analysis and real-time reverse transcriptase (RT)-PCR were performed in male ICR mice fed a basal powdered diet containing 6000 or 0 ppm PBO for 1, 4, or 8 weeks. The animals were sacrificed at weeks 1, 4, and 8, and the livers were histopathologically examined and analyzed for gene expression using the microarray at weeks 1 and 4, followed by real-time RT-PCR at each time point. Reactive oxygen species (ROS) products were also measured using liver microsomes. At each time point, the hepatocytes of PBO-treated mice showed centrilobular hypertrophy and increased lipofuscin deposition in Schmorl staining. ROS products were significantly increased in the liver microsomes of PBO-treated mice. In the microarray analysis, oxidative and metabolic stress-related genes - cytochrome P450 (Cyp) 1A1, Cyp2A5 (week 1 only), Cyp2B9, Cyp2B10, and NADPH-cytochrome P450 oxidoreductase (Por) - were over-expressed in mice given PBO at weeks 1 and 4. The fluctuations of these genes were confirmed by real-time RT-PCR in PBO-treated mice at each time point. In additional real-time RT-PCR, the expression of the Cyclin D1 gene, a key regulator of cell-cycle progression, and the Xrcc5 gene, a DNA damage repair-related gene, was significantly increased at each time point and at week 8, respectively. These results suggest the possibility that PBO has the potential to generate ROS via the metabolic pathway and to induce oxidative stress, including oxidative DNA damage, resulting in the induction of hepatocellular tumors in mice.
Background/Question/Methods Bacterial pathogens in surface water present disease risks to aquatic communities and for human recreational activities. Sources of these pathogens include runoff from urban, suburban, and agricultural point and non-point sources, but hazardous micr...
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2016-12-01
In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℜ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with the activation functions introduced in this paper can generate greater storage capacity than the ones with Mexican-hat-type activation functions. Numerical simulations are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
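A hedged sketch of the selection principle (D-optimality: choose the subset of time points whose sensitivity rows maximize det(J^T J), thereby minimizing the generalized variance of the parameter estimates); the bi-exponential sensitivity model below is an illustrative stand-in, not the paper's FLIM-FRET decay model:

```python
import numpy as np
from itertools import combinations

def d_optimal_subset(J, k):
    """Exhaustive D-optimal design: pick the k time points (rows of the
    sensitivity matrix J) maximizing det(J_S^T J_S)."""
    best_set, best_det = None, -np.inf
    for S in combinations(range(J.shape[0]), k):
        M = J[list(S)].T @ J[list(S)]     # Fisher information (up to noise scale)
        d = np.linalg.det(M)
        if d > best_det:
            best_set, best_det = S, d
    return best_set, best_det

# Sensitivities of a bi-exponential decay w.r.t. its two lifetimes,
# evaluated on a coarse grid of candidate gate times (illustrative only)
t = np.linspace(0.2, 10, 20)
tau1, tau2 = 0.5, 3.0
J = np.column_stack([t / tau1**2 * np.exp(-t / tau1),
                     t / tau2**2 * np.exp(-t / tau2)])
print(d_optimal_subset(J, k=4)[0])   # indices of the 4 most informative times
```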
NASA Technical Reports Server (NTRS)
Panda, J.; Roozeboom, N. H.; Ross, J. C.
2016-01-01
The recent advancement in fast-response Pressure-Sensitive Paint (PSP) allows time-resolved measurements of unsteady pressure fluctuations from a dense grid of spatial points on a wind tunnel model. This capability allows for direct calculation of the wavenumber-frequency (k-ω) spectrum of pressure fluctuations. Such data, useful for the vibro-acoustics analysis of aerospace vehicles, are difficult to obtain otherwise. For the present work, time histories of pressure fluctuations on a flat plate subjected to vortex shedding from a rectangular bluff-body were measured using PSP. The light intensity levels in the photographic images were then converted to instantaneous pressure histories by applying calibration constants, which were calculated from a few dynamic pressure sensors placed at selective points on the plate. Fourier transform of the time-histories from a large number of spatial points provided k-ω spectra for the pressure fluctuations. The data provide a first glimpse into the possibility of creating detailed forcing functions for vibro-acoustics analysis of aerospace vehicles, albeit for a limited frequency range.
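A hedged sketch of the computation (a 2-D FFT over the space-time pressure array; the convecting-wave test signal and sampling rates below are synthetic choices for illustration):

```python
import numpy as np

def k_omega_spectrum(p, dx, dt):
    """Wavenumber-frequency spectrum of a space-time pressure field
    p[i, j] = p(x_i, t_j) via a 2-D FFT; returns the power and the
    k and omega axes (rad/m, rad/s)."""
    nx, nt = p.shape
    P = np.fft.fftshift(np.fft.fft2(p)) / (nx * nt)
    k = np.fft.fftshift(np.fft.fftfreq(nx, d=dx)) * 2 * np.pi
    w = np.fft.fftshift(np.fft.fftfreq(nt, d=dt)) * 2 * np.pi
    return np.abs(P) ** 2, k, w

# Synthetic convecting pressure pattern p = cos(k0 x - w0 t)
x = np.arange(128) * 0.01                  # 128 "PSP pixels", 1 cm apart
t = np.arange(1024) * 1e-4                 # 10 kHz frame rate
k0, w0 = 2 * np.pi * 20, 2 * np.pi * 500   # 20 cycles/m at 500 Hz
p = np.cos(k0 * x[:, None] - w0 * t[None, :])
S, k, w = k_omega_spectrum(p, dx=0.01, dt=1e-4)
i, j = np.unravel_index(np.argmax(S), S.shape)
# Dominant pair: |k|/2pi ~ 20 cycles/m, |w|/2pi ~ 500 Hz
print(k[i] / (2 * np.pi), w[j] / (2 * np.pi))
```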
Phase transition of charged-AdS black holes and quasinormal modes: A time domain analysis
NASA Astrophysics Data System (ADS)
Chabab, M.; El Moumni, H.; Iraoui, S.; Masmar, K.
2017-10-01
In this work, we investigate the time evolution of a massless scalar perturbation around small and large RN-AdS4 black holes for the purpose of probing the thermodynamic phase transition. We show that below the critical point the scalar perturbation decays faster with increasing black hole size, in both the small and large black hole phases. Our analysis of the time profile of the quasinormal modes reveals a sharp distinction between the behaviors of the two phases, providing a reliable tool to probe the black hole phase transition. At the critical point P=Pc, however, as the black hole size increases, we note that the damping time increases and the perturbation decays faster, while the oscillation frequencies rise in both the small and large black hole phases. In this case the time evolution approach fails to track the AdS4 black hole phase transition.
Interpolation of longitudinal shape and image data via optimal mass transport
NASA Astrophysics Data System (ADS)
Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen
2014-03-01
Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition from two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.
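For reference, a hedged sketch of the underlying formulation (the quadratic-cost Monge problem and the displacement interpolation it induces; the notation is generic, not the paper's):

```latex
% Monge formulation: find the mass-preserving map T pushing the
% intensity/shape density \mu_0 (time point 1) onto \mu_1 (time point 2)
\min_{T \,:\, T_{\#}\mu_0 = \mu_1} \int \lVert T(x) - x \rVert^2 \, d\mu_0(x)
% The induced displacement interpolation gives the intermediate densities
\mu_t = \big( (1-t)\,\mathrm{id} + t\,T \big)_{\#} \mu_0, \qquad t \in [0,1]
```

Values of t slightly outside [0,1] correspond to the short extrapolation in time mentioned above.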
He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing
2017-11-02
Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time points in genetically improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for each analyzed trait separately via an adaptively penalized likelihood criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of order three for body weight (BWE) and body length (BL) and of order two for body depth (BD). Using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only the heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between growth time points exceeded 0.5, whether for single or pairwise time points. Moreover, correlations between early and late growth time points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of comprehensive selection for BWE and the main morphological traits.
Autoregressive-model-based missing value estimation for DNA microarray time series data.
Choong, Miew Keen; Charbit, Maurice; Yan, Hong
2009-01-01
Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experiment results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
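A hedged, much-simplified sketch of the core idea (per-gene AR(p) fitting by least squares and one-step-ahead prediction of an entirely missing time point; ARLSimpute itself additionally exploits local similarity structures across genes):

```python
import numpy as np

def ar_impute_column(X, j, p=2):
    """Estimate an entirely missing time point (column j) of a gene-by-time
    matrix X: fit an AR(p) model per gene on the preceding columns by least
    squares, then predict one step ahead."""
    est = np.empty(X.shape[0])
    for g in range(X.shape[0]):
        x = X[g, :j]                      # observed history of gene g
        # Lagged design matrix for x_t ~ x_{t-1}, ..., x_{t-p}
        A = np.column_stack([x[p - i : len(x) - i] for i in range(1, p + 1)])
        b = x[p:]
        a, *_ = np.linalg.lstsq(A, b, rcond=None)   # AR coefficients
        est[g] = a @ x[-1 : -p - 1 : -1]            # one-step-ahead forecast
    return est

# Toy data: 3 "genes", 10 time points, last column treated as missing
rng = np.random.default_rng(3)
t = np.arange(10)
X = np.vstack([np.sin(0.5 * t + c) + 0.05 * rng.normal(size=10) for c in (0, 1, 2)])
print(ar_impute_column(X, j=9), X[:, 9])   # estimates vs. true values
```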
High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis
Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher
2015-01-01
Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m2 exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (rmse) with as few as six control points. Rmse decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.
NASA Technical Reports Server (NTRS)
Lutchke, Scott B.; Rowlands, David D.; Harding, David J.; Bufton, Jack L.; Carabajal, Claudia C.; Williams, Teresa A.
2003-01-01
On January 12, 2003 the Ice, Cloud and land Elevation Satellite (ICESat) was successfully placed into orbit. The ICESat mission carries the Geoscience Laser Altimeter System (GLAS), which consists of three near-infrared lasers that operate at 40 short pulses per second. The instrument has collected precise elevation measurements of the ice sheets, sea ice roughness and thickness, ocean and land surface elevations, and surface reflectivity. The accurate geolocation of GLAS's surface returns, the spots from which the laser energy reflects on the Earth's surface, is a critical issue in the scientific application of these data. Pointing, ranging, timing and orbit errors must be compensated to accurately geolocate the laser altimeter surface returns. Towards this end, the laser range observations can be fully exploited in an integrated residual analysis to accurately calibrate these geolocation/instrument parameters. Early mission ICESat data have been simultaneously processed as direct altimetry from ocean sweeps along with dynamic crossovers, resulting in a preliminary calibration of laser pointing, ranging and timing. The calibration methodology and early mission analysis results are summarized in this paper along with future calibration activities.
Chen, Song; Li, Xuena; Chen, Meijie; Yin, Yafu; Li, Na; Li, Yaming
2016-10-01
This study aimed to compare the diagnostic power of quantitative analysis and visual analysis with single time point imaging (STPI) PET/CT and dual time point imaging (DTPI) PET/CT for the classification of solitary pulmonary nodule (SPN) lesions in granuloma-endemic regions. SPN patients who received early and delayed (18)F-FDG PET/CT at 60 min and 180 min post-injection were retrospectively reviewed. Diagnoses were confirmed by pathological results or follow-up. Three quantitative metrics, early SUVmax, delayed SUVmax and the retention index (the percentage change between the early SUVmax and delayed SUVmax), were measured for each lesion. Three 5-point scale scores were given by blinded interpretations performed by physicians based on STPI PET/CT images, DTPI PET/CT images and CT images, respectively. ROC analysis was performed on the three quantitative metrics and the three visual interpretation scores. One hundred forty-nine patients were retrospectively included. The areas under the curve (AUC) of the ROC curves for early SUVmax, delayed SUVmax, RI, the STPI PET/CT score, the DTPI PET/CT score and the CT score were 0.73, 0.74, 0.61, 0.77, 0.75 and 0.76, respectively. There were no significant differences between the AUCs for visual interpretation of STPI PET/CT images and DTPI PET/CT images, nor for early SUVmax and delayed SUVmax. The differences in sensitivity, specificity and accuracy between STPI PET/CT and DTPI PET/CT were not significant in either quantitative analysis or visual interpretation. In granuloma-endemic regions, DTPI PET/CT did not offer significant improvement over STPI PET/CT in differentiating malignant SPNs in either quantitative analysis or visual interpretation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Influencing Factors of the Initiation Point in the Parachute-Bomb Dynamic Detonation System
NASA Astrophysics Data System (ADS)
Qizhong, Li; Ye, Wang; Zhongqi, Wang; Chunhua, Bai
2017-12-01
The parachute system has been widely applied in modern armament design, especially for fuel-air explosives. Because detonation of fuel-air explosives occurs during flight, it is necessary to investigate the influences on the initiation point to ensure successful dynamic detonation. In practice, the initiation position is scattered over an area within the fuel, owing to errors in the influencing factors. In this paper, the major factors influencing the initiation point were explored through airdrop tests, and the relationship between the initiation point area and these factors was obtained. Based on this relationship, a volume equation of the initiation point area was established to predict the range of the initiation point in the fuel. The analysis showed that the area in which the initiation point appeared was scattered on account of errors in the attitude angle, the secondary initiation charge velocity, and the delay time. The attitude angle was the major influencing factor on the horizontal axis, whereas the secondary initiation charge velocity and the delay time were the major influencing factors on the vertical axis. Overall, the geometries of the initiation point area were sectors, coupled with the errors of the attitude angle, secondary initiation charge velocity, and delay time.
Elastohydrodynamic lubrication theory
NASA Technical Reports Server (NTRS)
Hamrock, B. J.; Dowson, D.
1982-01-01
The isothermal elastohydrodynamic lubrication (EHL) of a point contact was analyzed numerically by simultaneously solving the elasticity and Reynolds equations. In the elasticity analysis the contact zone was divided into equal rectangular areas, and it was assumed that a uniform pressure was applied over each area. In the numerical analysis of the Reynolds equation, a φ analysis (where φ = p·h^(3/2), i.e., the pressure times the film thickness to the 3/2 power) was used to help the relaxation process. The EHL point contact analysis is applicable for the entire range of elliptical parameters and is valid for any combination of rolling and sliding within the contact.
Topological photonic crystal with ideal Weyl points
NASA Astrophysics Data System (ADS)
Wang, Luyang; Jian, Shao-Kai; Yao, Hong
Weyl points in three-dimensional photonic crystals behave as monopoles of Berry flux in momentum space. Here, based on symmetry analysis, we show that a minimal number of symmetry-related Weyl points can be realized in time-reversal invariant photonic crystals. We propose to realize these "ideal" Weyl points in modified double-gyroid photonic crystals, which is confirmed by our first-principle photonic band-structure calculations. Photonic crystals with ideal Weyl points are qualitatively advantageous in applications such as angular and frequency selectivity, broadband invisibility cloaking, and broadband 3D-imaging.
NASA Astrophysics Data System (ADS)
Saberi, Elaheh; Reza Hejazi, S.
2018-02-01
In the present paper, Lie point symmetries of the time-fractional generalized Hirota-Satsuma coupled KdV (HS-cKdV) system based on the Riemann-Liouville derivative are obtained. Using the derived Lie point symmetries, we obtain similarity reductions and conservation laws of the considered system. Finally, some analytic solutions are furnished by means of the invariant subspace method in the Caputo sense.
Negatively valenced expectancy violation predicts emotionality: A longitudinal analysis.
Bettencourt, B Ann; Manning, Mark
2016-09-01
We hypothesized that negatively valenced expectancy violations about the quality of one's life would predict negative emotionality. We tested this hypothesis in a 4-wave longitudinal study of breast cancer survivors. The findings showed that higher levels of negatively valenced expectancy violation at earlier time points were associated with greater negative emotionality at later time points. Implications of the findings are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Motion illusions in optical art presented for long durations are temporally distorted.
Nather, Francisco Carlos; Mecca, Fernando Figueiredo; Bueno, José Lino Oliveira
2013-01-01
Static figurative images implying human body movements observed for shorter and longer durations affect the perception of time. This study examined whether images of static geometric shapes would affect the perception of time. Undergraduate participants observed two Optical Art paintings by Bridget Riley for 9 or 36 s (groups G9 and G36, respectively). Paintings implying different intensities of movement (2.0- and 6.0-point stimuli) were randomly presented. The prospective paradigm in the reproduction method was used to record time estimations. Data analysis did not show time distortions in the G9 group. In the G36 group the paintings were perceived differently: exposure to the 2.0-point painting was estimated to be shorter than exposure to the 6.0-point one. Also for G36, the 2.0-point painting was underestimated in comparison with the actual time of exposure. Motion illusions in static images affected time estimation according to the attention given to the complexity of movement by the observer, probably leading to changes in the storage velocity of internal clock pulses.
Connections between Transcription Downstream of Genes and cis-SAGe Chimeric RNA.
Chwalenia, Katarzyna; Qin, Fujun; Singh, Sandeep; Tangtrongstittikul, Panjapon; Li, Hui
2017-11-22
cis-Splicing between adjacent genes (cis-SAGe) is being recognized as one way to produce chimeric fusion RNAs. However, its detailed mechanism is not clear. A recent study revealed induction of transcription downstream of genes (DoGs) under osmotic stress. Here, we investigated the influence of osmotic stress on cis-SAGe chimeric RNAs and their connection to DoGs. We found an absence of induction of at least some cis-SAGe fusions and/or their corresponding DoGs at early time point(s). In fact, these DoGs and their cis-SAGe fusions are inversely correlated. This negative correlation changed to positive at a later time point. These results suggest a direct competition between the two categories of transcripts when the total pool of readthrough transcripts is limited at an early time point. At a later time point, DoGs and corresponding cis-SAGe fusions are both induced, indicating that total readthrough transcripts become more abundant. Finally, we observed overall enhancement of cis-SAGe chimeric RNAs in KCl-treated samples by RNA-Seq analysis.
Lung function in type 2 diabetes: the Normative Aging Study.
Litonjua, Augusto A; Lazarus, Ross; Sparrow, David; Demolles, Debbie; Weiss, Scott T
2005-12-01
Cross-sectional studies have noted that subjects with diabetes have lower lung function than non-diabetic subjects. We conducted this analysis to determine whether diabetic subjects have different rates of lung function change compared with non-diabetic subjects. We conducted a nested case-control analysis in 352 men who developed diabetes and 352 non-diabetic subjects in a longitudinal observational study of aging in men. We assessed lung function among cases and controls at three time points: Time0, prior to meeting the definition of diabetes; Time1, the point when the definition of diabetes was met; and Time2, the most recent follow-up exam. Cases had lower forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC) at all time points, even with adjustment for age, height, weight, and smoking. In multiple linear regression models adjusting for relevant covariates, there were no differences in rates of FEV1 or FVC change over time between cases and controls. Men who are predisposed to develop diabetes have decreased lung function many years prior to the diagnosis, compared with men who do not develop diabetes. This decrement in lung function remains after the development of diabetes. We postulate that mechanisms involved in the insulin resistant state contribute to the diminished lung function observed in our subjects.
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
ERIC Educational Resources Information Center
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
ERIC Educational Resources Information Center
Park, Sanghoon
2017-01-01
This paper reports the findings of a comparative analysis of online learner behavioral interactions, time-on-task, attendance, and performance at different points throughout a semester (beginning, during, and end) based on two online courses: one course offering authentic discussion-based learning activities and the other course offering authentic…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, V; Ruan, D; Nguyen, D
Purpose: To test the potential of early Glioblastoma Multiforme (GBM) recurrence detection utilizing image texture pattern analysis in serial MR images post primary treatment intervention. Methods: MR image-sets of six time points prior to the confirmed recurrence diagnosis of a GBM patient were included in this study, with each time point containing T1 pre-contrast, T1 post-contrast, T2-Flair, and T2-TSE images. Eight Gray-level co-occurrence matrix (GLCM) texture features including Contrast, Correlation, Dissimilarity, Energy, Entropy, Homogeneity, Sum-Average, and Variance were calculated from all images, resulting in a total of 32 features at each time point. A confirmed recurrent volume was contoured, along with an adjacent non-recurrent region-of-interest (ROI), and both volumes were propagated to all prior time points via deformable image registration. A support vector machine (SVM) with radial-basis-function kernels was trained on the latest time point prior to the confirmed recurrence to construct a model for recurrence classification. The SVM model was then applied to all prior time points and the volumes classified as recurrence were obtained. Results: An increase in classified volume was observed over time as expected. The size of classified recurrence maintained at a stable level of approximately 0.1 cm³ up to 272 days prior to confirmation. Noticeable volume increase to 0.44 cm³ was demonstrated at 96 days prior, followed by significant increase to 1.57 cm³ at 42 days prior. Visualization of the classified volume shows the merging of recurrence-susceptible regions as the volume change became noticeable. Conclusion: Image texture pattern analysis in serial MR images appears to be sensitive to detecting recurrent GBM a long time before the recurrence is confirmed by a radiologist. The early detection may improve the efficacy of targeted intervention including radiosurgery. More patient cases will be included to create a generalizable classification model applicable to a larger patient cohort. NIH R43CA183390 and R01CA188300. NSF Graduate Research Fellowship DGE-1144087.
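The texture-classification pipeline described above (GLCM features feeding an RBF-kernel SVM) is straightforward to prototype; the following is a minimal sketch assuming scikit-image and scikit-learn, covering only a subset of the eight features (entropy computed by hand), with hypothetical training arrays.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch, levels=32):
    """Quantize a 2D image patch and extract a few GLCM texture features."""
    q = ((patch - patch.min()) / (np.ptp(patch) + 1e-9) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "correlation", "dissimilarity", "energy", "homogeneity"]
    feats = [graycoprops(glcm, p).mean() for p in props]
    p_nz = glcm[glcm > 0]
    feats.append(-np.sum(p_nz * np.log2(p_nz)))  # entropy, computed by hand
    return np.array(feats)

# Train on the latest pre-recurrence time point, then apply to earlier scans.
clf = SVC(kernel="rbf", gamma="scale")
# X = np.vstack([glcm_features(p) for p in roi_patches])  # hypothetical patches
# clf.fit(X, labels)  # labels: 1 = recurrent ROI, 0 = non-recurrent ROI
```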
Unsteady three-dimensional thermal field prediction in turbine blades using nonlinear BEM
NASA Technical Reports Server (NTRS)
Martin, Thomas J.; Dulikravich, George S.
1993-01-01
A time-and-space accurate and computationally efficient fully three dimensional unsteady temperature field analysis computer code has been developed for truly arbitrary configurations. It uses boundary element method (BEM) formulation based on an unsteady Green's function approach, multi-point Gaussian quadrature spatial integration on each panel, and a highly clustered time-step integration. The code accepts either temperatures or heat fluxes as boundary conditions that can vary in time on a point-by-point basis. Comparisons of the BEM numerical results and known analytical unsteady results for simple shapes demonstrate very high accuracy and reliability of the algorithm. An example of computed three dimensional temperature and heat flux fields in a realistically shaped internally cooled turbine blade is also discussed.
Study on the stability of adrenaline and on the determination of its acidity constants
NASA Astrophysics Data System (ADS)
Corona-Avendaño, S.; Alarcón-Angeles, G.; Rojas-Hernández, A.; Romero-Romo, M. A.; Ramírez-Silva, M. T.
2005-01-01
In this work, the results are presented concerning the influence of time on the spectral behaviour of adrenaline (C9H13NO3) (AD) and the determination of its acidity constants by means of spectrophotometric titrations and point-by-point analysis, using for the latter freshly prepared samples for each analysis at every single pH. As the catecholamines are sensitive to light, all samples were protected against it during the course of the experiments. Each method rendered four acidity constants, each corresponding to one of the four acid protons belonging to the functional groups present in the molecule; for the point-by-point analysis the values found were: log β1 = 38.25 ± 0.21, log β2 = 29.65 ± 0.17, log β3 = 21.01 ± 0.14, log β4 = 11.34 ± 0.071.
Cross-correlation of point series using a new method
NASA Technical Reports Server (NTRS)
Strothers, Richard B.
1994-01-01
Traditional methods of cross-correlation of two time series do not apply to point time series. Here, a new method, devised specifically for point series, utilizes a correlation measure that is based on the rms difference (or, alternatively, the median absolute difference) between nearest neighbors in overlapped segments of the two series. Error estimates for the observed locations of the points, as well as a systematic shift of one series with respect to the other to accommodate a constant, but unknown, lead or lag, are easily incorporated into the analysis using Monte Carlo techniques. A methodological restriction adopted here is that one series be treated as a template series against which the other, called the target series, is cross-correlated. To estimate a significance level for the correlation measure, the adopted null hypothesis is that the target series arises from a homogeneous Poisson process. The new method is applied to cross-correlating the times of the greatest geomagnetic storms with the times of maximum in the undecennial solar activity cycle.
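A minimal sketch of the nearest-neighbor correlation measure described here, under the stated template/target convention; the Monte Carlo error propagation and the Poisson-null significance test are omitted, and the function names are our own.

```python
import numpy as np

def rms_nn_difference(template, target, lag=0.0):
    """rms difference between each lag-shifted target point and its
    nearest neighbor in the template (smaller = stronger correlation)."""
    t = np.sort(np.asarray(template, dtype=float))
    s = np.asarray(target, dtype=float) + lag
    i = np.clip(np.searchsorted(t, s), 1, len(t) - 1)
    nearest = np.where(np.abs(s - t[i - 1]) <= np.abs(s - t[i]), t[i - 1], t[i])
    return np.sqrt(np.mean((s - nearest) ** 2))

def best_lag(template, target, lags):
    """Scan candidate lead/lag shifts; the minimum marks the best alignment."""
    scores = [rms_nn_difference(template, target, lag) for lag in lags]
    k = int(np.argmin(scores))
    return lags[k], scores[k]
```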
Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl
2014-01-01
Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated both in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using the 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach proved to have higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
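For orientation, the difference between the two approaches can be reduced to how a linear sensor model, glucose = slope × current + intercept, is fitted; the sketch below is purely illustrative and is not the published SCGM1 algorithm — the nominal sensitivity and all variable names are assumptions.

```python
def two_point_calibration(i1, g1, i2, g2):
    """Fit slope and intercept from two paired (current, reference glucose) samples."""
    slope = (g2 - g1) / (i2 - i1)
    return slope, g1 - slope * i1

def one_point_calibration(i1, g1, nominal_sensitivity):
    """Assume a nominal sensitivity; fit only the corrective intercept (CI)."""
    return nominal_sensitivity, g1 - nominal_sensitivity * i1

def to_glucose(current, slope, intercept):
    # With a zero sensor background current (as the abstract concludes),
    # no current offset is subtracted before applying the linear model.
    return slope * current + intercept
```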
NASA Astrophysics Data System (ADS)
Lee, Hyeon-Guck; Hong, Seong-Jong; Cho, Jae-Hwan; Han, Man-Seok; Kim, Tae-Hyung; Lee, Ik-Han
2013-02-01
The purpose of this study was to assess and compare the changes in the SUV (standardized uptake value), the 18F-FDG (18F-fluorodeoxyglucose) uptake pattern, and the radioactivity level for the diagnosis of thyroid cancer via dual-time-point 18F-FDG PET/CT (positron emission tomography/computed tomography) imaging. Moreover, the study aimed to verify the usefulness and significance of SUV values and radioactivity levels in discriminating tumor malignancy. A retrospective analysis was performed on 40 patients who received 18F-FDG PET/CT for thyroid cancer as a primary tumor. To set the background, we compared changes in values by calculating the dispersion of scattered rays in the neck area and the lung apex, and by comparing the mean and SD (standard deviation) values of the maxSUV and the radioactivity levels. According to the statistical analysis of the changes in 18F-FDG uptake for the diagnosis of thyroid cancer, a high similarity was observed, with the coefficient of determination being R² = 0.939, between the SUVs and the radioactivity levels. Moreover, similar results were observed in the assessment of tumor malignancy using dual-time-point imaging. The quantitative analysis method for assessing tumor malignancy using radioactivity levels was neither specific nor discriminative compared to the semi-quantitative analysis method.
Analysis of an inventory model for both linearly decreasing demand and holding cost
NASA Astrophysics Data System (ADS)
Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.
2016-03-01
This study proposes the analysis of an inventory model with linearly decreasing demand and holding cost for non-instantaneous deteriorating items. The inventory model focuses on commodities having linearly decreasing demand without shortages. The holding cost does not remain uniform with time, due to variation in the time value of money; here we consider that the holding cost decreases with respect to time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated through a numerical example. The study also includes a sensitivity analysis.
Fleming, Denise H; Mathew, Binu S; Prasanna, Samuel; Annapandian, Vellaichamy M; John, George T
2011-04-01
Enteric-coated mycophenolate sodium (EC-MPS) is widely used in renal transplantation. With a delayed absorption profile, it has not been possible to develop limited sampling strategies to estimate the area under the curve (mycophenolic acid [MPA] AUC₀₋₁₂) that have limited time points and are completed in 2 hours. We developed and validated simplified strategies to estimate MPA AUC₀₋₁₂ in an Indian renal transplant population prescribed EC-MPS together with prednisolone and tacrolimus. Intensive pharmacokinetic sampling (17 samples each) was performed in 18 patients to measure MPA AUC₀₋₁₂. The profiles at 1 month were used to develop the simplified strategies and those at 5.5 months were used for validation. We followed two approaches. In one, the AUC was calculated using the trapezoidal rule with fewer time points, followed by an extrapolation. In the second approach, by stepwise multiple regression analysis, models with different time points were identified and linear regression analysis performed. Using the trapezoidal rule, two equations were developed with six time points and sampling to 6 or 8 hours (8hrAUC(₀₋₁₂exp)) after the EC-MPS dose. On validation, the 8hrAUC(₀₋₁₂exp) compared with the total measured AUC₀₋₁₂ had a coefficient of correlation (r²) of 0.872, with a bias and precision (95% confidence interval) of 0.54% (-6.07-7.15) and 9.73% (5.37-14.09), respectively. Second, limited sampling strategies were developed with four, five, six, seven, and eight time points and completion within 2 hours, 4 hours, 6 hours, and 8 hours after the EC-MPS dose. On validation, the six, seven, and eight time point equations, all with sampling to 8 hours, had an acceptable r with the total measured MPA AUC₀₋₁₂ (0.817-0.927). For the six, seven, and eight time point equations, the bias (95% confidence interval) was 3.00% (-4.59 to 10.59), 0.29% (-5.4 to 5.97), and -0.72% (-5.34 to 3.89) and the precision (95% confidence interval) was 10.59% (5.06-16.13), 8.33% (4.55-12.1), and 6.92% (3.94-9.90), respectively. Of the eight simplified approaches, inclusion of seven or eight time points improved the accuracy of the predicted AUC compared with the actual AUC and can be advocated based on the priorities of the user.
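The first (trapezoidal-plus-extrapolation) approach can be sketched as below; the abstract does not give the extrapolation term, so this sketch assumes a standard log-linear tail with an elimination rate constant k_el, and all names are illustrative.

```python
import math

def auc_trapezoid(times_h, conc):
    """Partial AUC over the sampled window by the linear trapezoidal rule."""
    return sum(0.5 * (conc[i] + conc[i + 1]) * (times_h[i + 1] - times_h[i])
               for i in range(len(times_h) - 1))

def auc_0_12_extrapolated(times_h, conc, k_el):
    """Partial AUC to the last sample plus an assumed log-linear tail to 12 h."""
    t_last, c_last = times_h[-1], conc[-1]
    tail = (c_last / k_el) * (1.0 - math.exp(-k_el * (12.0 - t_last)))
    return auc_trapezoid(times_h, conc) + tail
```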
Age-related changes in the function and structure of the peripheral sensory pathway in mice.
Canta, Annalisa; Chiorazzi, Alessia; Carozzi, Valentina Alda; Meregalli, Cristina; Oggioni, Norberto; Bossi, Mario; Rodriguez-Menendez, Virginia; Avezza, Federica; Crippa, Luca; Lombardi, Raffaella; de Vito, Giuseppe; Piazza, Vincenzo; Cavaletti, Guido; Marmiroli, Paola
2016-09-01
This study is aimed at describing the changes occurring in the entire peripheral nervous system sensory pathway along a 2-year observation period in a cohort of C57BL/6 mice. The neurophysiological studies evidenced significant differences at the selected time points corresponding to childhood, young adulthood, adulthood, and aging (i.e., 1, 7, 15, and 25 months of age), with a parabolic course as a function of time. The pathological assessment demonstrated signs of age-related changes from the age of 7 months, with a remarkable increase in both peripheral nerves and dorsal root ganglia at the subsequent time points. These changes were mainly in the myelin sheaths, as also confirmed by Rotating-Polarization Coherent Anti-Stokes Raman Scattering microscopy analysis. Evident changes were also present in the morphometric analysis performed on the peripheral nerves, dorsal root ganglia neurons, and skin biopsies. This extensive, multimodal characterization of the peripheral nervous system changes in aging provides the background for future mechanistic studies, allowing the selection of the most appropriate time points and readouts according to the investigation aims. Copyright © 2016 Elsevier Inc. All rights reserved.
Sparse electrocardiogram signals recovery based on solving a row echelon-like form of system.
Cai, Pingmei; Wang, Guinan; Yu, Shiwei; Zhang, Hongjuan; Ding, Shuxue; Wu, Zikai
2016-02-01
The study of biology and medicine in a noise environment is an evolving direction in biological data analysis. Among these studies, analysis of electrocardiogram (ECG) signals in a noise environment is a challenging direction in personalized medicine. Due to its periodic characteristic, the ECG signal can be roughly regarded as a sparse biomedical signal. This study proposes a two-stage recovery algorithm for sparse biomedical signals in the time domain. In the first stage, the concentration subspaces are found in advance. Then, by exploiting these subspaces, the mixing matrix is estimated accurately. In the second stage, based on the number of active sources at each time point, the time points are divided into different layers. Next, by constructing some transformation matrices, these time points form a row echelon-like system. After that, the sources at each layer can be solved explicitly by corresponding matrix operations. It is worth noting that all these operations are conducted under a weak sparsity condition, namely that the number of active sources is less than the number of observations. Experimental results show that the proposed method has a better performance on the sparse ECG signal recovery problem.
Mudaliar, Manikhandan; Tassi, Riccardo; Thomas, Funmilola C; McNeilly, Tom N; Weidt, Stefan K; McLaughlin, Mark; Wilson, David; Burchmore, Richard; Herzyk, Pawel; Eckersall, P David; Zadoks, Ruth N
2016-08-16
Mastitis, inflammation of the mammary gland, is the most common and costly disease of dairy cattle in the western world. It is primarily caused by bacteria, with Streptococcus uberis as one of the most prevalent causative agents. To characterize the proteome during Streptococcus uberis mastitis, an experimentally induced model of intramammary infection was used. Milk whey samples obtained from 6 cows at 6 time points were processed using label-free relative quantitative proteomics. This proteomic analysis complements clinical, bacteriological and immunological studies as well as peptidomic and metabolomic analysis of the same challenge model. A total of 2552 non-redundant bovine peptides were identified, and from these, 570 bovine proteins were quantified. Hierarchical cluster analysis and principal component analysis showed clear clustering of results by stage of infection, with similarities between the pre-infection and resolution stages (0 and 312 h post challenge), the early infection stages (36 and 42 h post challenge) and the late infection stages (57 and 81 h post challenge). Ingenuity pathway analysis identified upregulation of acute phase protein pathways over the course of infection, with dominance of different acute phase proteins at different time points based on differential expression analysis. Antimicrobial peptides, notably cathelicidins and peptidoglycan recognition protein, were upregulated at all time points post challenge and peaked at 57 h, which coincided with a 10,000-fold decrease in average bacterial counts. The integration of clinical, bacteriological, immunological, quantitative proteomic and other-omic data provides a more detailed systems-level view of the host response to mastitis than has been achieved previously.
NASA Astrophysics Data System (ADS)
Suhaila, Jamaludin; Yusop, Zulkifli
2017-06-01
Most trend analyses that have been conducted have not considered the existence of a change point in the time series. If a change point exists, the trend analysis will not be able to detect an obvious increasing or decreasing trend over certain parts of the series. Furthermore, the lack of discussion of the possible factors that influenced either a decreasing or an increasing trend in the series needs to be addressed in any trend analysis. Hence, this study investigates the trends and change point detection of mean, maximum and minimum temperature series, both annually and seasonally, in Peninsular Malaysia, and determines the possible factors that could contribute to the significant trends. In this study, the Pettitt and sequential Mann-Kendall (SQ-MK) tests were used to examine the occurrence of any abrupt climate changes in the independent series. The analyses of the abrupt changes in temperature series suggested that most of the change points in Peninsular Malaysia were detected during the years 1996, 1997 and 1998. These detection points captured by the Pettitt and SQ-MK tests are possibly related to climatic factors, such as El Niño and La Niña events. The findings also showed that the majority of the significant change points that exist in the series are related to the significant trends of the stations. Significant increasing trends of annual and seasonal mean, maximum and minimum temperatures in Peninsular Malaysia were found, with a range of 2-5 °C/100 years during the last 32 years. It was observed that the magnitudes of the increasing trends in minimum temperatures were larger than those of the maximum temperatures for most of the studied stations, particularly at the urban stations. These increases are suspected to be linked with the urban heat island effect in addition to the El Niño event.
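The Pettitt test used here is simple to implement from its rank formulation; the sketch below assumes no ties in the series (ranks are handled approximately otherwise) and returns the most probable change point with the usual approximate p-value.

```python
import numpy as np

def pettitt_test(x):
    """Pettitt's nonparametric change-point test.

    U_t = 2 * sum(ranks up to t) - t * (n + 1); the change point is where
    |U_t| peaks, with p approximated by 2 * exp(-6 K^2 / (n^3 + n^2)).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1.0
    U = 2.0 * np.cumsum(ranks) - np.arange(1, n + 1) * (n + 1)
    K = np.max(np.abs(U[:-1]))
    tau = int(np.argmax(np.abs(U[:-1])))   # last index of the first segment
    p = min(1.0, 2.0 * np.exp(-6.0 * K**2 / (n**3 + n**2)))
    return tau, p
```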
Monitoring urban subsidence based on SAR interferometric point target analysis
Zhang, Y.; Zhang, Jiahua; Gong, W.; Lu, Z.
2009-01-01
Interferometric point target analysis (IPTA) is one of the latest developments in radar interferometric processing. It is achieved by analyzing the interferometric phases of individual point targets, which are discrete and present temporally stable backscattering characteristics, in long temporal series of interferometric SAR images. This paper analyzes the interferometric phase model of point targets, and then addresses two key issues within the IPTA process. Firstly, a spatial searching method is proposed to unwrap the interferometric phase difference between two neighboring point targets. The height residual error and linear deformation rate of each point target can then be calculated, once a global reference point with known height correction and deformation history is chosen. Secondly, a spatial-temporal filtering scheme is proposed to further separate the atmosphere phase and nonlinear deformation phase from the residual interferometric phase. Finally, an experiment with the developed IPTA methodology was conducted over the Suzhou urban area. In total, 38 ERS-1/2 SAR scenes were analyzed, and deformation information over 3,546 point targets in the time span of 1992-2002 was generated. The IPTA-derived deformation shows very good agreement with the published result, which demonstrates that the IPTA technique can be developed into an operational tool to map ground subsidence over urban areas.
USDA-ARS?s Scientific Manuscript database
RNA expression analysis was performed on the corpus luteum tissue at five time points after prostaglandin F2 alpha treatment of midcycle cows using an Affymetrix Bovine Gene v1 Array. The normalized linear microarray data was uploaded to the NCBI GEO repository (GSE94069). Subsequent statistical ana...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-26
... 500 Index option series in the pilot: (1) A time series analysis of open interest; and (2) an analysis... issue's total market share value, which is the share price times the number of shares outstanding. These... other series. Strike price intervals would be set no less than 5 points apart. Consistent with existing...
Thermal analysis and microstructural characterization of Mg-Al-Zn system alloys
NASA Astrophysics Data System (ADS)
Król, M.; Tański, T.; Sitek, W.
2015-11-01
The influence of Zn amount and solidification rate on the characteristic temperature of the evolution of magnesium dendrites during solidification at different cooling rates (0.6-2.5 °C) was examined by thermal derivative analysis (TDA). The dendrite coherency point (DCP) is presented with a novel approach based on the second derivative of the cooling curve. Solidification behavior was examined via a one-thermocouple thermal analysis method. Microstructural assessments were made by optical light microscopy, scanning electron microscopy and energy dispersive X-ray spectroscopy. These studies showed that utilization of the d²T/dt² versus time curve methodology provides for analysis of the dendrite coherency point.
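A minimal numerical sketch of reading the DCP off the second derivative of a cooling curve; the extremum criterion and the absence of smoothing are simplifying assumptions for noisy thermocouple data, and the function name is our own.

```python
import numpy as np

def dendrite_coherency_point(t, T):
    """Locate a candidate DCP at the extremum of d2T/dt2 on the cooling curve."""
    dT = np.gradient(T, t)      # first derivative, dT/dt
    d2T = np.gradient(dT, t)    # second derivative, d2T/dt2
    i = int(np.argmax(np.abs(d2T)))
    return t[i], T[i], d2T[i]
```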
Alkhaldy, Ibrahim
2017-04-01
The aim of this study was to examine the role of environmental factors in the temporal distribution of dengue fever in Jeddah, Saudi Arabia. The relationship between dengue fever cases and climatic factors such as relative humidity and temperature was investigated during 2006-2009 to determine whether there is any relationship between dengue fever cases and climatic parameters in Jeddah City, Saudi Arabia. A generalised linear model (GLM) with a break-point was used to determine how different levels of temperature and relative humidity affected the distribution of the number of cases of dengue fever. Break-point analysis was performed to model the effect before and after a break-point (change point) in the explanatory parameters under various scenarios. The Akaike information criterion (AIC) and cross validation (CV) were used to assess the performance of the models. The results showed that maximum temperature and mean relative humidity are most probably the best predictors of the number of dengue fever cases in Jeddah. In this study three scenarios were modelled: no time lag, 1-week lag and 2-week lag. Among these scenarios, the 1-week lag model using mean relative humidity as an explanatory variable showed the best performance. This study showed a clear relationship between the meteorological variables and the number of dengue fever cases in Jeddah. The results also demonstrated that meteorological variables can be successfully used to estimate the number of dengue fever cases for a given period of time. Break-point analysis provides further insight into the association between meteorological parameters and dengue fever cases by dividing the meteorological parameters at certain break-points. Copyright © 2016 Elsevier B.V. All rights reserved.
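A break-point GLM of the kind described can be sketched with a broken-stick design matrix; the Poisson family for case counts and all variable names are assumptions, since the abstract does not state the error family.

```python
import numpy as np
import statsmodels.api as sm

def breakpoint_glm(x, y, bp):
    """GLM for counts y with separate slopes in x below and above a break-point bp."""
    X = sm.add_constant(np.column_stack([
        np.minimum(x, bp),          # slope below the break-point
        np.maximum(x - bp, 0.0),    # additional slope above it
    ]))
    return sm.GLM(y, X, family=sm.families.Poisson()).fit()

def best_breakpoint(x, y, candidates):
    """Pick the break-point minimizing AIC, mirroring the model selection above."""
    fits = [(bp, breakpoint_glm(x, y, bp)) for bp in candidates]
    return min(fits, key=lambda f: f[1].aic)
```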
Semantic focusing allows fully automated single-layer slide scanning of cervical cytology slides.
Lahrmann, Bernd; Valous, Nektarios A; Eisenmann, Urs; Wentzensen, Nicolas; Grabe, Niels
2013-01-01
Liquid-based cytology (LBC) in conjunction with Whole-Slide Imaging (WSI) enables the objective, sensitive, and quantitative evaluation of biomarkers in cytology. However, the complex three-dimensional distribution of cells on LBC slides requires manual focusing, long scanning times, and multi-layer scanning. Here, we present a solution that overcomes these limitations in two steps: first, we make sure that focus points are only set on cells; secondly, we check the total slide focus quality. From a first analysis we detected that superficial dust can be separated from the cell layer (the thin layer of cells on the glass slide) itself. We then analyzed 2,295 individual focus points from 51 LBC slides stained for p16 and Ki67. Using the number of edges in a focus point image, specific color values, and size-inclusion filters, focus points detecting cells could be distinguished from focus points on artifacts (accuracy 98.6%). Sharpness, as the total focus quality of a virtual LBC slide, is computed from 5 sharpness features. We trained a multi-parameter SVM classifier on 1,600 images. On an independent validation set of 3,232 cell images we achieved an accuracy of 94.8% for classifying images as focused. Our results show that single-layer scanning of LBC slides is possible and how it can be achieved. We assembled focus point analysis and sharpness classification into a fully automatic, iterative workflow, free of user intervention, which performs repetitive slide scanning as necessary. On 400 LBC slides we achieved a scanning time of 13.9±10.1 min with 29.1±15.5 focus points. In summary, the integration of semantic focus information into whole-slide imaging allows automatic high-quality imaging of LBC slides and subsequent biomarker analysis.
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
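The dynamic-programming scheme outlined above can be condensed into a short sketch; the union measure function and symmetric-difference cost below are simple stand-ins for the paper's measure functions, and the routine returns only the optimal cost (backpointers would recover the segmentation itself).

```python
from functools import lru_cache

def optimal_segmentation_cost(series, k, measure, seg_diff):
    """Minimum total segment difference for a k-segmentation of an
    item-set time series (a list of sets, one per time point)."""
    n = len(series)

    @lru_cache(maxsize=None)
    def cost(i, j):  # cost of treating series[i:j] as one segment
        return seg_diff(series[i:j], measure(series[i:j]))

    @lru_cache(maxsize=None)
    def best(j, m):  # best cost of splitting series[:j] into m segments
        if m == 1:
            return cost(0, j)
        return min(best(i, m - 1) + cost(i, j) for i in range(m - 1, j))

    return best(n, k)

# Illustrative choices: a segment's item set is the union over its time
# points; the segment difference is the total symmetric-difference size.
measure = lambda seg: set().union(*seg)
seg_diff = lambda seg, rep: sum(len(rep ^ s) for s in seg)
```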
Topological photonic crystal with equifrequency Weyl points
NASA Astrophysics Data System (ADS)
Wang, Luyang; Jian, Shao-Kai; Yao, Hong
2016-06-01
Weyl points in three-dimensional photonic crystals behave as monopoles of Berry flux in momentum space. Here, based on general symmetry analysis, we show that a minimal number of four symmetry-related (consequently equifrequency) Weyl points can be realized in time-reversal invariant photonic crystals. We further propose an experimentally feasible way to modify double-gyroid photonic crystals to realize four equifrequency Weyl points, which is explicitly confirmed by our first-principle photonic band-structure calculations. Remarkably, photonic crystals with equifrequency Weyl points are qualitatively advantageous in applications including angular selectivity, frequency selectivity, invisibility cloaking, and three-dimensional imaging.
Occupational Gender Desegregation in the 1980s.
ERIC Educational Resources Information Center
Cotter, David A.; And Others
1995-01-01
Analysis of 1980 and 1990 Public Use Microdata Samples showed that, among full-time workers, occupational sex segregation declined 6.5 percentage points, less than the 8.5 point decline in the 1970s. Three-quarters of the desegregation was due to changed gender composition of occupations, one-quarter due to faster growth in more integrated…
Lab-on-a-chip nucleic-acid analysis towards point-of-care applications
NASA Astrophysics Data System (ADS)
Kopparthy, Varun Lingaiah
Recent infectious disease outbreaks, such as Ebola in 2013, highlight the need for fast and accurate diagnostic tools to combat the global spread of disease. Detection and identification of disease-causing viruses and bacteria at the genetic level is required for accurate diagnosis. Nucleic acid analysis systems have shown promise in identifying diseases such as HIV, anthrax, and Ebola in the past. Conventional nucleic acid analysis systems are still time consuming and are not suitable for point-of-care applications. Miniaturized nucleic acid systems have shown great promise for rapid analysis, but they have not been commercialized due to several factors such as footprint, complexity, portability, and power consumption. This dissertation presents the development of technologies and methods for lab-on-a-chip nucleic acid analysis towards point-of-care applications. An oscillatory-flow PCR methodology in a thermal gradient is developed which provides real-time analysis of nucleic-acid samples. Oscillating-flow PCR was performed in the microfluidic device under a thermal gradient in 40 minutes. Reverse transcription PCR (RT-PCR) was achieved in the system without an additional heating element for the reverse transcription incubation step. A novel method is developed for the simultaneous patterning and bonding of all-glass microfluidic devices in a microwave oven. Glass microfluidic devices were fabricated in less than 4 minutes. Towards an integrated system for the detection of amplified products, a thermal sensing method is studied for the optimization of the sensor output. The calorimetric sensing method is characterized to identify design considerations and optimal parameters such as placement of the sensor, steady-state response, and flow velocity for improved performance. An understanding of these developed technologies and methods will facilitate the development of lab-on-a-chip systems for point-of-care analysis.
Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-07-28
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
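The core of such a procedure, mapping a point cloud onto a regular grid whose occupied cells become solid voxel elements, can be sketched in a few lines; the uniform voxel size and array layout are assumptions of this sketch, not the authors' exact algorithm.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Return the integer indices of voxels occupied by at least one point;
    each occupied voxel is a candidate solid finite element."""
    points = np.asarray(points, dtype=float)      # shape (N, 3)
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    return np.unique(idx, axis=0), origin
```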
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation
NASA Astrophysics Data System (ADS)
Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.
2017-12-01
When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R²) between SSR time series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R² ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R²) on the order of 400 km (with a spread of 100-600 km) was found. R² from correlations between point observations and colocated grid box area means determined from satellite data was found to be 0.80 for a 1° grid. To quantify the error which arises when using a point observation as a surrogate for the area mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) for a 1° grid of 8 (3) W/m² for monthly (annual) time series. The SSE based on a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible for those time scales.
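The decorrelation length quoted here is the e-folding distance of R²; under the assumption of an exponential decay model it can be estimated as in the sketch below (SciPy assumed, data arrays hypothetical).

```python
import numpy as np
from scipy.optimize import curve_fit

def decorrelation_length(dist_km, r2):
    """Fit R^2(d) = a * exp(-d / L); L is the e-folding decorrelation length."""
    model = lambda d, a, L: a * np.exp(-d / L)
    (a, L), _ = curve_fit(model, np.asarray(dist_km, float),
                          np.asarray(r2, float), p0=(1.0, 400.0))
    return L
```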
The relevance of time series in molecular ecology and conservation biology.
Habel, Jan C; Husemann, Martin; Finger, Aline; Danley, Patrick D; Zachos, Frank E
2014-05-01
The genetic structure of a species is shaped by the interaction of contemporary and historical factors. Analyses of individuals from the same population sampled at different points in time can help to disentangle the effects of current and historical forces and facilitate the understanding of the forces driving the differentiation of populations. The use of such time series allows for the exploration of changes at the population and intraspecific levels over time. Material from museum collections plays a key role in understanding and evaluating observed population structures, especially if large numbers of individuals have been sampled from the same locations at multiple time points. In these cases, changes in population structure can be assessed empirically. The development of new molecular markers relying on short DNA fragments (such as microsatellites or single nucleotide polymorphisms) allows for the analysis of long-preserved and partially degraded samples. Recently developed techniques to construct genome libraries with a reduced complexity and next generation sequencing and their associated analysis pipelines have the potential to facilitate marker development and genotyping in non-model species. In this review, we discuss the problems with sampling and available marker systems for historical specimens and demonstrate that temporal comparative studies are crucial for the estimation of important population genetic parameters and to measure empirically the effects of recent habitat alteration. While many of these analyses can be performed with samples taken at a single point in time, the measurements are more robust if multiple points in time are studied. Furthermore, examining the effects of habitat alteration, population declines, and population bottlenecks is only possible if samples before and after the respective events are included. © 2013 The Authors. Biological Reviews © 2013 Cambridge Philosophical Society.
Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Pega, Frank; Petrie, Dennis
2017-06-23
Previous studies suggest that poor psychosocial job quality is a risk factor for mental health problems, but they use conventional regression analytic methods that cannot rule out reverse causation, unmeasured time-invariant confounding and reporting bias. This study combines two quasi-experimental approaches to improve causal inference by better accounting for these biases: (i) linear fixed effects regression analysis and (ii) linear instrumental variable analysis. We extract 13 annual waves of national cohort data including 13 260 working-age (18-64 years) employees. The exposure variable is the self-reported level of psychosocial job quality. The instruments used are two common workplace entitlements. The outcome variable is the Mental Health Inventory (MHI-5). We adjust for measured time-varying confounders. In the fixed effects regression analysis adjusted for time-varying confounders, a 1-point increase in psychosocial job quality is associated with a 1.28-point improvement in mental health on the MHI-5 scale (95% CI: 1.17, 1.40; P < 0.001). When the fixed effects analysis was combined with the instrumental variable analysis, a 1-point increase in psychosocial job quality is related to a 1.62-point improvement on the MHI-5 scale (95% CI: -0.24, 3.48; P = 0.088). Our quasi-experimental results provide evidence to confirm job stressors as risk factors for mental ill health using methods that improve causal inference. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
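As a sketch of the two quasi-experimental designs, the code below assumes the linearmodels package and a long-format panel with hypothetical column names; it illustrates the estimator pair, not the authors' exact specification.

```python
from linearmodels.panel import PanelOLS
from linearmodels.iv import IV2SLS

def fixed_effects(df, confounders):
    """Within-person (fixed effects) regression of MHI-5 on job quality."""
    d = df.set_index(["person_id", "wave"])
    return PanelOLS(d["mhi5"], d[["job_quality"] + confounders],
                    entity_effects=True).fit()

def fe_iv(df, confounders):
    """IV (2SLS) on entity-demeaned data: two workplace entitlements
    instrument psychosocial job quality."""
    d = df.set_index(["person_id", "wave"])
    dm = d.groupby(level="person_id").transform(lambda s: s - s.mean())
    return IV2SLS(dm["mhi5"], dm[confounders], dm["job_quality"],
                  dm[["entitle1", "entitle2"]]).fit()
```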
NASA Astrophysics Data System (ADS)
Wright, D. J.; Raad, M.; Hoel, E.; Park, M.; Mollenkopf, A.; Trujillo, R.
2016-12-01
Introduced is a new approach for processing spatiotemporal big data by leveraging distributed analytics and storage. A suite of temporally-aware analysis tools summarizes data nearby or within variable windows, aggregates points (e.g., for various sensor observations or vessel positions), reconstructs time-enabled points into tracks (e.g., for mapping and visualizing storm tracks), joins features (e.g., to find associations between features based on attributes, spatial relationships, temporal relationships or all three simultaneously), calculates point densities, finds hot spots (e.g., in species distributions), and creates space-time slices and cubes (e.g., in microweather applications with temperature, humidity, and pressure, or within human mobility studies). These "feature geo analytics" tools run in both batch and streaming spatial analysis mode as distributed computations across a cluster of servers on typical "big" data sets, where static data exist in traditional geospatial formats (e.g., shapefile) locally on a disk or file share, attached as static spatiotemporal big data stores, or streamed in near-real-time. In other words, the approach registers large datasets or data stores with ArcGIS Server, then distributes analysis across a cluster of machines for parallel processing. Several brief use cases will be highlighted based on a 16-node server cluster at 14 Gb RAM per node, allowing, for example, the buffering of over 8 million points or thousands of polygons in 1 minute. The approach is "hybrid" in that ArcGIS Server integrates open-source big data frameworks such as Apache Hadoop and Apache Spark on the cluster in order to run the analytics. In addition, the user may devise and connect custom open-source interfaces and tools developed in Python or Python Notebooks; the common denominator being the familiar REST API.
Multifractal analysis of the Korean agricultural market
NASA Astrophysics Data System (ADS)
Kim, Hongseok; Oh, Gabjin; Kim, Seunghwan
2011-11-01
We have studied the long-term memory effects of the Korean agricultural market using the detrended fluctuation analysis (DFA) method. In general, the return time series of various financial data, including stock indices, foreign exchange rates, and commodity prices, are uncorrelated in time, while the volatility time series are strongly correlated. However, we found that the return time series of Korean agricultural commodity prices are anti-correlated in time, while the volatility time series are correlated. The n-point correlations of time series were also examined, and it was found that a multifractal structure exists in Korean agricultural market prices.
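The DFA method referenced here estimates the scaling exponent from the slope of log F(s) against log s; a minimal first-order DFA sketch follows, with an exponent below 0.5 indicating the anti-correlation reported for the return series.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis; returns the scaling exponent."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))   # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        F.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

# Example: alpha = dfa(returns, scales=[4, 8, 16, 32, 64])
```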
Motivation Change in Therapeutic Community Residential Treatment
ERIC Educational Resources Information Center
Morgen, Keith; Kressel, David
2010-01-01
Latent growth curve analysis was used to assess motivation change across 3 time points for 120 therapeutic community residents. Models included the time-invariant predictor of readiness for treatment, which significantly predicted initial treatment motivation but not the rate of motivation change over time. (Contains 1 figure and 2 tables.)
Novel Analytic Methods Needed for Real-Time Continuous Core Body Temperature Data
Hertzberg, Vicki; Mac, Valerie; Elon, Lisa; Mutic, Nathan; Mutic, Abby; Peterman, Katherine; Tovar-Aguilar, J. Antonio; Economos, Jeannie; Flocks, Joan; McCauley, Linda
2017-01-01
Affordable measurement of core body temperature, Tc, in a continuous, real-time fashion is now possible. With this advance comes a new data analysis paradigm for occupational epidemiology. We characterize issues arising after obtaining Tc data over 188 workdays for 83 participating farmworkers, a population vulnerable to effects of rising temperatures due to climate change. We describe a novel approach to these data using smoothing and functional data analysis. This approach highlights different data aspects compared to describing Tc at a single time point or summaries of the time course into an indicator function (e.g., did Tc ever exceed 38°C, the threshold limit value for occupational heat exposure). Participants working in ferneries had significantly higher Tc at some point during the workday compared to those working in nurseries, despite a shorter workday for fernery participants. Our results typify the challenges and opportunities in analyzing big data streams from real-time physiologic monitoring. PMID:27756853
Novel Analytic Methods Needed for Real-Time Continuous Core Body Temperature Data.
Hertzberg, Vicki; Mac, Valerie; Elon, Lisa; Mutic, Nathan; Mutic, Abby; Peterman, Katherine; Tovar-Aguilar, J Antonio; Economos, Eugenia; Flocks, Joan; McCauley, Linda
2016-10-18
Affordable measurement of core body temperature (Tc) in a continuous, real-time fashion is now possible. With this advance comes a new data analysis paradigm for occupational epidemiology. We characterize issues arising after obtaining Tc data over 188 workdays for 83 participating farmworkers, a population vulnerable to effects of rising temperatures due to climate change. We describe a novel approach to these data using smoothing and functional data analysis. This approach highlights different data aspects compared with describing Tc at a single time point or summaries of the time course into an indicator function (e.g., did Tc ever exceed 38 °C, the threshold limit value for occupational heat exposure). Participants working in ferneries had significantly higher Tc at some point during the workday compared with those working in nurseries, despite a shorter workday for fernery participants. Our results typify the challenges and opportunities in analyzing big data streams from real-time physiologic monitoring. © The Author(s) 2016.
Fast rockfall hazard assessment along a road section using the new LYNX Mobile Mapper Lidar
NASA Astrophysics Data System (ADS)
Dario, Carrea; Celine, Longchamp; Michel, Jaboyedoff; Marc, Choffet; Marc-Henri, Derron; Clement, Michoud; Andrea, Pedrazzini; Dario, Conforti; Michael, Leslar; William, Tompkinson
2010-05-01
Terrestrial laser scanning (TLS) is an active remote sensing technique providing high-resolution point clouds of the topography. The high-resolution digital elevation models (HRDEM) derived from these point clouds are an important tool for the stability analysis of slopes. The LYNX Mobile Mapper is a new-generation TLS developed by Optech. Its particularity is that it is mounted on a vehicle, providing a 360° high-density point cloud at a 200-kHz measurement rate in a very short acquisition time. It is composed of two sensors, improving the resolution and reducing laser shadowing. The spatial resolution is better than 10 cm at 10 m range at a velocity of 50 km/h, and the reflectivity of the signal is around 20% at a distance of 200 m. The Lidar is also equipped with a DGPS and an inertial measurement unit (IMU), which give real-time position and directly georeference the point cloud. Thanks to its ability to provide a continuous data set from an extended area along a road, this TLS system is useful for rockfall hazard assessment. In addition, this new scanner decreases considerably the time spent in the field, and the postprocessing is reduced thanks to the directly georeferenced data. Nevertheless, its application is limited to areas close to the road. The LYNX has been tested near Pontarlier (France) along road sections affected by rockfall. Regarding the tectonic context, the studied area is located in the Folded Jura, mainly composed of limestone. The result is a very detailed point cloud with a point spacing of 4 cm. The LYNX provides detailed topography on which a structural analysis has been carried out using COLTOP-3D, allowing a full structural description along the road to be obtained. In addition, kinematic tests coupled with probabilistic analysis give a susceptibility map of the road cuts or natural cliffs above the road. Comparisons with field surveys confirm the Lidar approach.
Memory persistency and nonlinearity in daily mean dew point across India
NASA Astrophysics Data System (ADS)
Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik; Bhattacharjee, Anup Kumar
2016-04-01
An enterprising endeavour has been undertaken in this work to characterize and estimate the persistence in memory of the daily mean dew point time series obtained from seven different weather stations, viz. Kolkata, Chennai (Madras), New Delhi, Mumbai (Bombay), Bhopal, Agartala and Ahmedabad, representing different geographical zones in India. Hurst exponent values reveal an anti-persistent behaviour of these dew point series. To affirm the Hurst exponent values, five different scaling methods have been used and the corresponding results compared, to synthesize a finer and more reliable conclusion. The present analysis also indicates that the variation in daily mean dew point is governed by a non-stationary process with stationary increments. The delay vector variance (DVV) method has been exploited to investigate nonlinearity, and the present calculation confirms the presence of a deterministic nonlinear profile in the daily mean dew point time series of the seven stations.
Chen, Xiaofeng; Song, Qiankun; Li, Zhongshan; Zhao, Zhenjiang; Liu, Yurong
2018-07-01
This paper addresses the problem of stability for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, the discrete-time analogs are obtained, which preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternion, homeomorphic mapping theorem, as well as Lyapunov theorem, some sufficient conditions on the existence, uniqueness, and global asymptotical stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively. Furthermore, a uniform sufficient condition on the existence, uniqueness, and global asymptotical stability of the equilibrium point is obtained for both continuous-time QVNNs and their discrete-time version. Finally, two numerical examples are provided to substantiate the effectiveness of the proposed results.
NASA Astrophysics Data System (ADS)
Vu, Tinh Thi; Kiesel, Jens; Guse, Bjoern; Fohrer, Nicola
2017-04-01
The damming of rivers causes one of the most considerable impacts of our society on the riverine environment. More than 50% of the world's streams and rivers are currently impounded by dams before reaching the oceans. The construction of dams is of high importance in developing and emerging countries, i.e. for power generation and water storage. In the Vietnamese Vu Gia - Thu Bon Catchment (10,350 km2), about 23 dams were built during the last decades, storing approximately 2.156 billion m3 of water. The water impoundment in 10 dams in upstream regions amounts to 17% of the annual discharge volume. It is expected that these dams have altered the natural flow regime. However, up to now it is unclear how the flow regime was altered. For this, it needs to be investigated at what point in time these changes became significant and detectable. Many approaches exist to detect changes in stationarity or consistency of hydrological records using statistical analysis of time series for the pre- and post-dam periods. The objective of this study is to reliably detect and assess hydrologic shifts occurring in the discharge regime of an anthropogenically influenced river basin, mainly affected by the construction of dams. To achieve this, we applied nine available change-point tests to detect changes in mean, variance and median in the daily and annual discharge records at two main gauges of the basin. The tests yield conflicting results: the majority of tests found abrupt changes that coincide with the damming period, while others did not. To interpret how significant the changes in discharge regime are, and to which properties of the time series each test responded, we calculated Indicators of Hydrologic Alteration (IHAs) for the time periods before and after the detected change points. From the results, we can deduce that the change-point tests are influenced to different degrees by different indicator groups (magnitude, duration, frequency, etc.) and that, within the indicator groups, some indicators are more sensitive than others. For instance, extreme low flow, especially the 7- and 30-day minima and the mean minimum low flow, as well as the variability of monthly flow, are highly sensitive to most detected change points. Our study clearly shows that the detected change points depend on which test is chosen. For an objective assessment of change points, it is therefore necessary to explain the change points by calculating differences in IHAs. This analysis can be used to assess which change-point method reacts to which type of hydrological change and, more importantly, it can be used to rank the change points according to their overall impact on the discharge regime. This leads to an improved evaluation of hydrologic change points caused by anthropogenic impacts.
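As an illustration of this family of tests, here is a sketch of the nonparametric Pettitt test for a single abrupt shift in the median of a discharge series. The abstract does not name the nine tests used, so this is a representative example on synthetic data, not the study's code.

```python
# Pettitt change-point test: U_t compares observations before and after each
# candidate point t; the maximum |U_t| locates the most likely shift.
import numpy as np

def pettitt_test(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # U_t counts sign differences between observations before and after t
    u = np.array([np.sign(x[t+1:, None] - x[:t+1]).sum() for t in range(n - 1)])
    k = np.abs(u).max()
    t_change = int(np.argmax(np.abs(u)))            # index of the most likely change point
    p = 2.0 * np.exp(-6.0 * k**2 / (n**3 + n**2))   # approximate significance
    return t_change, min(p, 1.0)

rng = np.random.default_rng(1)
q = np.r_[rng.normal(100, 10, 30), rng.normal(70, 10, 30)]  # discharge with a shift
print(pettitt_test(q))   # detects the change near index 29
```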
Space-time measurements of oceanic sea states
NASA Astrophysics Data System (ADS)
Fedele, Francesco; Benetazzo, Alvise; Gallego, Guillermo; Shih, Ping-Chang; Yezzi, Anthony; Barbariol, Francesco; Ardhuin, Fabrice
2013-10-01
Stereo video techniques are effective for estimating the space-time wave dynamics over an area of the ocean. Indeed, a stereo camera view allows retrieval of both spatial and temporal data whose statistical content is richer than that of time series data retrieved from point wave probes. We present an application of the Wave Acquisition Stereo System (WASS) for the analysis of offshore video measurements of gravity waves in the Northern Adriatic Sea and near the southern seashore of the Crimean peninsula, in the Black Sea. We use classical epipolar techniques to reconstruct the sea surface from the stereo pairs sequentially in time, viz. as a sequence of spatial snapshots. We also present a variational approach that exploits the entire image data set to provide a global space-time imaging of the sea surface, viz. simultaneous reconstruction of several spatial snapshots of the surface, in order to guarantee continuity of the sea surface both in space and time. Analysis of the WASS measurements shows that the sea surface can be accurately estimated in space and time together, yielding associated directional spectra and wave statistics that agree well with probabilistic models. In particular, WASS stereo imaging is able to capture typical features of the wave surface, especially the crest-to-trough asymmetry due to second order nonlinearities, and the observed shapes of large waves are fairly well described by theoretical models based on the theory of quasi-determinism (Boccotti, 2000). Further, we investigate space-time extremes of the observed stationary sea states, viz. the largest surface wave heights expected over a given area during the sea state duration. The WASS analysis provides the first experimental proof that a space-time extreme is generally larger than that observed in time via point measurements, in agreement with the predictions based on stochastic theories for global maxima of Gaussian fields.
Integration of ERS and ASAR Time Series for Differential Interferometric SAR Analysis
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmüller, U.; Strozzi, T.; Wiesmann, A.
2005-12-01
Time series SAR interferometric analysis requires SAR data with good temporal sampling covering the time period of interest. The ERS satellites operated by ESA have acquired a large global archive of C-Band SAR data since 1991. The ASAR C-Band instrument aboard the ENVISAT platform launched in 2002 operates in the same orbit as ERS-1 and ERS-2 and has largely replaced the remaining operational ERS-2 satellite. However, interferometry between data acquired by ERS and ASAR is complicated by a 31 MHz offset in the radar center frequency between the instruments, leading to decorrelation over distributed targets. Only in rare instances, when the baseline exceeds 1 km, can the spectral shift compensate for the difference in the frequencies of the SAR instruments to produce visible fringes. Conversely, point targets do not decorrelate due to the frequency offset, making it possible to incorporate the ERS-ASAR phase information and obtain improved temporal coverage. We present an algorithm for interferometric point target analysis that integrates ERS-ERS, ASAR-ASAR and ERS-ASAR data. Initial analysis using the ERS-ERS data is used to identify the phase-stable point-like scatterers within the scene. Height corrections relative to the initial DEM are derived by regression of the residual interferometric phases with respect to perpendicular baseline for a set of ERS-ERS interferograms. The ASAR images are coregistered with the ERS scenes and the point phase values are extracted. The different system pixel spacing values of ERS and ASAR require additional refinement in the offset estimation and resampling procedure. Calculation of the ERS-ASAR simulated phase used to derive the differential interferometric phase must take into account the slightly different carrier frequencies. Differential ERS-ASAR point phases contain an additional phase component related to the scatterer location within the resolution element. This additional phase varies over several cycles, making the differential interferogram appear as uniform phase noise. We show how this point phase difference can be determined and used to correct the ERS-ASAR interferograms. Further processing proceeds as with standard ERS-ERS interferogram stacks, utilizing the unwrapped point phases to obtain estimates of the deformation history and the path delay due to variations in tropospheric water vapor. We show and discuss examples demonstrating the success of this approach.
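As a rough numerical illustration (not the authors' processing chain), the sketch below shows why the 31 MHz carrier offset turns the differential point phase into apparent noise: the extra two-way phase 4*pi*df*dr/c cycles once for every c/(2*df), about 4.8 m, of scatterer position dr within the resolution cell. The 9 m cell size used below is an assumption for illustration.

```python
# Back-of-the-envelope sketch of the cross-sensor point phase term.
import numpy as np

c = 299_792_458.0       # speed of light, m/s
df = 31e6               # ERS/ASAR carrier frequency offset, Hz (from the abstract)

cycle_length = c / (2 * df)
print(f"one full 2*pi cycle per {cycle_length:.2f} m of scatterer offset")

# Extra ERS-ASAR phase for scatterers spread across an assumed ~9 m range cell
dr = np.linspace(0, 9.0, 10)                      # sub-cell range positions, m
phase = np.mod(4 * np.pi * df * dr / c, 2 * np.pi)
print(np.round(phase, 2))   # spans several cycles -> looks like uniform phase noise
```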
Position and volume estimation of atmospheric nuclear detonations from video reconstruction
NASA Astrophysics Data System (ADS)
Schmitt, Daniel T.
Recent work in digitizing films of foundational atmospheric nuclear detonations from the 1950s provides an opportunity to perform deeper analysis on these historical tests. This work leverages multi-view geometry and computer vision techniques to provide an automated means of performing three-dimensional analysis of the blasts at several points in time. Accomplishing this requires careful alignment of the films in time, detection of features in the images, matching of features, and multi-view reconstruction. Sub-explosion features can be detected with a 67% hit rate and a 22% false alarm rate. Hotspot features can be detected with a 71.95% hit rate, 86.03% precision and a 0.015% false positive rate. Detected hotspots are matched across 57-109 degree viewpoints with 76.63% average correct matching by defining their location relative to the center of the explosion, rotating them to the alternative viewpoint, and matching them collectively. Applying 3D reconstruction to the hotspot matching completes an automated process that has been used to create 168 3D point clouds with an average of 31.6 points per reconstruction, each point accurate to 0.62 meters overall (0.35, 0.24, and 0.34 meters in the x-, y- and z-directions, respectively). As a demonstration of using the point clouds for analysis, volumes are estimated and shown to be consistent with radius-based models, in some cases improving on the level of uncertainty in the yield calculation.
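The multi-view reconstruction step can be illustrated with standard two-view linear triangulation (DLT). The projection matrices below are illustrative stand-ins, not the films' actual calibration; the second camera is rotated 90 degrees to mimic the wide-baseline viewpoints described above.

```python
# Two-view linear triangulation (DLT) of a single 3D point.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3D point from pixel coords x1, x2 and 3x4 cameras P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)     # least-squares null vector of A
    X = vt[-1]
    return X[:3] / X[3]             # dehomogenize

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                # camera at origin
R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])    # 90-degree second view
t = np.array([[-2.], [0.], [2.]])
P2 = np.hstack([R, t])

X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # recovers X_true
```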
Autocorrelation and cross-correlation in time series of homicide and attempted homicide
NASA Astrophysics Data System (ADS)
Machado Filho, A.; da Silva, M. F.; Zebende, G. F.
2014-04-01
We propose in this paper to establish the relationship between homicides and attempted homicides through non-stationary time-series analysis. The analysis is carried out by Detrended Fluctuation Analysis (DFA), Detrended Cross-Correlation Analysis (DCCA), and the DCCA cross-correlation coefficient, ρ(n). Through this analysis we identify a positive cross-correlation between homicides and attempted homicides. From the autocorrelation (DFA) point of view, the analysis is more informative depending on the time scale: on short scales (days) we cannot identify autocorrelations, on the scale of weeks DFA presents anti-persistent behavior, and on long time scales (n>90 days) DFA presents persistent behavior. Finally, the application of this new type of statistical analysis proved to be efficient and, in this sense, this paper can contribute to more accurate descriptive statistics of crime.
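For reference, a compact first-order DFA sketch of the kind used here: the scaling exponent alpha is read off a log-log fit of fluctuation against scale, with alpha < 0.5 anti-persistent, around 0.5 uncorrelated, and > 0.5 persistent. The scale choices are assumptions.

```python
# Minimal DFA-1: integrate the series, detrend per window, fit the scaling law.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))            # integrated profile
    f = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i*s:(i+1)*s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        f.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(f), 1)
    return alpha

rng = np.random.default_rng(2)
scales = np.array([8, 16, 32, 64, 128])
print(dfa(rng.normal(size=4000), scales))   # ~0.5 for white noise
```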
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-08
... series in the pilot: (1) A time series analysis of open interest; and (2) An analysis of the distribution... times the number of shares outstanding. These are summed for all 500 stocks and divided by a... below $3.00 and $0.10 for all other series. Strike price intervals would be set no less than 5 points...
Structural Analysis of Single-Point Mutations Given an RNA Sequence: A Case Study with RNAMute
NASA Astrophysics Data System (ADS)
Churkin, Alexander; Barash, Danny
2006-12-01
We introduce here for the first time the RNAMute package, a pattern-recognition-based utility to perform mutational analysis and detect vulnerable spots within an RNA sequence that affect structure. Mutations in these spots may lead to a structural change that directly relates to a change in functionality. Previously, the concept was tried on RNA genetic control elements called "riboswitches" and other known RNA switches, without an organized utility that analyzes all single-point mutations and can be further expanded. The RNAMute package allows a comprehensive categorization, given an RNA sequence that has functional relevance, by exploring the patterns of all single-point mutants. For illustration, we apply the RNAMute package on an RNA transcript for which individual point mutations were shown experimentally to inactivate spectinomycin resistance in Escherichia coli. Functional analysis of mutations on this case study was performed experimentally by creating a library of point mutations using PCR and screening to locate those mutations. With the availability of RNAMute, preanalysis can be performed computationally before conducting an experiment.
Time-frequency analysis of backscattered signals from diffuse radar targets
NASA Astrophysics Data System (ADS)
Kenny, O. P.; Boashash, B.
1993-06-01
The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. The authors discuss the time-frequency representation of the backscattered signal from a diffuse radar target. It is then shown that for point scatterers which are statistically dependent, or for which the reflectivity coefficient has a nonzero mean value, reconstruction using time-of-flight positron emission tomography techniques on time-frequency images is effective for estimating the scattering function of the target.
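A bare-bones discrete Wigner-Ville distribution can be computed directly from its definition as the Fourier transform over lag of the instantaneous autocorrelation. The sketch below is a simplified illustration applied to an analytic chirp, not the authors' radar-imaging code; real use would add analytic-signal conversion and smoothing (pseudo-WVD) to tame cross-terms.

```python
# Discrete WVD: W[t, k] = FFT over lag tau of x[t+tau] * conj(x[t-tau]).
import numpy as np

def wvd(x):
    x = np.asarray(x, dtype=complex)
    n = len(x)
    w = np.zeros((n, n), dtype=complex)
    for t in range(n):
        tau_max = min(t, n - 1 - t)              # lags that stay inside the signal
        tau = np.arange(-tau_max, tau_max + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = x[t + tau] * np.conj(x[t - tau])
        w[t] = np.fft.fft(kernel)
    return np.real(w)

# Linear chirp: the WVD concentrates energy along its instantaneous frequency
t = np.arange(256)
sig = np.exp(1j * 2 * np.pi * (0.05 * t + 0.0003 * t**2))
tfd = wvd(sig)
print(tfd.shape, tfd.argmax(axis=1)[100:105])   # ridge follows the chirp
```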
Hossner, Ernst-Joachim; Ehrlenspiel, Felix
2010-01-01
The paralysis-by-analysis phenomenon, i.e., that attending to the execution of one's movement impairs performance, has gathered a lot of attention over recent years (see Wulf, 2007, for a review). Explanations of this phenomenon, e.g., the hypotheses of constrained action (Wulf et al., 2001) or of step-by-step execution (Masters, 1992; Beilock et al., 2002), however, do not address the underlying mechanisms at the level of sensorimotor control. For this purpose, a “nodal-point hypothesis” is presented here with the core assumptions that skilled motor behavior is internally based on sensorimotor chains of nodal points, that attending to intermediate nodal points leads to a muscular re-freezing of the motor system at exactly and exclusively these points in time, and that this re-freezing is accompanied by the disruption of compensatory processes, resulting in an overall decrease of motor performance. Two experiments, on lever sequencing and basketball free throws, respectively, are reported that successfully tested these time-referenced predictions, showing that muscular activity is selectively increased and compensatory variability selectively decreased at movement-related nodal points if these points are in the focus of attention. PMID:21833285
Liu, Mei-bing; Chen, Xing-wei; Chen, Ying
2015-07-01
Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average, and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, and the degree of spatial differentiation of nitrogen loss was in the order monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At all time scales, land use types (such as farmland and forest) were the dominant factor affecting the spatial distribution of nitrogen loss, whereas precipitation and runoff affected nitrogen loss only in months without fertilization and in several storm flood events occurring on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.
Influence of boiling point range of feedstock on properties of derived mesophase pitch
NASA Astrophysics Data System (ADS)
Yu, Ran; Liu, Dong; Lou, Bin; Chen, Qingtai; Zhang, Yadong; Li, Zhiheng
2018-06-01
The composition of the raw material was optimized by vacuum distillation. The carbonization behavior of the two kinds of raw material was followed by polarizing microscopy, softening point, carbon yield and solubility. The two kinds of mesophase pitch were characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), elemental analysis and 1H nuclear magnetic resonance (1H-NMR). The analysis results suggested that raw material B (the residue remaining after 15 wt% of A was distilled off) could form large-domain mesophase pitch earlier. The shortened heat-treatment time favored the retention of alkyl groups in the mesophase pitch and reduced its softening point.
Moire technique utilization for detection and measurement of scoliosis
NASA Astrophysics Data System (ADS)
Zawieska, Dorota; Podlasiak, Piotr
1993-02-01
The moire projection method enables non-contact measurement of the shape or deformation of different surfaces and constructions by fringe pattern analysis. Acquisition of the fringe map over the whole surface of the object under test is one of the main advantages compared with 'point by point' methods. The computer analyzes the shape of the whole surface, and the user can then select different points or cross-sections of the object map. In this paper a few typical examples of applications of the moire technique to different medical problems are presented. We also present the equipment; the moire pattern analysis is done in real time using the phase-stepping method with a CCD camera.
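The phase-stepping recovery can be illustrated with the common four-step formula, phase = atan2(I4 - I2, I1 - I3), applied to four pi/2-shifted fringe images. The images below are synthetic, and the pi/2 step size is the usual assumption, not necessarily this system's configuration.

```python
# Four-step phase-stepping: recover the (wrapped) surface phase from four
# intensity images shifted by pi/2.
import numpy as np

phi = np.linspace(0, 4 * np.pi, 100).reshape(10, 10)     # synthetic "surface" phase map
I = [1.0 + 0.5 * np.cos(phi + k * np.pi / 2) for k in range(4)]

wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])           # phase modulo 2*pi
err = np.angle(np.exp(1j * (wrapped - phi)))             # wrap-aware residual
print(np.max(np.abs(err)))   # ~0: recovered phase matches the true phase mod 2*pi
```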
Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li
2018-01-01
Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to be dependent on the PET reconstruction method. This study aims to investigate the impact of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Patients with suspected alcoholic cardiomyopathy (N = 24) underwent 11C-acetate dynamic PET imaging after a low-dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in the myocardium and the blood pools of the ventricles were generated from the dynamic image series. Kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using the iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. Kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. However, in the correlation analysis, OSEM reconstruction presented relatively higher residuals in correlation with FBP reconstruction compared with TOF and TPSF reconstruction, and TOF and TPSF reconstructions were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging. TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP. OSEM was relatively less reliable. Both TOF and TPSF are recommended for cardiac 11C-acetate kinetic analysis.
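The kinetic modeling step can be sketched with a generic 1-tissue-compartment fit of K1 and k2 from a tissue time-activity curve given a plasma input function. The input curve below is synthetic, and the code is an illustration, not the study's 11C-acetate analysis pipeline.

```python
# 1-tissue-compartment model: C_T(t) = K1 * Cp(t) convolved with exp(-k2*t).
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 20, 121)          # minutes
dt = t[1] - t[0]
cp = t * np.exp(-t / 2.0)            # synthetic plasma input function

def tissue_tac(t, k1, k2):
    irf = np.exp(-k2 * t)            # impulse response of the tissue compartment
    return k1 * np.convolve(cp, irf)[:len(t)] * dt   # discrete convolution

true = (0.8, 0.15)
ct = tissue_tac(t, *true) + np.random.default_rng(3).normal(0, 0.01, len(t))
(k1, k2), _ = curve_fit(tissue_tac, t, ct, p0=(0.5, 0.1))
print(k1, k2)   # close to the true (0.8, 0.15)
```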
Micro-Computed Tomography Evaluation of Human Fat Grafts in Nude Mice
Chung, Michael T.; Hyun, Jeong S.; Lo, David D.; Montoro, Daniel T.; Hasegawa, Masakazu; Levi, Benjamin; Januszyk, Michael; Longaker, Michael T.
2013-01-01
Background Although autologous fat grafting has revolutionized the field of soft tissue reconstruction and augmentation, long-term maintenance of fat grafts is unpredictable. Recent studies have reported survival rates of fat grafts to vary anywhere between 10% and 80% over time. The present study evaluated the long-term viability of human fat grafts in a murine model using a novel imaging technique allowing for in vivo volumetric analysis. Methods Human fat grafts were prepared from lipoaspirate samples using the Coleman technique. Fat was injected subcutaneously into the scalp of 10 adult Crl:NU-Foxn1nu CD-1 male mice. Micro-computed tomography (CT) was performed immediately following injection and then weekly thereafter. Fat volume was rendered by reconstructing a three-dimensional (3D) surface through cubic-spline interpolation. Specimens were also harvested at various time points and sections were prepared and stained with hematoxylin and eosin (H&E), for macrophages using CD68 and for the cannabinoid receptor 1 (CB1). Finally, samples were explanted at 8- and 12-week time points to validate calculated micro-CT volumes. Results Weekly CT scanning demonstrated progressive volume loss over the time course. However, volumetric analysis at the 8- and 12-week time points stabilized, showing an average of 62.2% and 60.9% survival, respectively. Gross analysis showed the fat graft to be healthy and vascularized. H&E analysis and staining for CD68 showed minimal inflammatory reaction with viable adipocytes. Immunohistochemical staining with anti-human CB1 antibodies confirmed human origin of the adipocytes. Conclusions Studies assessing the fate of autologous fat grafts in animals have focused on nonimaging modalities, including histological and biochemical analyses, which require euthanasia of the animals. In this study, we have demonstrated the ability to employ micro-CT for 3D reconstruction and volumetric analysis of human fat grafts in a mouse model. Importantly, this model provides a platform for subsequent study of fat manipulation and soft tissue engineering. PMID:22916732
[Visual field progression in glaucoma: cluster analysis].
Bresson-Dumont, H; Hatton, J; Foucher, J; Fonteneau, M
2012-11-01
Visual field progression analysis is one of the key points in glaucoma monitoring, but the distinction between true progression and random fluctuation is sometimes difficult. There are several different algorithms but no real consensus for detecting visual field progression. The trend analysis of global indices (MD, sLV) may miss localized deficits or be affected by media opacities. Conversely, point-by-point analysis makes progression difficult to differentiate from physiological variability, particularly when the sensitivity of a point is already low. The goal of our study was to analyse visual field progression with the EyeSuite™ Octopus Perimetry Clusters algorithm in patients with no significant changes in global indices or worsening on pointwise linear regression analysis. We analyzed the visual fields of 162 eyes (100 patients - 58 women, 42 men, average age 66.8 ± 10.91 years) with ocular hypertension or glaucoma. For inclusion, at least six reliable visual fields per eye were required, and the trend analysis (EyeSuite™ Perimetry) of visual field global indices (MD and sLV) could show no significant progression. The analysis of changes in cluster mode was then performed. In a second step, eyes with statistically significant worsening of at least one of their clusters were analyzed point-by-point with the Octopus Field Analysis (OFA). Fifty-four eyes (33.33%) had significant worsening in some clusters while their global indices remained stable over time. In this group of patients, glaucoma was more advanced than in the stable group (MD 6.41 dB vs. 2.87 dB); 64.82% (35/54) of the eyes in which the clusters progressed, however, had no statistically significant change in the trend analysis by pointwise linear regression. Most software algorithms for analyzing visual field progression are essentially trend analyses of global indices or point-by-point linear regressions. This study shows the potential role of cluster trend analysis. However, for best results, it is preferable to compare the analyses of several tests in combination with a morphologic examination. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Thermal pretreatment of a high lignin SSF digester residue to increase its softening point
Howe, Daniel; Garcia-Perez, Manuel; Taasevigen, Danny; ...
2016-03-24
Residues high in lignin and ash generated from the simultaneous saccharification and fermentation of corn stover were thermally pretreated in an inert (N2) atmosphere to study the effect of time and temperature on their softening points. These residues are difficult to feed into gasifiers due to premature thermal degradation and formation of reactive liquids in the feed lines, leading to plugging. The untreated and treated residues were characterized by proximate and ultimate analysis, and then analyzed via TGA, DSC, 13C NMR, Py-GC–MS, CHNO/S, and TMA. Interpretation of the compositional analysis indicates that the weight loss observed during pretreatment is mainly due to the thermal decomposition and volatilization of the hemicelluloses and amorphous cellulose fractions. Fixed carbon increases in the pretreated material, mostly due to a concentration effect rather than the formation of new extra poly-aromatic material. The optimal processing time and temperature to minimize the production of carbonyl groups in the pretreated samples was 300 °C at a time of 30 min. Results showed that the softening point of the material could be increased from 187 °C to 250 °C, and that under the experimental conditions studied, pretreatment temperature plays a more important role than time. The increase in softening point was mainly due to the formation of covalent bonds in the lignin structures and the removal of low molecular weight volatile intermediates.
[Local fractal analysis of noise-like time series by all permutations method for 1-115 min periods].
Panchelyuga, V A; Panchelyuga, M S
2015-01-01
Results of local fractal analysis of 329-per-day time series of 239Pu alpha-decay rate fluctuations by means of the all permutations method (APM) are presented. The APM analysis reveals a steady set of frequencies in the time series. The coincidence of this frequency set with the Earth's natural oscillations is demonstrated. A short review of works by different authors who analyzed time series of fluctuations in processes of different nature is given. We show that the periods observed in those works correspond to the periods revealed in our study. This points to a common mechanism underlying the observed phenomenon.
A new method for mapping multidimensional data to lower dimensions
NASA Technical Reports Server (NTRS)
Gowda, K. C.
1983-01-01
A multispectral mapping method is proposed which is based on the new concept of BEND (Bidimensional Effective Normalised Difference). The method, which involves taking one sample point at a time and finding the interrelationships between its features, is found to be very economical in terms of storage and processing time. It has good dimensionality reduction and clustering properties, and is highly suitable for computer analysis of large amounts of data. The transformed values obtained by this procedure are suitable either for a planar 2-space mapping of geological sample points or for making grayscale and color images of geo-terrains. A few examples are given to demonstrate the efficacy of the proposed procedure.
Osterndorff-Kahanek, Elizabeth A.; Becker, Howard C.; Lopez, Marcelo F.; Farris, Sean P.; Tiwari, Gayatri R.; Nunez, Yury O.; Harris, R. Adron; Mayfield, R. Dayne
2015-01-01
Repeated ethanol exposure and withdrawal in mice increases voluntary drinking and represents an animal model of physical dependence. We examined time- and brain region-dependent changes in gene coexpression networks in amygdala (AMY), nucleus accumbens (NAC), prefrontal cortex (PFC), and liver after four weekly cycles of chronic intermittent ethanol (CIE) vapor exposure in C57BL/6J mice. Microarrays were used to compare gene expression profiles at 0, 8, and 120 hours following the last ethanol exposure. Each brain region exhibited a large number of differentially expressed genes (2,000-3,000) at the 0- and 8-hour time points, but fewer changes were detected at the 120-hour time point (400-600). Within each region, there was little gene overlap across time (~20%). All brain regions were significantly enriched with differentially expressed immune-related genes at the 8-hour time point. Weighted gene correlation network analysis identified modules that were highly enriched with differentially expressed genes at the 0- and 8-hour time points with virtually no enrichment at 120 hours. Modules enriched for both ethanol-responsive and cell-specific genes were identified in each brain region. These results indicate that chronic alcohol exposure causes global 'rewiring' of coexpression systems involving glial and immune signaling as well as neuronal genes. PMID:25803291
Piecewise multivariate modelling of sequential metabolic profiling data.
Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan
2008-02-19
Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach with the objective of modelling the time-related variation in the data for short and sparsely sampled time-series is described. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models is estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time-related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.
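A simplified stand-in for the piecewise idea is sketched below using ordinary PLS (scikit-learn provides no OPLS): one linear latent-variable model is fitted per pair of successive time points, so the combination captures a time course that is non-linear overall. The data shapes and component count are assumptions for illustration.

```python
# Piecewise latent-variable modelling: one PLS model per successive
# time-point pair, chained to follow the full trajectory.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_subjects, n_metabolites, n_timepoints = 20, 50, 4
X = rng.normal(size=(n_timepoints, n_subjects, n_metabolites))

models = []
for k in range(n_timepoints - 1):
    # Model the change between time point k and k+1
    pls = PLSRegression(n_components=2)
    pls.fit(X[k], X[k + 1])
    models.append(pls)

# Predict the trajectory of new subjects one segment at a time
x = rng.normal(size=(5, n_metabolites))
for k, pls in enumerate(models):
    x = pls.predict(x)
    print(f"segment {k}->{k+1}: predicted shape {x.shape}")
```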
Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V
2003-12-15
Reducing water to hydrogen gas with zinc or uranium metal to determine the D/H ratio is both tedious and time-consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both the time and labor required for D/H ratio determination. In this study, we compared TEE obtained from nine overweight but healthy subjects, estimated using the traditional Zn-reduction method, to that obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained with the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared to that of similar methods. The data demonstrate that the Zn-reduction method can be replaced by the Pt-equilibration method when TEE is estimated using the "multi-point" technique. Furthermore, D equilibration time was significantly reduced.
The detection and analysis of point processes in biological signals
NASA Technical Reports Server (NTRS)
Anderson, D. J.; Correia, M. J.
1977-01-01
A pragmatic approach to the detection and analysis of discrete events in biomedical signals is taken. Examples from both clinical and basic research are provided. Introductory sections discuss not only discrete events which are easily extracted from recordings by conventional threshold detectors but also events embedded in other information carrying signals. The primary considerations are factors governing event-time resolution and the effects limits to this resolution have on the subsequent analysis of the underlying process. The analysis portion describes tests for qualifying the records as stationary point processes and procedures for providing meaningful information about the biological signals under investigation. All of these procedures are designed to be implemented on laboratory computers of modest computational capacity.
Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Philipsen, Iwan
2007-01-01
This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. One features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the literature of experiment design as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German Dutch Wind Tunnels (DNW) have applied MDOE methods to evaluate them in the calibration of a balance using an automated calibration machine. The data have been sent to Langley Research Center for analysis and comparison. This paper reports key findings from this analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.
Schmidt, Mark E; Chiao, Ping; Klein, Gregory; Matthews, Dawn; Thurfjell, Lennart; Cole, Patricia E; Margolin, Richard; Landau, Susan; Foster, Norman L; Mason, N Scott; De Santi, Susan; Suhy, Joyce; Koeppe, Robert A; Jagust, William
2015-09-01
In vivo imaging of amyloid burden with positron emission tomography (PET) provides a means for studying the pathophysiology of Alzheimer's and related diseases. Measurement of subtle changes in amyloid burden requires quantitative analysis of image data. Reliable quantitative analysis of amyloid PET scans acquired at multiple sites and over time requires rigorous standardization of acquisition protocols, subject management, tracer administration, image quality control, and image processing and analysis methods. We review critical points in the acquisition and analysis of amyloid PET, identify ways in which technical factors can contribute to measurement variability, and suggest methods for mitigating these sources of noise. Improved quantitative accuracy could reduce the sample size necessary to detect intervention effects when amyloid PET is used as a treatment end point and allow more reliable interpretation of change in amyloid burden and its relationship to clinical course. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Ricker, Martin; Peña Ramírez, Víctor M.; von Rosen, Dietrich
2014-01-01
Growth curves are monotonically increasing functions obtained by measuring the same subjects repeatedly over time. The classical growth curve model in the statistical literature is the Generalized Multivariate Analysis of Variance (GMANOVA) model. In order to model the tree trunk radius (r) over time (t) of trees on different sites, GMANOVA is combined here with the adapted PL regression model Q = A·T + E, where Q and T are transformed radius and time variables, A is the initial relative growth to be estimated, and E is an error term for each tree and time point; the transformation involves the turning point radius (TPR) of the sigmoid curve and an estimated calibrating time-radius point. Advantages of the approach are that growth rates can be compared among growth curves with different turning point radiuses and different starting points, hidden outliers are easily detectable, the method is statistically robust, and heteroscedasticity of the residuals among time points is allowed. The model was implemented with dendrochronological data of 235 Pinus montezumae trees on ten Mexican volcano sites to calculate comparison intervals for the estimated initial relative growth. One site (at the Popocatépetl volcano) stood out, its estimate being 3.9 times the value of the site with the slowest-growing trees. Calculating variance components for the initial relative growth, 34% of the growth variation was found among sites, 31% among trees, and 35% over time. Without the Popocatépetl site, the numbers changed to 7%, 42%, and 51%. Further explanation of differences in growth would need to focus on factors that vary within sites and over time. PMID:25402427
Tochigi, Mamoru; Usami, Satoshi; Matamura, Misato; Kitagawa, Yuko; Fukushima, Masako; Yonehara, Hiromi; Togo, Fumiharu; Nishida, Atsushi; Sasaki, Tsukasa
2016-01-01
To investigate the longitudinal relationship between sleep habits and mental health in adolescents. Multipoint observation data spanning up to five years were employed from a prospective cohort study of sleep habits and mental health status conducted from 2009 to 2013 in a unified junior and senior high school (grades 7-12) in Tokyo, Japan. A total of 1078 students answered a self-report questionnaire, including items on usual bed and wake-up times on school days, and the Japanese version of the 12-item General Health Questionnaire (GHQ-12). Latent growth model (LGM) analysis, which requires data from three or more time points, showed that longitudinal changes in bedtime and GHQ-12 score (or the score for depression/anxiety) were significantly and moderately correlated (correlation coefficient = 0.510, p < 0.05). Another result of interest was that, in an autoregressive cross-lagged (ARCL) model, bedtime and the depression/anxiety score had reciprocal effects across years: i.e., bedtime significantly affects the following year's depression/anxiety, and vice versa. In addition, the analysis provided estimates of the mutually predicted changes: a one-hour bedtime delay may worsen the GHQ-12 score by 0.2 points, and a one-point worsening of the score may delay bedtime by 2.2 minutes. By using up to five time points of data, the present study confirms the correlational and reciprocally longitudinal relationship between bedtime delay and mental health status in Japanese adolescents. The results indicate that preventing late bedtime may have a significant effect on improving mental health in adolescents. Copyright © 2015 Elsevier B.V. All rights reserved.
Validation of accelerometer cut points in toddlers with and without cerebral palsy.
Oftedal, Stina; Bell, Kristie L; Davies, Peter S W; Ware, Robert S; Boyd, Roslyn N
2014-09-01
The purpose of this study was to validate uni- and triaxial ActiGraph cut points for sedentary time in toddlers with cerebral palsy (CP) and typically developing children (TDC). Children (n = 103, 61 boys, mean age = 2 yr, SD = 6 months, range = 1 yr 6 months-3 yr) were divided into calibration (n = 65) and validation (n = 38) samples with separate analyses for TDC (n = 28) and ambulant (Gross Motor Function Classification System I-III, n = 51) and nonambulant (Gross Motor Function Classification System IV-V, n = 25) children with CP. An ActiGraph was worn during a videotaped assessment. Behavior was coded as sedentary or nonsedentary. Receiver operating characteristic-area under the curve analysis determined the classification accuracy of accelerometer data. Predictive validity was determined using the Bland-Altman analysis. Classification accuracy for uniaxial data was fair for the ambulatory CP and TDC group but poor for the nonambulatory CP group. Triaxial data showed good classification accuracy for all groups. The uniaxial ambulatory CP and TDC cut points significantly overestimated sedentary time (bias = -10.5%, 95% limits of agreement [LoA] = -30.2% to 9.1%; bias = -17.3%, 95% LoA = -44.3% to 8.3%). The triaxial ambulatory and nonambulatory CP and TDC cut points provided accurate group-level measures of sedentary time (bias = -1.5%, 95% LoA = -20% to 16.8%; bias = 2.1%, 95% LoA = -17.3% to 21.5%; bias = -5.1%, 95% LoA = -27.5% to 16.1%). Triaxial accelerometers provide useful group-level measures of sedentary time in children with CP across the spectrum of functional abilities and TDC. Uniaxial cut points are not recommended.
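For readers unfamiliar with the validity statistics quoted above, here is a minimal sketch of how the Bland-Altman bias and 95% limits of agreement are computed; the sedentary-time percentages below are synthetic, not the study's data.

```python
# Bland-Altman agreement: mean difference (bias) and 95% limits of agreement.
import numpy as np

rng = np.random.default_rng(10)
observed = rng.uniform(30, 80, 40)                    # % time sedentary, video-coded
estimated = observed + rng.normal(-2.0, 9.0, 40)      # cut-point estimate with bias

diff = observed - estimated
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"bias = {bias:.1f}%, 95% LoA = [{loa[0]:.1f}%, {loa[1]:.1f}%]")
```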
Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher
2012-01-01
Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
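Of the four categories, control chart methods are the simplest to sketch. Below is a hypothetical Shewhart-style example for flagging sessions whose alliance scores leave baseline control limits; the baseline window and 2-sigma limits are assumptions, not the Beth Israel program's specification.

```python
# Shewhart-style control chart: flag sessions outside baseline control limits.
import numpy as np

def control_chart_flags(scores, baseline_n=8, k=2.0):
    scores = np.asarray(scores, dtype=float)
    mu = scores[:baseline_n].mean()
    sd = scores[:baseline_n].std(ddof=1)
    upper, lower = mu + k * sd, mu - k * sd
    # Sessions outside the control limits are candidate rupture/repair points
    return np.where((scores > upper) | (scores < lower))[0]

alliance = np.r_[np.random.default_rng(5).normal(5.0, 0.3, 12), [3.6, 3.8],
                 np.random.default_rng(6).normal(5.1, 0.3, 6)]
print(control_chart_flags(alliance))   # the induced dip at sessions 12-13 is flagged
```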
Mudaliar, Manikhandan; Tassi, Riccardo; Thomas, Funmilola C.; McNeilly, Tom N.; Weidt, Stefan K.; McLaughlin, Mark; Wilson, David; Burchmore, Richard; Herzyk, Pawel; Eckersall, P. David
2016-01-01
Mastitis, inflammation of the mammary gland, is the most common and costly disease of dairy cattle in the western world. It is primarily caused by bacteria, with Streptococcus uberis as one of the most prevalent causative agents. To characterize the proteome during Streptococcus uberis mastitis, an experimentally induced model of intramammary infection was used. Milk whey samples obtained from 6 cows at 6 time points were processed using label-free relative quantitative proteomics. This proteomic analysis complements clinical, bacteriological and immunological studies as well as peptidomic and metabolomic analysis of the same challenge model. A total of 2552 non-redundant bovine peptides were identified, and from these, 570 bovine proteins were quantified. Hierarchical cluster analysis and principal component analysis showed clear clustering of results by stage of infection, with similarities between pre-infection and resolution stages (0 and 312 h post challenge), early infection stages (36 and 42 h post challenge) and late infection stages (57 and 81 h post challenge). Ingenuity pathway analysis identified upregulation of acute phase protein pathways over the course of infection, with dominance of different acute phase proteins at different time points based on differential expression analysis. Antimicrobial peptides, notably cathelicidins and peptidoglycan recognition protein, were upregulated at all time points post challenge and peaked at 57 h, which coincided with 10 000-fold decrease in average bacterial counts. The integration of clinical, bacteriological, immunological and quantitative proteomics and other-omic data provides a more detailed systems level view of the host response to mastitis than has been achieved previously. PMID:27412694
NASA Astrophysics Data System (ADS)
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm, employing a combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step updating rule for the design point. This part finishes after a small number of samples have been generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a proposed effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are shown.
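The importance sampling ingredient can be sketched in standard normal space: the sampling density is a unit normal shifted to the design point u*, and each failure sample is weighted by the ratio of the true to the shifted density. This is a generic illustration of the technique, not the paper's two-step updating algorithm.

```python
# Design-point-centered importance sampling for a probability of failure.
import numpy as np
from scipy import stats

def importance_sampling_pf(g, u_star, n=5000, seed=7):
    d = len(u_star)
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n, d)) + u_star          # samples centered at the design point
    # weight = phi(u) / h(u), with h the shifted unit normal
    log_w = stats.norm.logpdf(u).sum(1) - stats.norm.logpdf(u - u_star).sum(1)
    fails = g(u) <= 0.0                           # limit state g <= 0 means failure
    return np.mean(np.exp(log_w) * fails)

# Linear limit state g(u) = beta - u1, exact Pf = Phi(-beta)
beta = 3.0
g = lambda u: beta - u[:, 0]
u_star = np.array([beta, 0.0])                    # design point on the limit state
print(importance_sampling_pf(g, u_star), stats.norm.cdf(-beta))
```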
Horesh, Danny; Lowe, Sarah R; Galea, Sandro; Uddin, Monica; Koenen, Karestan C
2015-01-01
Posttraumatic stress disorder (PTSD) and depression are known to be highly comorbid. However, previous findings regarding the nature of this comorbidity have been inconclusive. This study prospectively examined whether PTSD and depression are distinct constructs in an epidemiologic sample, as well as assessed the directionality of the PTSD-depression association across time. Nine hundred and forty-two Detroit residents (males: n = 387; females: n = 555) were interviewed by phone at three time points, 1 year apart. At each time point, they were assessed for PTSD (using the PCL-C), depression (PHQ-9), trauma exposure, and stressful life events. First, a confirmatory factor analysis showed PTSD and depression to be two distinct factors at all three waves of assessments (W1, W2, and W3). Second, chi-square analysis detected significant differences between observed and expected rates of comorbidity at each time point, with significantly more no-disorder and comorbid cases, and significantly fewer PTSD only and depression only cases, than would be expected by chance alone. Finally, a cross-lagged analysis revealed a bidirectional association between PTSD and depression symptoms across time for the entire sample, as well as for women separately, wherein PTSD symptoms at an early wave predicted later depression symptoms, and vice versa. For men, however, only the paths from PTSD symptoms to subsequent depression symptoms were significant. Across time, PTSD and depression are distinct, but correlated, constructs among a highly-exposed epidemiologic sample. Women and men differ in both the risk of these conditions, and the nature of the long-term associations between them. © 2014 Wiley Periodicals, Inc.
Sastre, Salvador; Fernández Torija, Carlos; Carbonell, Gregoria; Rodríguez Martín, José Antonio; Beltrán, Eulalia María; González-Doncel, Miguel
2018-02-01
A diet fortified with 2,2',4,4'-tetrabromodiphenyl ether (BDE-47: 0, 10, 100, and 1000 ng/g) was fed to 4-7-day-old post-hatch medaka fish for 40 days to evaluate the effects on swimming activity using a miniaturized swimming flume. Chlorpyrifos (CF)-exposed fish served as the positive control to assess the validity and sensitivity of the behavioral findings. After 20 and 40 days of exposure, locomotor activity was analyzed for 6 min in a flume section (arena). The CF positive controls for each time point were fish exposed to 50 ng CF/ml for 48 h. Swimming patterns, presented as two-dimensional heat maps of fish movement and positioning, were obtained by geostatistical analyses. The heat maps of the control groups at time point 20 revealed swimming patterns visually comparable to those of the BDE-47-treated groups. For the comparative fish positioning analysis, both arenas were divided into 15 proportional areas. No statistical differences were found between residence times in the areas for the control groups and those for the BDE-47-treated groups. At time point 40, the overall heat map patterns of the control groups differed visually from that of the 100-ng BDE-47/g-treated group, but a comparative analysis of the residence times in the corresponding 15 areas did not reveal consistent differences. The relative distances traveled by the control and treated groups at time points 20 and 40 were also comparable. The heat maps of CF-treated fish at both time points showed contrasting swim patterns with respect to those of the controls. These differential patterns were statistically supported by differences in the residence times for different areas. The relative distances traveled by the CF-treated fish were also significantly shorter. These results confirm the validity of the experimental design and indicate that dietary BDE-47 exposure does not affect forced swimming in medaka at growing stages. Copyright © 2017 Elsevier Ltd. All rights reserved.
How should Fitts' Law be applied to human-computer interaction?
NASA Technical Reports Server (NTRS)
Gillan, D. J.; Holden, K.; Adam, S.; Rudisill, M.; Magee, L.
1992-01-01
The paper challenges the notion that any Fitts' Law model can be applied generally to human-computer interaction, and proposes instead that applying Fitts' Law requires knowledge of the users' sequence of movements, direction of movement, and typical movement amplitudes as well as target sizes. Two experiments examined a text selection task with sequences of controlled movements (point-click and point-drag). For the point-click sequence, a Fitts' Law model that used the diagonal across the text object in the direction of pointing (rather than the horizontal extent of the text object) as the target size provided the best fit for the pointing time data, whereas for the point-drag sequence, a Fitts' Law model that used the vertical size of the text object as the target size gave the best fit. Dragging times were fitted well by Fitts' Law models that used either the vertical or horizontal size of the terminal character in the text object. Additional results of note were that pointing in the point-click sequence was consistently faster than in the point-drag sequence, and that pointing in either sequence was consistently faster than dragging. The discussion centres around the need to define task characteristics before applying Fitts' Law to an interface design or analysis, analyses of pointing and of dragging, and implications for interface design.
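For concreteness, a Fitts' Law model of the kind compared here can be fitted by ordinary least squares on the index of difficulty. The sketch below uses the conventional ID = log2(2A/W) formulation with fabricated timing data; the paper's point is precisely that the appropriate target size term depends on the movement sequence.

```python
# Fitting Fitts' Law, MT = a + b * ID with ID = log2(2A/W).
import numpy as np

A = np.array([4.0, 8.0, 16.0, 32.0])      # movement amplitudes (cm)
W = np.array([2.0, 2.0, 1.0, 1.0])        # target widths (cm)
MT = np.array([0.42, 0.55, 0.74, 0.90])   # observed movement times (s), fabricated

ID = np.log2(2 * A / W)                    # index of difficulty (bits)
b, a = np.polyfit(ID, MT, 1)
print(f"MT = {a:.3f} + {b:.3f} * ID; throughput ~ {1/b:.1f} bits/s")
```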
Efficient Delaunay Tessellation through K-D Tree Decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because the resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using the k-d tree compared with regular grid decomposition. Moreover, on the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
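The decomposition idea can be sketched in a few lines: recursively split the points at a chosen coordinate (a median split here, one plausible strategy) so each process receives an even share regardless of how clustered the input is. This is an illustration of the concept, not the authors' parallel implementation.

```python
# Recursive k-d decomposition with median splits for load balancing.
import numpy as np

def kd_partition(points, depth=0, max_depth=3):
    if depth == max_depth or len(points) <= 1:
        return [points]
    axis = depth % points.shape[1]             # cycle through dimensions
    order = np.argsort(points[:, axis])
    mid = len(points) // 2                     # median split -> balanced halves
    left, right = points[order[:mid]], points[order[mid:]]
    return kd_partition(left, depth + 1, max_depth) + kd_partition(right, depth + 1, max_depth)

rng = np.random.default_rng(8)
clustered = np.r_[rng.normal(0, 0.1, (900, 3)), rng.normal(5, 1.0, (100, 3))]
blocks = kd_partition(clustered)
print([len(b) for b in blocks])   # even block sizes despite the unbalanced input
```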
Liu, Bao; Fan, Xiaoming; Huo, Shengnan; Zhou, Lili; Wang, Jun; Zhang, Hui; Hu, Mei; Zhu, Jianhua
2011-12-01
A method was established to analyse overlapped chromatographic peaks based on chromatographic-spectral data from a diode-array ultraviolet detector. In the method, the three-dimensional data are first de-noised and normalized; second, the differences and clustering of the spectra at different time points are calculated; then the purity of the whole chromatographic peak is analysed and the region is identified in which the spectra at different time points are stable. The feature spectra are extracted from this spectrum-stable region as the basic foundation. The nonnegative least-squares method is used to separate the overlapped peaks and obtain the elution profile corresponding to each feature spectrum. The three-dimensional resolved chromatographic-spectral peaks can then be obtained by matrix operations combining the feature spectra with the elution profiles. The results showed that this method could separate the overlapped peaks.
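The core step can be sketched as follows: at each retention time, the measured spectrum is decomposed onto the extracted feature spectra by nonnegative least squares, giving one elution profile per component. The data below are synthetic, and the feature spectra are assumed known.

```python
# NNLS unmixing of two overlapped chromatographic peaks.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
n_wl, n_t = 60, 120
S = np.abs(rng.normal(size=(n_wl, 2)))                 # two feature spectra (columns)
t = np.arange(n_t)
C_true = np.c_[np.exp(-0.5 * ((t - 50) / 8.0) ** 2),   # two overlapping elution peaks
               np.exp(-0.5 * ((t - 62) / 8.0) ** 2)]
D = S @ C_true.T + 0.01 * rng.normal(size=(n_wl, n_t)) # data: wavelengths x time

# Solve one small NNLS problem per time point to recover the elution profiles
C_est = np.array([nnls(S, D[:, j])[0] for j in range(n_t)])
print(np.abs(C_est - C_true).max())   # recovered profiles match the true ones
```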
Key Microbiota Identification Using Functional Gene Analysis during Pepper (Piper nigrum L.) Peeling
Zhang, Jiachao; Hu, Qisong; Xu, Chuanbiao; Liu, Sixin; Li, Congfa
2016-01-01
Pepper pericarp microbiota plays an important role in the pepper peeling process for the production of white pepper. We collected pepper samples at different peeling time points from Hainan Province, China, and used a metagenomic approach to identify changes in the pericarp microbiota based on functional gene analysis. UniFrac distance-based principal coordinates analysis revealed significant changes in the pericarp microbiota structure during peeling, which were attributed to increases in bacteria from the genera Selenomonas and Prevotella. We identified 28 core operational taxonomic units at each time point, mainly belonging to Selenomonas, Prevotella, Megasphaera, Anaerovibrio, and Clostridium genera. The results were confirmed by quantitative polymerase chain reaction. At the functional level, we observed significant increases in microbial features related to acetyl xylan esterase and pectinesterase for pericarp degradation during peeling. These findings offer a new insight into biodegradation for pepper peeling and will promote the development of the white pepper industry. PMID:27768750
Third-Order Memristive Morris-Lecar Model of Barnacle Muscle Fiber
NASA Astrophysics Data System (ADS)
Rajamani, Vetriveeran; Sah, Maheshwar Pd.; Mannan, Zubaer Ibna; Kim, Hyongsuk; Chua, Leon
This paper presents a detailed analysis of various oscillatory behaviors observed in relation to the calcium and potassium ions in the third-order Morris-Lecar model of the giant barnacle muscle fiber. Since both the calcium and potassium ions exhibit all of the characteristics of memristor fingerprints, we claim that the time-varying calcium and potassium ions in the third-order Morris-Lecar model are actually time-invariant calcium and potassium memristors in the third-order memristive Morris-Lecar model. We confirmed the existence of a small unstable limit cycle oscillation in both the second-order and the third-order Morris-Lecar models by numerically calculating the basin of attraction of the asymptotically stable equilibrium point associated with two subcritical Hopf bifurcation points. We also describe a comprehensive analysis of the generation of oscillations in the third-order memristive Morris-Lecar model via small-signal circuit analysis and a subcritical Hopf bifurcation phenomenon.
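As context, the classic second-order Morris-Lecar equations can be integrated directly; the sketch below uses a commonly cited parameter set (an assumption, not necessarily the paper's) in the oscillatory regime near the Hopf bifurcation. The third-order variant discussed above adds one further state equation.

```python
# Second-order Morris-Lecar model: membrane voltage v and K-activation w.
import numpy as np
from scipy.integrate import solve_ivp

C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0
VL, VCa, VK = -60.0, 120.0, -84.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I_ext = 90.0

def morris_lecar(t, y):
    v, w = y
    m_inf = 0.5 * (1 + np.tanh((v - V1) / V2))    # fast Ca activation
    w_inf = 0.5 * (1 + np.tanh((v - V3) / V4))    # K activation steady state
    tau_w = 1.0 / np.cosh((v - V3) / (2 * V4))
    dv = (I_ext - gL*(v - VL) - gCa*m_inf*(v - VCa) - gK*w*(v - VK)) / C
    dw = phi * (w_inf - w) / tau_w
    return [dv, dw]

sol = solve_ivp(morris_lecar, (0, 500), [-60.0, 0.0], max_step=0.5)
print(sol.y[0].min(), sol.y[0].max())   # sustained oscillation in membrane voltage
```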
Sensitivity analysis of water consumption in an office building
NASA Astrophysics Data System (ADS)
Suchacek, Tomas; Tuhovcak, Ladislav; Rucka, Jan
2018-02-01
This article deals with a sensitivity analysis of real water consumption in an office building. During a long-term real-world study, a reduction of the pressure in its water connection was simulated. A sensitivity analysis of uneven water demand during working hours was conducted at various supplied pressures and various time-step durations. Correlations between the maximal coefficients of water demand variation during working hours and the supplied pressure are suggested. The influence of the supplied pressure in the water connection on the mean coefficients of water demand variation is pointed out, both for the working hours of all days together and separately for days with identical working hours.
MSL: A Measure to Evaluate Three-dimensional Patterns in Gene Expression Data
Gutiérrez-Avilés, David; Rubio-Escudero, Cristina
2015-01-01
Microarray technology is widely used in biological research environments due to its ability to monitor RNA concentration levels. The analysis of the data generated represents a computational challenge due to the characteristics of these data. Clustering techniques are widely applied to create groups of genes that exhibit similar behavior. Biclustering relaxes the constraints for grouping, allowing genes to be evaluated only under a subset of the conditions. Triclustering appears for the analysis of longitudinal experiments in which the genes are evaluated under certain conditions at several time points. These triclusters provide hidden information in the form of behavior patterns from temporal microarray experiments relating subsets of genes, experimental conditions, and time points. We present an evaluation measure for triclusters called the Multi Slope Measure, based on the similarity among the angles of the slopes of the profiles formed by the genes, conditions, and time points of the tricluster. PMID:26124630
Identification of bearing faults using time domain zero-crossings
NASA Astrophysics Data System (ADS)
William, P. E.; Hoffman, M. W.
2011-11-01
In this paper, zero-crossing characteristic features are employed for early detection and identification of single point bearing defects in rotating machinery. As a result of bearing defects, characteristic defect frequencies appear in the machine vibration signal, normally requiring spectral analysis or envelope analysis to identify the defect type. Zero-crossing features are extracted directly from the time domain vibration signal using only the duration between successive zero-crossing intervals and do not require estimation of the rotational frequency. The features are a time domain representation of the composite vibration signature in the spectral domain. Features are normalized by the length of the observation window and classification is performed using a multilayer feedforward neural network. The model was evaluated on vibration data recorded using an accelerometer mounted on an induction motor housing subjected to a number of single point defects with different severity levels.
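The zero-crossing features described above are straightforward to prototype. Below is a minimal Python sketch for a single vibration window; the bin count and histogram range are illustrative choices, not the authors' settings:

```python
import numpy as np

def zero_crossing_features(x, n_bins=16, max_norm_interval=0.02):
    """Histogram of durations between successive zero-crossings,
    normalized by the observation window length (illustrative bins)."""
    x = np.asarray(x, float)
    crossings = np.where(np.diff(np.signbit(x)))[0]   # sign-change indices
    intervals = np.diff(crossings)                    # samples between crossings
    if intervals.size == 0:
        return np.zeros(n_bins)
    hist, _ = np.histogram(intervals / len(x), bins=n_bins,
                           range=(0, max_norm_interval))
    return hist / max(hist.sum(), 1)   # feature vector for the neural network
```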
Lee, Tzu-Hsien
2005-12-01
This study examined the effects of operating a built-in touch-pad pointing device and a trackball mouse on participants' completion times, hand positions during operation, postural angles, and muscle activities. Eight young men were asked to perform a cursor travel task on a notebook computer at both 60- and 80-cm table heights. Analysis showed that the trackball mouse significantly decreased completion times. Participants selected a hand position farther from the table edge and a larger elbow angle for the trackball mouse than for the built-in touch-pad pointing device. Participants' neck, thoracic, and arm angles, and their splenius capitis, trapezius, deltoid, and erector spinae muscle activities, were not significantly affected by the devices, but table height significantly affected participants' completion times, hand positions, and postural angles.
Lopes, M J
1997-01-01
This essay discusses recent transformations in hospital work, and in nursing work specifically. The analysis privileges the inter- and intra-relations of the multidisciplinary teams whose practices constitute the therapeutic process present in hospital space-time.
Advanced analysis of forest fire clustering
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail; Pereira, Mario; Golay, Jean
2017-04-01
Analysis of point pattern clustering is an important topic in spatial statistics and for many applications: biodiversity, epidemiology, natural hazards, geomarketing, etc. There are several fundamental approaches used to quantify spatial data clustering using topological, statistical and fractal measures. In the present research, the recently introduced multi-point Morisita index (mMI) is applied to study the spatial clustering of forest fires in Portugal. The data set consists of more than 30000 fire events covering the time period from 1975 to 2013. The distribution of forest fires is very complex and highly variable in space. mMI is a multi-point extension of the classical two-point Morisita index. In essence, mMI is estimated by covering the region under study by a grid and by computing how many times more likely it is that m points selected at random will be from the same grid cell than it would be in the case of a complete random Poisson process. By changing the number of grid cells (size of the grid cells), mMI characterizes the scaling properties of spatial clustering. From mMI, the data intrinsic dimension (fractal dimension) of the point distribution can be estimated as well. In this study, the mMI of forest fires is compared with the mMI of random patterns (RPs) generated within the validity domain defined as the forest area of Portugal. It turns out that the forest fires are highly clustered inside the validity domain in comparison with the RPs. Moreover, they demonstrate different scaling properties at different spatial scales. The results obtained from the mMI analysis are also compared with those of fractal measures of clustering - box counting and sand box counting approaches. REFERENCES Golay J., Kanevski M., Vega Orozco C., Leuenberger M., 2014: The multipoint Morisita index for the analysis of spatial patterns. Physica A, 406, 191-202. Golay J., Kanevski M. 2015: A new estimator of intrinsic dimension based on the multipoint Morisita index. Pattern Recognition, 48, 4070-4081.
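Following the counting definition sketched in the abstract (and in the cited Golay et al., 2014), the mMI can be prototyped in a few lines. A sketch assuming points rescaled to the unit square; for m = 2 it reduces to the classical Morisita index:

```python
import numpy as np

def multipoint_morisita(points, cells_per_axis, m=2):
    """m-point Morisita index on a grid of cells_per_axis^2 cells."""
    points = np.asarray(points, float)                        # in [0, 1]^2
    q = cells_per_axis ** 2                                   # number of cells
    ix = np.clip((points * cells_per_axis).astype(int), 0, cells_per_axis - 1)
    _, counts = np.unique(ix[:, 0] * cells_per_axis + ix[:, 1],
                          return_counts=True)                 # occupied cells
    n = len(points)
    num = sum(np.prod([c - k for k in range(m)]) for c in counts)
    den = np.prod([n - k for k in range(m)], dtype=float)
    return q ** (m - 1) * num / den    # ~1 for a homogeneous Poisson pattern

# scaling analysis across cell sizes:
# [multipoint_morisita(pts, q, m=3) for q in (4, 8, 16, 32)]
```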
Simulation of stochastic wind action on transmission power lines
NASA Astrophysics Data System (ADS)
Wielgos, Piotr; Lipecki, Tomasz; Flaga, Andrzej
2018-01-01
The paper presents FEM analysis of the wind action on overhead transmission power lines. The wind action is based on a stochastic simulation of the wind field in several points of the structure and on the wind tunnel tests on aerodynamic coefficients of the single conductor consisting of three wires. In FEM calculations the section of the transmission power line composed of three spans is considered. Non-linear analysis with deadweight of the structure is performed first to obtain the deformed shape of conductors. Next, time-dependent wind forces are applied to respective points of conductors and non-linear dynamic analysis is carried out.
Development of a short version of the modified Yale Preoperative Anxiety Scale.
Jenkins, Brooke N; Fortier, Michelle A; Kaplan, Sherrie H; Mayes, Linda C; Kain, Zeev N
2014-09-01
The modified Yale Preoperative Anxiety Scale (mYPAS) is the current "criterion standard" for assessing child anxiety during induction of anesthesia and has been used in >100 studies. This observational instrument covers 5 items and is typically administered at 4 perioperative time points. Application of this complex instrument in busy operating room (OR) settings, however, presents a challenge. In this investigation, we examined whether the instrument could be modified and made easier to use in OR settings. This study used qualitative methods, principal component analyses, Cronbach αs, and effect sizes to create the mYPAS-Short Form (mYPAS-SF) and reduce time points of assessment. Data were obtained from multiple patients (N = 3798; mean age = 5.63 years) who were recruited in previous investigations using the mYPAS over the past 15 years. After qualitative analysis, the "use of parent" item was eliminated due to content overlap with other items. The reduced item set accounted for 82% or more of the variance in child anxiety and produced Cronbach α values of at least 0.92. To reduce the number of assessment time points, a minimum Cohen d effect size criterion of 0.48 change in mYPAS score across time points was used. This led to eliminating the walk to the OR and entrance to the OR time points. Reducing the mYPAS to 4 items, creating the mYPAS-SF that can be administered at 2 time points, retained the accuracy of the measure while allowing the instrument to be more easily used in clinical research settings.
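For reference, the two statistics driving the item and time-point reduction are standard and easy to compute; a minimal sketch (not the study's code):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (observations x items) score matrix."""
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    return k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                          / scores.sum(axis=1).var(ddof=1))

def cohens_d(a, b):
    """Cohen's d between mYPAS scores at two time points (pooled SD)."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1)
                      + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled
```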
Burnstein, Bryan D; Steele, Russell J; Shrier, Ian
2011-01-01
Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Cohort study. Eighteen different Cirque du Soleil shows. Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. We calculated the intraclass correlation coefficient (ICC) and limits of agreement between baseline and each time point and the ICC over all 4 time points combined. Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limit-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators.
Role of delay and screening in controlling AIDS
NASA Astrophysics Data System (ADS)
Chauhan, Sudipa; Bhatia, Sumit Kaur; Gupta, Surbhi
2016-06-01
We propose a non-linear HIV/AIDS model to analyse the spread and control of HIV/AIDS. The population is divided into three classes: susceptible, infective and AIDS patients. The model is developed under the assumptions of vertical transmission and time delay in the infective class; the time delay represents the sexual maturity period of infected newborns. We study the dynamics of the model and obtain the reproduction number. To control the epidemic, we then study the model with an aware infective class added, i.e., people are made aware of their medical status by way of screening. To make the model more realistic, we consider the situation where the aware infective class also interacts with other people. The model is analysed qualitatively by the stability theory of ODEs. Stability analysis of both the disease-free and endemic equilibria is carried out in terms of the reproduction number. It is proved that if (R0)1, R1 ≤ 1, then the disease-free equilibrium point is locally asymptotically stable, and if (R0)1, R1 > 1, then the disease-free equilibrium is unstable. Stability analysis of the endemic equilibrium point shows that it is stable for (R0)1 > 1, and its global stability is also established. Finally, it is shown numerically that the delay in sexual maturity of infected individuals results in fewer AIDS patients.
Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-01-01
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry with a reduced amount of time and ready to be used with structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made by voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978
Human/Automation Trade Methodology for the Moon, Mars and Beyond
NASA Technical Reports Server (NTRS)
Korsmeyer, David J.
2009-01-01
It is possible to create a consistent trade methodology that can characterize operations-model alternatives for crewed exploration missions. For example, a trade-space organized around the objective of maximizing Crew Exploration Vehicle (CEV) independence would take as input a classification of the category of analysis to be conducted or decision to be made, and a commitment to a specific point in a mission profile during which the analysis or decision is to be made. For example, does the decision have to do with crew activity planning, or life support? Is the mission phase trans-Earth injection, cruise, or lunar descent? Different kinds of decision analysis of the trade-space between human and automated decisions will occur at different points in a mission's profile. The objectives at a given point in time during a mission will call for different kinds of response with respect to where and how computers and automation are expected to help provide an accurate, safe, and timely response. In this paper, a consistent methodology for assessing the trades between human and automated decisions on board is presented and various examples are discussed.
Effect of hand paddles and parachute on butterfly coordination.
Telles, Thiago; Barroso, Renato; Barbosa, Augusto Carvalho; Salgueiro, Diego Fortes de Souza; Colantonio, Emilson; Andries Júnior, Orival
2015-01-01
This study investigated the effects of hand paddles, parachute and hand paddles plus parachute on the inter-limb coordination of butterfly swimming. Thirteen male swimmers were evaluated in four random maximal intensity conditions: without equipment, with hand paddles, with parachute and with hand paddles + parachute. Arm and leg stroke phases were identified by 2D video analysis to calculate the total time gap (T1: time between hands' entry in the water and high break-even point of the first undulation; T2: time between the beginning of the hand's backward movement and low break-even point of the first undulation; T3: time between the hand's arrival in a vertical plane to the shoulders and high break-even point of the second undulation; T4: time between the hand's release from the water and low break-even point of the second undulation). The swimming velocity was reduced and T1, T2 and T3 increased in parachute and hand paddles + parachute. No changes were observed in T4. Total time gap decreased in parachute and hand paddles + parachute. It is concluded that hand paddles do not influence the arm-to-leg coordination in butterfly, while parachute and hand paddles + parachute do change it, providing a greater propulsive continuity.
García Vicente, A M; Soriano Castrejón, A; Cruz Mora, M Á; Ortega Ruiperez, C; Espinosa Aunión, R; León Martín, A; González Ageitos, A; Van Gómez López, O
2014-01-01
To assess the accuracy of dual-time-point 2-deoxy-2-[18F]fluoro-D-glucose (18F-FDG) PET-CT in nodal staging and in the detection of extra-axillary involvement. Dual-time-point 18F-FDG PET/CT was performed in 75 patients. Visual and semiquantitative assessment of lymph nodes was performed. Semiquantitative measurement of SUV and ROC analysis were carried out to calculate the SUVmax cut-off value with the best diagnostic performance. Axillary and extra-axillary lymph node chains were evaluated. Sensitivity and specificity of visual assessment were 87.3% and 75%, respectively. The SUVmax values with the best sensitivity were 0.90 and 0.95 for early and delayed PET, respectively. The SUVmax values with the best specificity were 1.95 and 2.75, respectively. Extra-axillary lymph node involvement was detected in 26.7%. FDG PET/CT detected extra-axillary lymph node involvement in one-fourth of the patients. Semiquantitative lymph node analysis did not show any advantage over the visual evaluation.
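The SUVmax cut-off selection is a standard ROC exercise; a hedged sketch of the idea using scikit-learn (the study's exact criteria for the "best sensitivity" and "best specificity" cut-offs are not fully specified in the abstract):

```python
import numpy as np
from sklearn.metrics import roc_curve

def suvmax_cutoff(node_positive, suv_max):
    """Pick the SUVmax threshold maximizing Youden's J = sens + spec - 1."""
    fpr, tpr, thresholds = roc_curve(node_positive, suv_max)
    j = tpr - fpr                               # Youden's J per threshold
    best = np.argmax(j)
    return thresholds[best], tpr[best], 1 - fpr[best]  # cutoff, sens, spec
```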
ICESAT Laser Altimeter Pointing, Ranging and Timing Calibration from Integrated Residual Analysis
NASA Technical Reports Server (NTRS)
Luthcke, Scott B.; Rowlands, D. D.; Carabajal, C. C.; Harding, D. H.; Bufton, J. L.; Williams, T. A.
2003-01-01
On January 12, 2003 the Ice, Cloud and land Elevation Satellite (ICESat) was successfully placed into orbit. The ICESat mission carries the Geoscience Laser Altimeter System (GLAS), whose primary measurement is short-pulse laser ranging to the Earth's surface at 1064 nm wavelength at a rate of 40 pulses per second. The instrument has collected precise elevation measurements of the ice sheets, sea ice roughness and thickness, ocean and land surface elevations, and surface reflectivity. The accurate geolocation of GLAS's surface returns, the spots from which the laser energy reflects on the Earth's surface, is a critical issue in the scientific application of these data. Pointing, ranging, timing and orbit errors must be compensated to accurately geolocate the laser altimeter surface returns. Towards this end, the laser range observations can be fully exploited in an integrated residual analysis to accurately calibrate these geolocation/instrument parameters. ICESat laser altimeter data have been simultaneously processed as direct altimetry from ocean sweeps along with dynamic crossovers in order to calibrate pointing, ranging and timing. The calibration methodology and current calibration results are discussed along with future efforts.
Bacci, Elizabeth D; Staniewska, Dorota; Coyne, Karin S; Boyer, Stacey; White, Leigh Ann; Zach, Neta; Cedarbaum, Jesse M
2016-01-01
Our objective was to examine dimensionality and item-level performance of the Amyotrophic Lateral Sclerosis Functional Rating Scale-Revised (ALSFRS-R) across time using classical and modern test theory approaches. Confirmatory factor analysis (CFA) and Item Response Theory (IRT) analyses were conducted using data from patients with amyotrophic lateral sclerosis (ALS) in the Pooled Resources Open-Access ALS Clinical Trials (PRO-ACT) database with complete ALSFRS-R data (n = 888) at three time points (Time 0, Time 1 at 6 months, and Time 2 at 1 year). In this population of 888 patients, mean age was 54.6 years, 64.4% were male, and 93.7% were Caucasian. The CFA supported a 4-domain structure (bulbar, gross motor, fine motor, and respiratory domains). IRT analysis within each domain revealed misfitting items and overlapping item response category thresholds at all time points, particularly in the gross motor and respiratory domain items. The results indicate that many items of the ALSFRS-R may sub-optimally distinguish among varying levels of disability assessed by each domain, particularly in patients with less severe disability. Measure performance improved across time as patient disability severity increased. In conclusion, modifications to select ALSFRS-R items may improve the instrument's specificity to disability level and sensitivity to treatment effects.
Autoregressive modeling for the spectral analysis of oceanographic data
NASA Technical Reports Server (NTRS)
Gangopadhyay, Avijit; Cornillon, Peter; Jackson, Leland B.
1989-01-01
Over the last decade there has been a dramatic increase in the number and volume of data sets useful for oceanographic studies. Many of these data sets consist of long temporal or spatial series derived from satellites and large-scale oceanographic experiments. These data sets are, however, often 'gappy' in space, irregular in time, and always of finite length. The conventional Fourier transform (FT) approach to spectral analysis is thus often inapplicable, or where applicable, it provides questionable results. Here, through comparative analysis with the FT for different oceanographic data sets, the possibilities offered by autoregressive (AR) modeling for the spectral analysis of gappy, finite-length series are discussed. The applications demonstrate that as the length of the time series becomes shorter, the resolving power of the AR approach improves relative to that of the FT. For the longest data sets examined here, 98 points, the AR method performed only slightly better than the FT, but for the very short ones, 17 points, the AR method showed a dramatic improvement over the FT. The application of the AR method to a gappy time series, although a secondary concern of this manuscript, further underlines the value of this approach.
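A common way to realize the AR spectral estimate discussed above is via the Yule-Walker equations; a sketch (not the authors' implementation) using numpy/scipy:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_spectrum(x, order, n_freq=256):
    """AR power spectrum of a short series via Yule-Walker estimation."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    a = solve_toeplitz(r[:-1], r[1:])            # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:])             # innovation variance
    freqs = np.linspace(0, 0.5, n_freq)          # cycles per sample
    steer = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    psd = sigma2 / np.abs(1 - steer @ a) ** 2    # AR(p) spectral density
    return freqs, psd
```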
Jerosch-Herold, Christina
2003-06-01
A longitudinal dynamic cohort study was conducted on patients with median nerve injuries to evaluate the relative responsiveness of five sensibility tests: touch threshold using the WEST (monofilaments), static two-point discrimination, locognosia, a pick-up test and an object recognition test. Repeated assessments were performed starting at 6 months after surgery. In order to compare the relative responsiveness of each test, the effect size and the standardized response mean were calculated for sensibility changes occurring between 6 and 18 months after repair. Large effect sizes (>0.8) and standardized response means (>0.8) were obtained for the WEST, locognosia, pick-up and object recognition tests. Two-point discrimination was hardly measurable at any time point and exhibited strong floor effects. Further analysis of all time points was undertaken to assess the strength of the monotonic relationship between test scores and time elapsed since surgery. Comparison of monotonicity between the five tests indicated that the WEST performed best, whereas two-point discrimination performed worst. These results suggest that the monofilament test (WEST), locognosia test, Moberg pick-up test and tactile gnosis test capture sensibility changes over time well and should be considered for inclusion in the outcome assessment of patients with median nerve injury.
Donor Behavior and Voluntary Support for Higher Education Institutions.
ERIC Educational Resources Information Center
Leslie, Larry L.; Ramey, Garey
Voluntary support of higher education in America is investigated through regression analysis of institutional characteristics at two points in time. The assumption of donor rationality together with explicit consideration of interorganizational relationships offers a coherent framework for the analysis of voluntary support by the major…
Seeking a fingerprint: analysis of point processes in actigraphy recording
NASA Astrophysics Data System (ADS)
Gudowska-Nowak, Ewa; Ochab, Jeremi K.; Oleś, Katarzyna; Beldzik, Ewa; Chialvo, Dante R.; Domagalik, Aleksandra; Fąfrowicz, Magdalena; Marek, Tadeusz; Nowak, Maciej A.; Ogińska, Halszka; Szwed, Jerzy; Tyburczyk, Jacek
2016-05-01
Motor activity of humans displays complex temporal fluctuations which can be characterised by scale-invariant statistics, thus demonstrating that structure and fluctuations of such kinetics remain similar over a broad range of time scales. Previous studies on humans regularly deprived of sleep or suffering from sleep disorders predicted a change in the invariant scale parameters with respect to those for healthy subjects. In this study we investigate the signal patterns from actigraphy recordings by means of characteristic measures of fractional point processes. We analyse spontaneous locomotor activity of healthy individuals recorded during a week of regular sleep and a week of chronic partial sleep deprivation. Behavioural symptoms of lack of sleep can be evaluated by analysing statistics of duration times during active and resting states, and alteration of behavioural organisation can be assessed by analysis of power laws detected in the event count distribution, distribution of waiting times between consecutive movements and detrended fluctuation analysis of recorded time series. We claim that among different measures characterising complexity of the actigraphy recordings and their variations implied by chronic sleep distress, the exponents characterising slopes of survival functions in resting states are the most effective biomarkers distinguishing between healthy and sleep-deprived groups.
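Of the measures listed, detrended fluctuation analysis is the simplest to sketch; a minimal DFA-1 implementation, with illustrative window sizes:

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis of an activity series; the slope
    of log F(s) vs log s is the scaling exponent."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))   # integrated profile
    flucts = []
    for s in scales:
        n_win = len(y) // s
        segs = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        msq = []
        for seg in segs:                               # linear detrend per window
            coef = np.polyfit(t, seg, 1)
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(msq)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope            # ~0.5 for white noise, ~1.0 for 1/f fluctuations
```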
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Spencer; Rodrigues, George, E-mail: george.rodrigues@lhsc.on.ca; Department of Epidemiology/Biostatistics, University of Western Ontario, London
2013-01-01
Purpose: To perform a rigorous technological assessment and statistical validation of a software technology for anatomic delineations of the prostate on MRI datasets. Methods and Materials: A 3-phase validation strategy was used. Phase I consisted of anatomic atlas building using 100 prostate cancer MRI data sets to provide training data sets for the segmentation algorithms. In phase II, 2 experts contoured 15 new MRI prostate cancer cases using 3 approaches (manual, N points, and region of interest). In phase III, 5 new physicians with variable MRI prostate contouring experience segmented the same 15 phase II datasets using 3 approaches: manual, N points with no editing, and full autosegmentation with user editing allowed. Statistical analyses for time and accuracy (using Dice similarity coefficient) endpoints used traditional descriptive statistics, analysis of variance, analysis of covariance, and pooled Student t test. Results: In phase I, average (SD) total and per slice contouring time for the 2 physicians was 228 (75), 17 (3.5), 209 (65), and 15 seconds (3.9), respectively. In phase II, statistically significant differences in physician contouring time were observed based on physician, type of contouring, and case sequence. The N points strategy resulted in superior segmentation accuracy when initial autosegmented contours were compared with final contours. In phase III, statistically significant differences in contouring time were observed based on physician, type of contouring, and case sequence again. The average relative timesaving for N points and autosegmentation were 49% and 27%, respectively, compared with manual contouring. The N points and autosegmentation strategies resulted in average Dice values of 0.89 and 0.88, respectively. Pre- and postedited autosegmented contours demonstrated a higher average Dice similarity coefficient of 0.94. Conclusion: The software provided robust contours with minimal editing required. Observed time savings were seen for all physicians irrespective of experience level and baseline manual contouring speed.
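The accuracy endpoint above is the Dice similarity coefficient, which for two binary contour masks is twice the overlap divided by the total mask size; a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())    # 1.0 = perfect agreement
```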
NASA Astrophysics Data System (ADS)
Liu, P.
2013-12-01
Quantitative analysis of the risk of reservoir real-time operation is a hard task owing to the difficulty of accurately describing inflow uncertainties. Ensemble-based hydrologic forecasts depict the inflows directly, capturing not only the marginal distributions but also their persistence via scenarios. This motivates us to analyze the reservoir real-time operating risk with ensemble-based hydrologic forecasts as inputs. A method is developed that uses the forecast horizon point to divide the future time into two stages, the forecast lead-time and the unpredicted time. The risk within the forecast lead-time is computed by counting the number of failing forecast scenarios, and the risk in the unpredicted time is estimated using reservoir routing with the design floods and the reservoir water levels at the forecast horizon point. As a result, a two-stage risk analysis method is set up to quantify the entire flood risk, defined as the ratio of the number of scenarios that exceed the critical value to the total number of scenarios. China's Three Gorges Reservoir (TGR) is selected as a case study, where parameter and precipitation uncertainties are implemented to produce ensemble-based hydrologic forecasts. Bayesian inference, via Markov Chain Monte Carlo, is used to account for the parameter uncertainty. Two reservoir operation schemes, the real operated and a scenario optimization, are evaluated for flood risk and hydropower profit. With the 2010 flood, it is found that improving the hydrologic forecast accuracy does not necessarily decrease the reservoir real-time operation risk, and most of the risk comes from the forecast lead-time. It is therefore valuable to decrease the variance of ensemble-based hydrologic forecasts while keeping their bias small for reservoir operational purposes.
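The lead-time risk defined above, the ratio of failing scenarios to the total number of scenarios, reduces to a one-line count over the forecast ensemble; a sketch with an illustrative critical level:

```python
import numpy as np

def lead_time_risk(ensemble_levels, critical_level):
    """Fraction of ensemble scenarios whose simulated water level
    exceeds the critical value anywhere in the forecast lead-time."""
    ensemble_levels = np.asarray(ensemble_levels)   # (n_scenarios, n_steps)
    failures = (ensemble_levels > critical_level).any(axis=1)
    return failures.mean()

# e.g. lead_time_risk(simulated_levels, critical_level=175.0)  # illustrative
```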
Tinkelman, Igor; Melamed, Timor
2005-06-01
In Part I of this two-part investigation [J. Opt. Soc. Am. A 22, 1200 (2005)], we presented a theory for phase-space propagation of time-harmonic electromagnetic fields in an anisotropic medium characterized by a generic wave-number profile. In this Part II, these investigations are extended to transient fields, setting a general analytical framework for local analysis and modeling of radiation from time-dependent extended-source distributions. In this formulation the field is expressed as a superposition of pulsed-beam propagators that emanate from all space-time points in the source domain and in all directions. Using time-dependent quadratic-Lorentzian windows, we represent the field by a phase-space spectral distribution in which the propagating elements are pulsed beams, which are formulated by a transient plane-wave spectrum over the extended-source plane. By applying saddle-point asymptotics, we extract the beam phenomenology in the anisotropic environment resulting from short-pulsed processing. Finally, the general results are applied to the special case of uniaxial crystal and compared with a reference solution.
Brain MRI volumetry in a single patient with mild traumatic brain injury.
Ross, David E; Castelvecchi, Cody; Ochs, Alfred L
2013-01-01
This letter to the editor describes the case of a 42 year old man with mild traumatic brain injury and multiple neuropsychiatric symptoms which persisted for a few years after the injury. Initial CT scans and MRI scans of the brain showed no signs of atrophy. Brain volume was measured using NeuroQuant®, an FDA-approved, commercially available software method. Volumetric cross-sectional (one point in time) analysis also showed no atrophy. However, volumetric longitudinal (two points in time) analysis showed progressive atrophy in several brain regions. This case illustrated in a single patient the principle discovered in multiple previous group studies, namely that the longitudinal design is more powerful than the cross-sectional design for finding atrophy in patients with traumatic brain injury.
Multifractality and Network Analysis of Phase Transition
Li, Wei; Yang, Chunbin; Han, Jihui; Su, Zhu; Zou, Yijiang
2017-01-01
Many models and real complex systems possess critical thresholds at which the systems shift dramatically from one state to another. The discovery of early warnings in the vicinity of critical points is of great importance for estimating how far a system is from its critical state. Multifractal detrended fluctuation analysis (MF-DFA) and the visibility graph method have been employed to investigate the multifractal and geometrical properties of the magnetization time series of the two-dimensional Ising model. Multifractality of the time series near the critical point has been uncovered from the generalized Hurst exponents and the singularity spectrum. Both long-term correlation and a broad probability density function are identified as the sources of multifractality. The heterogeneous nature of the networks constructed from the magnetization time series validates the fractal properties. The evolution of the topological quantities of the visibility graph, along with the variation of multifractality, serves as a new early warning of phase transition. These methods and results may provide new insights into the analysis of phase transition problems and can be used as early warnings for a variety of complex systems. PMID:28107414
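Of the two tools named above, the natural visibility graph is simple to construct: each time point is a node, and two points are linked when no intermediate sample blocks the straight line between them. An O(n^2) sketch:

```python
import numpy as np

def visibility_edges(y):
    """Edge list of the natural visibility graph of a time series."""
    y = np.asarray(y, float)
    n, edges = len(y), []
    for a in range(n):
        for b in range(a + 1, n):
            # line of sight between samples a and b, evaluated in between
            line = y[a] + (y[b] - y[a]) * (np.arange(a + 1, b) - a) / (b - a)
            if np.all(y[a + 1:b] < line):       # vacuously true for neighbors
                edges.append((a, b))
    return edges    # the degree distribution of this graph feeds the analysis
```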
A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.
1991-01-01
A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
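The core of the polyphase-FFT scheme, replacing the multiplicative window with a polyphase filter, fits in a few lines. A floating-point sketch (the flight hardware described above uses fixed-point arithmetic, which this ignores), with an illustrative sinc-times-Hann prototype filter:

```python
import numpy as np

def polyphase_fft(block, prototype, n_channels):
    """One frame of a polyphase-FFT channelizer: weight a block of
    n_channels * P samples by the prototype filter, fold the P taps
    together, then take an n_channels-point FFT."""
    taps = len(prototype) // n_channels              # P taps per channel
    weighted = block[:taps * n_channels] * prototype
    folded = weighted.reshape(taps, n_channels).sum(axis=0)
    return np.fft.fft(folded)

# usage with a simple prototype low-pass, 4 taps per channel:
# M, P = 1024, 4
# n = np.arange(M * P)
# h = np.sinc((n - M * P / 2) / M) * np.hanning(M * P)
# spectrum = polyphase_fft(samples[:M * P], h, M)
```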
Kessel, Sarah; Cribbes, Scott; Bonasu, Surekha; Rice, William; Qiu, Jean; Chan, Leo Li-Ying
2017-09-01
The development of three-dimensional (3D) multicellular tumor spheroid models for cancer drug discovery research has increased in the recent years. The use of 3D tumor spheroid models may be more representative of the complex in vivo tumor microenvironments in comparison to two-dimensional (2D) assays. Currently, viability of 3D multicellular tumor spheroids has been commonly measured on standard plate-readers using metabolic reagents such as CellTiter-Glo® for end point analysis. Alternatively, high content image cytometers have been used to measure drug effects on spheroid size and viability. Previously, we have demonstrated a novel end point drug screening method for 3D multicellular tumor spheroids using the Celigo Image Cytometer. To better characterize the cancer drug effects, it is important to also measure the kinetic cytotoxic and apoptotic effects on 3D multicellular tumor spheroids. In this work, we demonstrate the use of PI and caspase 3/7 stains to measure viability and apoptosis for 3D multicellular tumor spheroids in real-time. The method was first validated by staining different types of tumor spheroids with PI and caspase 3/7 and monitoring the fluorescent intensities for 16 and 21 days. Next, PI-stained and nonstained control tumor spheroids were digested into single cell suspension to directly measure viability in a 2D assay to determine the potential toxicity of PI. Finally, extensive data analysis was performed on correlating the time-dependent PI and caspase 3/7 fluorescent intensities to the spheroid size and necrotic core formation to determine an optimal starting time point for cancer drug testing. The ability to measure real-time viability and apoptosis is highly important for developing a proper 3D model for screening tumor spheroids, which can allow researchers to determine time-dependent drug effects that usually are not captured by end point assays. This would improve the current tumor spheroid analysis method to potentially better identify more qualified cancer drug candidates for drug discovery research.
Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis
NASA Astrophysics Data System (ADS)
Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song
2018-01-01
To resolve the problem of slow computation and low matching accuracy in image registration, a new image registration algorithm based on a parallax constraint and clustering analysis is proposed. First, the Harris corner detection algorithm is used to extract feature points from the two images. Second, the Normalized Cross Correlation (NCC) function is used for approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes feature point pairs with obvious errors from the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final matching result, realizing fast and accurate image registration. The experimental results show that the proposed algorithm improves matching accuracy while preserving real-time performance.
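A condensed sketch of the pipeline using OpenCV; the K-means parallax-constraint pre-filter is omitted for brevity, and the patch size and RANSAC threshold are illustrative:

```python
import cv2
import numpy as np

def register(img1, img2, patch=11, max_pts=300):
    """Harris corners -> NCC patch matching -> RANSAC homography."""
    g1, g2 = (cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in (img1, img2))
    p1 = cv2.goodFeaturesToTrack(g1, max_pts, 0.01, 10, useHarrisDetector=True)
    p2 = cv2.goodFeaturesToTrack(g2, max_pts, 0.01, 10, useHarrisDetector=True)
    p1, p2 = p1.reshape(-1, 2), p2.reshape(-1, 2)

    def descriptors(gray, pts):          # zero-mean, unit-norm patch vectors
        r = patch // 2
        out = []
        for x, y in pts.astype(int):
            p = gray[max(y - r, 0):y + r + 1,
                     max(x - r, 0):x + r + 1].astype(float).ravel()
            p = np.resize(p, patch * patch)
            p -= p.mean()
            out.append(p / (np.linalg.norm(p) + 1e-9))
        return np.array(out)

    d1, d2 = descriptors(g1, p1), descriptors(g2, p2)
    ncc = d1 @ d2.T                      # NCC score matrix
    pairs = np.argmax(ncc, axis=1)       # best match per corner in image 1
    H, inliers = cv2.findHomography(p1, p2[pairs], cv2.RANSAC, 3.0)
    return H, inliers
```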
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2015-11-01
The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than by ones with Mexican-hat-type activation functions. In addition, unlike most existing multistability results for neural networks with monotonic activation functions, the 3^n locally stable equilibrium points obtained here are located both in saturated regions and unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations.
Human detection and motion analysis at security points
NASA Astrophysics Data System (ADS)
Ozer, I. Burak; Lv, Tiehan; Wolf, Wayne H.
2003-08-01
This paper presents a real-time video surveillance system for the recognition of specific human activities. Specifically, the proposed automatic motion analysis is used as an on-line alarm system to detect abnormal situations in a campus environment. A smart multi-camera system developed at Princeton University is extended for use in smart environments in which the camera detects the presence of multiple persons as well as their gestures and their interaction in real-time.
Choi, Youngshim; Hur, Cheol-Goo; Park, Taesun
2013-01-01
The pathophysiological mechanisms underlying the development of obesity and metabolic diseases are not well understood. To gain more insight into the genetic mediators associated with the onset and progression of diet-induced obesity and metabolic diseases, we studied the molecular changes in response to a high-fat diet (HFD) by using a mode-of-action by network identification (MNI) analysis. Oligo DNA microarray analysis was performed on visceral and subcutaneous adipose tissues and muscles of male C57BL/6N mice fed a normal diet or HFD for 2, 4, 8, and 12 weeks. Each of these data was queried against the MNI algorithm, and the lists of top 5 highly ranked genes and gene ontology (GO)-annotated pathways that were significantly overrepresented among the 100 highest ranked genes at each time point in the 3 different tissues of mice fed the HFD were considered in the present study. The 40 highest ranked genes identified by MNI analysis at each time point in the different tissues of mice with diet-induced obesity were subjected to clustering based on their temporal patterns. On the basis of the above-mentioned results, we investigated the sequential induction of distinct olfactory receptors and the stimulation of cancer-related genes during the development of obesity in both adipose tissues and muscles. The top 5 genes recognized using the MNI analysis at each time point and gene cluster identified based on their temporal patterns in the peripheral tissues of mice provided novel and often surprising insights into the potential genetic mediators for obesity progression.
van Emmerik, Arnold A P; Hamaker, Ellen L
2017-09-04
This study investigated whether Vincent van Gogh became increasingly self-focused, and thus vulnerable to depression, towards the end of his life, through a quantitative analysis of his written pronoun use over time. A change-point analysis was conducted on the time series formed by the pronoun use in Van Gogh's letters. We used time as a predictor to see whether there was evidence for increased self-focus towards the end of Van Gogh's life, and we compared this to the pattern in the letters written before his move to Arles. Specifically, we examined Van Gogh's use of first person singular pronouns (FPSP) and first person plural pronouns (FPPP) in the 415 letters he wrote while working as an artist before his move to Arles, and in the next 248 letters he wrote after his move to Arles until his death in Auvers-sur-Oise. During the latter period, Van Gogh's use of FPSP showed an annual increase of 0.68% (SE = 0.15, p < 0.001) and his use of FPPP showed an annual decrease of 0.23% (SE = 0.04, p < 0.001), indicating increasing self-focus and vulnerability to depression. This trend differed from Van Gogh's pronoun use in the former period (which showed no significant trend in FPSP, and an annual increase of FPPP of 0.03%, SE = 0.02, p = 0.04). This study suggests that Van Gogh's death was preceded by a gradually increasing self-focus and vulnerability to depression. It also illustrates how existing methods (i.e., quantitative linguistic analysis and change-point analysis) can be combined to study specific research questions in innovative ways.
NASA Astrophysics Data System (ADS)
Lodhi, Ehtisham; Lodhi, Zeeshan; Noman Shafqat, Rana; Chen, Fieda
2017-07-01
Photovoltaic (PV) systems usually employ maximum power point tracking (MPPT) techniques to increase their efficiency. The performance of a PV system can be boosted by operating it at its maximum power point, so that maximal power is delivered to the load. The efficiency of a PV system depends on irradiance, temperature and array architecture. A PV array exhibits a non-linear V-I curve, and the maximum power point on the V-P curve shifts with changing environmental conditions. MPPT methods guarantee that a PV module is regulated at a reference voltage that extracts the maximum output power. This paper presents a comparison between the two most widely employed MPPT techniques, Perturb and Observe (P&O) and Incremental Conductance (INC). Their performance is evaluated and compared through theoretical analysis and digital simulation in Matlab/Simulink, on the basis of response time and efficiency under varying irradiance and temperature conditions.
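For reference, one textbook iteration of each technique (a sketch; the step size and the INC tolerance are illustrative, and real controllers adapt them):

```python
def perturb_and_observe(v, i, v_prev, p_prev, step=0.5):
    """P&O: keep perturbing the reference voltage in the direction
    that last increased power."""
    p = v * i
    if (p - p_prev) * (v - v_prev) > 0:
        v_ref = v + step        # power rose while voltage rose: keep going
    else:
        v_ref = v - step        # otherwise reverse the perturbation
    return v_ref, p

def incremental_conductance(v, i, v_prev, i_prev, step=0.5, tol=1e-3):
    """INC: at the MPP dP/dV = 0, i.e. dI/dV = -I/V; compare incremental
    and instantaneous conductance to pick the perturbation direction."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        return v if di == 0 else (v + step if di > 0 else v - step)
    slope = di / dv + i / v     # proportional to dP/dV
    if abs(slope) < tol:
        return v                # already at the maximum power point
    return v + step if slope > 0 else v - step
```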
SGR 1822-1606: Constant Spin Period
NASA Astrophysics Data System (ADS)
Serim, M.; Baykal, A.; Inam, S. C.
2011-08-01
We have analyzed the light curve of the new source SGR 1822-1606 (Cummings et al., GCN 12159) using real-time RXTE observation data. We extracted light curves for 11 pointings spanning about 20 days and performed pulse timing analysis using a harmonic representation of the pulses. By cross-correlating the harmonic representations of the pulses, we obtained pulse arrival times.
Urban Growth Detection Using Filtered Landsat Dense Time Trajectory in an Arid City
NASA Astrophysics Data System (ADS)
Ye, Z.; Schneider, A.
2014-12-01
Among remote sensing environment monitoring techniques, time series analysis of biophysical indices is drawing increasing attention. Although many such studies address forest disturbance and land cover change detection, few focus on urban growth mapping at medium spatial resolution. As the Landsat archive has become openly accessible, methods using Landsat time-series imagery to detect urban growth have become possible. The time trajectory of a newly developed urban area shows a dramatic drop in vegetation index. This enables time trajectory analysis to distinguish impervious surfaces from crop land, which has a different temporal biophysical pattern. The time of change can also be estimated, yet many challenges remain. Landsat data have lower temporal resolution, which is degraded further by cloud-contaminated pixels and the SLC-off effect. It is also difficult to tease apart intra-annual variation, inter-annual variation, and land cover differences in a time series. Here, several methods of time trajectory analysis are applied and compared to find a computationally efficient and accurate way to detect urban growth. Ankara, Turkey, is chosen as a case study city for its arid climate and varied landscape. For preliminary research, Landsat TM and ETM+ scenes from 1998 to 2002 are chosen, with NDVI, EVI, and SAVI as the biophysical indices. The procedure starts with seasonality filtering: only areas with seasonality need to be filtered, so as to decompose the seasonal component and extract the overall trend. A harmonic transform, a wavelet transform, and a pre-defined bell-shape filter are used to estimate the overall trend in the time trajectory for each pixel. A point with a significant drop in the trajectory is tagged as a change point. After an urban change is detected, forward and backward checking is undertaken to make sure it is really new urban expansion rather than short-term crop fallow or forest disturbance. The method proposed here captures most of the urban growth during the study period, although the change date is determined somewhat less accurately.
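A minimal sketch of the trajectory test on a single pixel's filtered NDVI series, with illustrative thresholds for the drop magnitude and the forward-persistence check:

```python
import numpy as np

def urban_change_point(ndvi, dates, drop=0.3, persist=4):
    """Flag a pixel as new urban land when its NDVI trajectory shows a
    large, sustained drop; returns the estimated change date or None."""
    ndvi = np.asarray(ndvi, float)
    d = np.diff(ndvi)
    t = np.argmin(d)                            # steepest single-step drop
    if -d[t] < drop:
        return None                             # no significant drop
    post = ndvi[t + 1:t + 1 + persist]          # forward check: low state
    if len(post) >= persist and np.all(post < ndvi[:t + 1].mean() - drop / 2):
        return dates[t + 1]                     # persists, so not crop fallow
    return None
```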
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function, based on a model of human response over time, is used to estimate long-term health outcomes from a change point in short-term linear regression. This estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. The estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model is that the initial treatment response of a small group or a single subject is reflected in the long-term response to treatment. The model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, selected because of the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight-loss medication phentermine or placebo. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
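Our reading of the estimator is that outcomes are stepped forward by a ratio of Weibull densities anchored at the change point; treat the following as an illustrative sketch rather than the paper's exact formula:

```python
from scipy.stats import weibull_min

def spre_step(y_prev, t_prev, t_next, shape, scale):
    """One forward step: next outcome = prior outcome times a ratio of
    Weibull pdfs, with shape/scale estimated at the change point."""
    f = weibull_min(c=shape, scale=scale).pdf
    return y_prev * f(t_next) / f(t_prev)

# iterate from the change point to project long-term weight loss:
# y = [y_at_change_point]
# for t0, t1 in zip(times[:-1], times[1:]):
#     y.append(spre_step(y[-1], t0, t1, shape_hat, scale_hat))
```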
Reviving common standards in point-count surveys for broad inference across studies
Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.
2014-01-01
We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and analysis by regional and national data centers.
NASA Technical Reports Server (NTRS)
Sahai, Ranjana; Pierce, Larry; Cicolani, Luigi; Tischler, Mark
1998-01-01
Helicopter slung load operations are common in both military and civil contexts. The slung load adds load rigid body modes, sling stretching, and load aerodynamics to the system dynamics, which can degrade system stability and handling qualities, and reduce the operating envelope of the combined system below that of the helicopter alone. Further, the effects of the load on system dynamics vary significantly among the large range of loads, slings, and flight conditions that a utility helicopter will encounter in its operating life. In this context, military helicopters and loads are often qualified for slung load operations via flight tests which can be time consuming and expensive. One way to reduce the cost and time required to carry out these tests and generate quantitative data more readily is to provide an efficient method for analysis during the flight, so that numerous test points can be evaluated in a single flight test, with evaluations performed in near real time following each test point and prior to clearing the aircraft to the next point. Methodology for this was implemented at Ames and demonstrated in slung load flight tests in 1997 and was improved for additional flight tests in 1999. The parameters of interest for the slung load tests are aircraft handling qualities parameters (bandwidth and phase delay), stability margins (gain and phase margin), and load pendulum roots (damping and natural frequency). A procedure for the identification of these parameters from frequency sweep data was defined using the CIFER software package. CIFER is a comprehensive interactive package of utilities for frequency domain analysis previously developed at Ames for aeronautical flight test applications. It has been widely used in the US on a variety of aircraft, including some primitive flight time analysis applications.
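Once a broken-loop frequency response has been identified from the sweep data, the stability margins follow from the crossover frequencies. A generic sketch (not CIFER's internal algorithm), assuming magnitude and phase both decrease monotonically over the band of interest:

```python
import numpy as np

def stability_margins(freq, mag_db, phase_deg):
    """Gain and phase margins from an identified broken-loop
    frequency response (arrays over an ascending frequency grid)."""
    # phase margin: 180 deg + phase at the gain crossover (|G| = 0 dB)
    w_gc = np.interp(0.0, -mag_db, freq)
    phase_margin = 180.0 + np.interp(w_gc, freq, phase_deg)
    # gain margin: -|G| in dB at the phase crossover (phase = -180 deg)
    w_pc = np.interp(180.0, -phase_deg, freq)
    gain_margin = -np.interp(w_pc, freq, mag_db)
    return gain_margin, phase_margin
```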
Kayano, Mitsunori; Matsui, Hidetoshi; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru
2016-04-01
High-throughput time course expression profiles have become available in the last decade due to developments in measurement techniques and devices. Functional data analysis, which treats smoothed curves instead of the originally observed discrete data, is effective for time course expression profiles in terms of dimension reduction, robustness, and applicability to data measured at a small number of irregularly spaced time points. However, statistical methods of differential analysis for time course expression profiles have not been well established. We propose a functional logistic model based on elastic net regularization (F-Logistic) in order to identify genes with dynamic alterations in case/control studies. We employ a mixed model as a smoothing method to obtain functional data; F-Logistic is then applied to time course profiles measured at a small number of irregularly spaced time points. We evaluate the performance of F-Logistic in comparison with another functional data approach, the functional ANOVA test (F-ANOVA), by applying both methods to real and synthetic time course data sets. The real data sets consist of time course gene expression profiles for long-term effects of recombinant interferon β on disease progression in multiple sclerosis. F-Logistic distinguishes dynamic alterations in case/control studies based on time course expression profiles that cannot be found by competing approaches such as F-ANOVA. F-Logistic is effective for time-dependent biomarker detection, diagnosis, and therapy.
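A compressed sketch of the F-Logistic idea for one gene: smooth each subject's irregularly sampled profile onto a common basis, then fit an elastic-net-penalized logistic model on the basis coefficients. A polynomial basis stands in here for the paper's mixed-model smoother:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def functional_logistic(times, profiles, labels, degree=3):
    """times/profiles: per-subject sample times and expression values
    (irregular grids allowed); labels: case/control outcomes."""
    # smoothing step: project each profile onto a common basis
    coefs = np.array([np.polyfit(t, y, degree)
                      for t, y in zip(times, profiles)])
    # elastic net regularized logistic regression on basis coefficients
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.5, C=1.0, max_iter=5000)
    clf.fit(coefs, labels)
    return clf    # clf.coef_ weights the basis coefficients; screening
                  # genes repeats this fit per gene
```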
Stiletto, R; Röthke, M; Schäfer, E; Lefering, R; Waydhas, Ch
2006-10-01
Patient safety has become one of the major aspects of clinical management in recent years. Research has focused mainly on malpractice; in contrast to process analysis in non-medical fields, the analysis of errors during the in-patient treatment time has been neglected. Patient risk management can be defined as a structured procedure in a clinical unit with the aim of reducing harmful events. A risk point model was created based on a Delphi process and founded on the DIVI data register. The risk point model was evaluated in clinically working ICU departments participating in the register database. The results of the risk point evaluation will be integrated in the next database update. This might be a step towards improving the reliability of the register for quality assessment in the ICU.
Hernandez, Stephen C; Sibley, Haley; Fink, Daniel S; Kunduk, Melda; Schexnaildre, Mell; Kakade, Anagha; McWhorter, Andrew J
2016-05-01
Micronized acellular dermis has been used for nearly 15 years to correct glottic insufficiency. With previous demonstration of safety and efficacy, this study aims to evaluate intermediate and long-term voice outcomes in those who underwent injection laryngoplasty for unilateral vocal fold paralysis. Technique and timing of injection were also reviewed to assess their impact on outcomes. Case series with chart review. Tertiary care center. Patients undergoing injection laryngoplasty from May 2007 to September 2012 were reviewed for possible inclusion. Pre- and postoperative Voice Handicap Index (VHI) scores, as well as senior speech-language pathologists' blinded assessment of voice, were collected for analysis. The final sample included patients who underwent injection laryngoplasty for unilateral vocal fold paralysis, 33 of whom had VHI results and 37 of whom had voice recordings. Additional data were obtained, including technique and timing of injection. Analysis was performed on those patients above with VHI and perceptual voice grades before and at least 6 months following injection. Mean VHI improved by 28.7 points at 6 to 12 months and 22.8 points at >12 months (P = .001). Mean perceptual voice grades improved by 17.6 points at 6 to 12 months and 16.3 points at >12 months (P < .001). No statistically significant difference was found with technique or time to injection. Micronized acellular dermis is a safe injectable that improved both patient-completed voice ratings and blinded reviewer voice gradings at intermediate and long-term follow-up. Further investigation may be warranted regarding technique and timing of injection. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.
Signs and stability in higher-derivative gravity
NASA Astrophysics Data System (ADS)
Narain, Gaurav
2018-02-01
Perturbatively renormalizable higher-derivative gravity in four space-time dimensions with arbitrary signs of couplings is considered. A systematic analysis of the action in Lorentzian flat space-time, requiring the absence of tachyons, fixes the signs of the couplings. The Feynman +iε prescription for these signs further grants the necessary convergence of the path integral, suppressing field modes with large action; it also leads to a sensible Wick rotation under which quantum computations can be performed. Running couplings for these signs of the parameters render the massive tensor ghost innocuous, leading to a stable and ghost-free renormalizable theory in four space-time dimensions. The theory has a transition point arising from the renormalization group (RG) equations, where the coefficient of R2 diverges without affecting the perturbative quantum field theory (QFT). Redefining this coefficient gives a better handle on the theory around the transition point. The flow equations push the parameters across the transition point. The flow beyond the transition point, analyzed using the one-loop RG equations, shows that this regime has unphysical properties: there are tachyons, the path integral loses positive definiteness, Newton's constant G becomes negative and large, and perturbative parameters become large. These shortcomings indicate a lack of completeness beyond the transition point and the need for a nonperturbative treatment of the theory there.
USDA-ARS?s Scientific Manuscript database
Identification of genes with differential transcript abundance (GDTA) in seedless mutants may enhance understanding of seedless citrus development. Transcriptome analysis was conducted at three time points during early fruit development (Phase 1) of three seedy citrus genotypes: Fallglo [Bower citru...
NASA Technical Reports Server (NTRS)
Lim, Sang G.; Brewe, David E.; Prahl, Joseph M.
1990-01-01
The transient analysis of hydrodynamic lubrication of a point-contact is presented. A body-fitted coordinate system is introduced to transform the physical domain to a rectangular computational domain, enabling the use of the Newton-Raphson method for determining pressures and locating the cavitation boundary, where the Reynolds boundary condition is specified. In order to obtain the transient solution, an explicit Euler method is used to effect a time march. The transient dynamic load is a sinusoidal function of time with frequency, fractional loading, and mean load as parameters. Results include the variation of the minimum film thickness and phase-lag with time as functions of excitation frequency. The results are compared with the analytic solution to the transient step bearing problem with the same dynamic loading function. The similarities of the results suggest an approximate model of the point contact minimum film thickness solution.
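A minimal sketch of the explicit Euler time march used above, under a sinusoidal load. The film-thickness rate function f() is a schematic placeholder, not the Reynolds-equation solver of the paper.

```python
import numpy as np

def f(t, h, w_mean=1.0, w_frac=0.5, freq=2.0):
    """Schematic rate of change of minimum film thickness under
    W(t) = w_mean * (1 + w_frac * sin(2*pi*freq*t))."""
    load = w_mean * (1.0 + w_frac * np.sin(2 * np.pi * freq * t))
    return -0.5 * (h - 1.0 / load)   # relax toward a load-dependent thickness

dt, t_end = 1e-3, 5.0
t = np.arange(0.0, t_end, dt)
h = np.empty_like(t)
h[0] = 1.0
for n in range(t.size - 1):          # explicit Euler: h_{n+1} = h_n + dt * f
    h[n + 1] = h[n] + dt * f(t[n], h[n])

# The phase lag follows from comparing the film-thickness minimum with the
# load peak over one excitation cycle.
print("min film thickness:", h.min())
```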
Quantification of topological changes of vorticity contours in two-dimensional Navier-Stokes flow.
Ohkitani, Koji; Al Sulti, Fayeza
2010-06-01
A characterization of reconnection of vorticity contours is made by direct numerical simulations of two-dimensional Navier-Stokes flow at a relatively low Reynolds number. We identify all the critical points of the vorticity field and classify them by solving an eigenvalue problem of its Hessian matrix, on the basis of critical-point theory. The numbers of hyperbolic (saddle) and elliptic (minimum and maximum) points are confirmed numerically to satisfy Euler's index theorem. Time evolution of these indices is studied for a simple initial condition; generally, the indices decrease in number with time. This result is discussed in connection with related work on streamline topology, in particular the relationship between stagnation points and dissipation. The associated elementary processes in physical space, such as the merging of vortices, are studied in detail for a number of snapshots. A similar analysis is also carried out using the stream function.
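A sketch of the critical-point classification step on a gridded 2D field: near-zero-gradient points are classified by the determinant (equivalently, the eigenvalue signs) of the Hessian. The field and thresholds are illustrative, not the paper's simulation.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 256)
X, Y = np.meshgrid(x, x, indexing="ij")
w = np.sin(X) * np.sin(Y)                       # toy vorticity field

wx, wy = np.gradient(w, x, x)
wxx, wxy = np.gradient(wx, x, x)
_, wyy = np.gradient(wy, x, x)

grad_mag = np.hypot(wx, wy)
crit = grad_mag < 1e-2                          # crude critical-point mask
det_H = wxx * wyy - wxy ** 2                    # Hessian determinant

saddles = int(np.sum(crit & (det_H < 0)))       # hyperbolic points
extrema = int(np.sum(crit & (det_H > 0)))       # minima and maxima together
# Euler's index theorem: (#maxima + #minima) - #saddles should match the
# domain's Euler characteristic (0 for a doubly periodic box).
print("saddle cells:", saddles, "extremum cells:", extrema)
```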
Schulman, Gerald; Berl, Tomas; Beck, Gerald J; Remuzzi, Giuseppe; Ritz, Eberhard; Shimizu, Miho; Shobu, Yuko; Kikuchi, Mami
2016-09-30
The orally administered spherical carbon adsorbent AST-120 is used on-label in Asian countries to slow renal disease progression in patients with progressive chronic kidney disease (CKD). Recently, two multinational, randomized, double-blind, placebo-controlled, phase 3 trials (Evaluating Prevention of Progression in Chronic Kidney Disease [EPPIC] trials) examined AST-120's efficacy in slowing CKD progression. This study assessed the efficacy of AST-120 in the subgroup of patients from the United States of America (USA) in the EPPIC trials. In the EPPIC trials, 2035 patients with moderate to severe CKD were studied, of whom 583 were from the USA. The patients were randomly assigned to two groups of equal size that were treated with AST-120 or placebo (9 g/day). The primary end point was a composite of dialysis initiation, kidney transplantation, or serum creatinine doubling. The Kaplan-Meier curve for the time to the primary end point in the placebo-treated patients from the USA was similar to that projected before the study. The per-protocol subgroup analysis of the USA population, which included patients with compliance rates of ≥67%, revealed a significant difference between the treatment groups in the time to the primary end point (hazard ratio, 0.74; 95% confidence interval, 0.56-0.97). This post hoc subgroup analysis of EPPIC study data suggests that treatment with AST-120 might delay the time to the primary end point in CKD patients from the USA. A further randomized controlled trial in progressive CKD patients in the USA is necessary to confirm the beneficial effect of adding AST-120 to standard therapy regimens. ClinicalTrials.gov NCT00500682; NCT00501046.
Detection of longitudinal visual field progression in glaucoma using machine learning.
Yousefi, Siamak; Kiwaki, Taichi; Zheng, Yuhui; Suigara, Hiroki; Asaoka, Ryo; Murata, Hiroshi; Lemij, Hans; Yamanishi, Kenji
2018-06-16
Global indices of standard automated perimetry are insensitive to localized losses, while point-wise indices are sensitive but highly variable; region-wise indices sit in between. This study introduces a machine-learning-based index for glaucoma progression detection that outperforms global, region-wise, and point-wise indices. Development and comparison of a prognostic index. Visual fields from 2085 eyes of 1214 subjects were used to identify glaucoma progression patterns using machine learning. Visual fields from 133 eyes of 71 glaucoma patients were collected 10 times over 10 weeks to provide a no-change, test-retest dataset. The parameters of all methods were identified using visual field sequences in the test-retest dataset to meet a fixed 95% specificity. An independent dataset of 270 eyes of 136 glaucoma patients and survival analysis were utilized to compare methods. The time to detect progression in 25% of the eyes in the longitudinal dataset was 5.2 years (95% confidence interval, 4.1-6.5 years) using global mean deviation (MD); 4.5 years (4.0-5.5) using region-wise, 3.9 years (3.5-4.6) using point-wise, and 3.5 years (3.1-4.0) using machine learning analysis. The times until 25% of eyes showed subsequently confirmed progression, once two additional visits were included, were 6.6 years (5.6-7.4), 5.7 years (4.8-6.7), 5.6 years (4.7-6.5), and 5.1 years (4.5-6.0) for global, region-wise, point-wise, and machine learning analyses, respectively. Machine learning analysis detects progressing eyes consistently earlier than the other methods, with or without confirmation visits; in particular, it detects more slowly progressing eyes. Copyright © 2018 Elsevier Inc. All rights reserved.
Real time analysis of voiced sounds
NASA Technical Reports Server (NTRS)
Hong, J. P. (Inventor)
1976-01-01
A power spectrum analysis of the harmonic content of a voiced sound signal is conducted in real time by phase-lock-loop tracking of the fundamental frequency (f0) of the signal and successive harmonics (h1 through hn) of the fundamental frequency. The analysis also includes measuring the quadrature power and phase of each frequency tracked, differencing the power measurements of adjacent pairs of harmonics, and analyzing successive differentials to determine peak power points in the power spectrum for display or for use in analysis of voiced sound, such as voice recognition.
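A sketch of the quadrature power measurement at a tracked fundamental and its harmonics, in the spirit of the abstract above; the phase-lock-loop tracking itself is replaced here by an assumed known f0, and the signal is synthetic.

```python
import numpy as np

fs, f0, dur = 8000.0, 120.0, 0.5          # sample rate, fundamental, seconds
t = np.arange(0, dur, 1 / fs)
x = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.4 * np.sin(2 * np.pi * 2 * f0 * t + 0.7)

def quadrature_power(x, t, f):
    """Mix with cos/sin at frequency f and average: returns (power, phase)."""
    i = np.mean(x * np.cos(2 * np.pi * f * t)) * 2
    q = np.mean(x * np.sin(2 * np.pi * f * t)) * 2
    return i ** 2 + q ** 2, np.arctan2(i, q)

for n in range(1, 5):                      # fundamental and harmonics 2*f0..4*f0
    p, ph = quadrature_power(x, t, n * f0)
    print(f"{n}*f0: power={p:.3f}, phase={ph:+.2f} rad")

# Differencing the powers of adjacent harmonics, as in the abstract, then
# locates peak power points in the spectrum envelope.
```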
Park, Chihyun; Yun, So Jeong; Ryu, Sung Jin; Lee, Soyoung; Lee, Young-Sam; Yoon, Youngmi; Park, Sang Chul
2017-03-15
Cellular senescence irreversibly arrests the growth of human diploid cells. In addition, recent studies have indicated that senescence is a multi-step, evolving process related to important complex biological processes. Most studies have analyzed only the genes, and their functions, representing each senescence phase, without considering gene-level interactions and continuously perturbed genes. It is necessary to reveal the genotypic mechanism, inferred from the affected genes and their interactions, underlying the senescence process. We suggest a novel computational approach to identify an integrative network which profiles an underlying genotypic signature from time-series gene expression data. Relatively perturbed genes were selected for each time point based on a proposed scoring measure, termed the perturbation score. The selected genes were then integrated with protein-protein interactions to construct a time point-specific network. From these networks, the edges conserved across time points were extracted to form the common network, and a statistical test was performed to demonstrate that this network could explain the phenotypic alteration. As a result, it was confirmed that the difference in average perturbation scores of the common network between the two time points could explain the phenotypic alteration. We also performed functional enrichment on the common network and identified a high association with the phenotypic alteration. Remarkably, we observed that the identified cell cycle-specific common network plays an important role in replicative senescence as a key regulator. Heretofore, network analysis of time-series gene expression data has focused on how topological structure changes over time. Conversely, we focused on structure that is conserved while its context changes over time, and showed that it can explain the phenotypic changes. We expect that the proposed method will help to elucidate biological mechanisms unrevealed by existing approaches.
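A sketch of the described pipeline under stated assumptions: a z-score-like perturbation score per gene and time point, top-scoring genes mapped onto a PPI network, and the edges conserved across time points retained as the "common network". The PPI edges and expression values are made up.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
genes = [f"g{i}" for i in range(50)]
ppi = nx.gnm_random_graph(50, 150, seed=1)
ppi = nx.relabel_nodes(ppi, {i: g for i, g in enumerate(genes)})

baseline = rng.normal(0, 1, (50, 10))            # genes x control replicates
time_points = [rng.normal(0, 1, 50) for _ in range(3)]

def perturbed_subnetwork(expr, k=15):
    """Score = |expr - baseline mean| / baseline sd; keep top-k genes."""
    score = np.abs(expr - baseline.mean(1)) / baseline.std(1)
    top = [genes[i] for i in np.argsort(score)[-k:]]
    return ppi.subgraph(top).copy()

nets = [perturbed_subnetwork(e) for e in time_points]
common = {frozenset(e) for e in nets[0].edges()}
for g in nets[1:]:
    common &= {frozenset(e) for e in g.edges()}   # conserved edges only
print("conserved edges across all time points:", len(common))
```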
Moving Average Models with Bivariate Exponential and Geometric Distributions.
1985-03-01
Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.
2006-01-01
We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.
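A schematic grid-search inversion in the same spirit, with everything synthetic: for each trial source, precomputed Green's functions are fit to the data frequency-by-frequency via least squares, and the smallest residual picks the best source. The array sizes and values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rec, n_cmp, n_freq, n_src = 12, 6, 32, 100

# Green's functions per trial source: (source, frequency, receiver, component).
G = rng.normal(size=(n_src, n_freq, n_rec, n_cmp)) \
    + 1j * rng.normal(size=(n_src, n_freq, n_rec, n_cmp))
true_m = rng.normal(size=n_cmp)
d = G[37] @ true_m + 0.01 * rng.normal(size=(n_freq, n_rec))  # data from source 37

best = (np.inf, None)
for s in range(n_src):
    resid = 0.0
    for f in range(n_freq):
        m, *_ = np.linalg.lstsq(G[s, f], d[f], rcond=None)   # per-frequency fit
        resid += np.linalg.norm(G[s, f] @ m - d[f]) ** 2
    if resid < best[0]:
        best = (resid, s)
print("best-fitting source index:", best[1])
```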
Stochastic modeling of hourly rainfall times series in Campania (Italy)
NASA Astrophysics Data System (ADS)
Giorgio, M.; Greco, R.
2009-04-01
Occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlation between recorded rainfall data and observed landslides and/or river discharges. The effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which allows larger lead times to be gained. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models such as AR, ARMA, ARX, and ARMAX (e.g. Salas [1992]). Such models give their best results when applied to autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, as point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this purpose are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool for implementing an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of the Campania Region civil protection agency meteorological warning network. ACKNOWLEDGEMENTS The research was co-financed by the Italian Ministry of University, by means of the PRIN 2006 program, within the research project entitled 'Definition of critical rainfall thresholds for destructive landslides for civil protection purposes'. REFERENCES Cowpertwait, P.S.P., Kilsby, C.G. and O'Connell, P.E., 2002. A space-time Neyman-Scott model of rainfall: Empirical analysis of extremes, Water Resources Research, 38(8):1-14. Salas, J.D., 1992. Analysis and modeling of hydrological time series, in D.R. Maidment, ed., Handbook of Hydrology, McGraw-Hill, New York. Heneker, T.M., Lambert, M.F. and Kuczera, G., 2001. A point rainfall model for risk-based design, Journal of Hydrology, 247(1-2):54-71.
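A minimal generative sketch of an alternating renewal rainfall model in the DRIP spirit described above: wet and dry spells alternate with random durations, and each wet spell is a rectangular intensity pulse. All distributions and parameter values are illustrative, not the calibrated Campania values.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_hourly_rain(hours, mean_dry=30.0, mean_wet=6.0, mean_int=2.0):
    rain = np.zeros(hours)
    t = rng.exponential(mean_dry)              # start inside a dry spell
    while t < hours:
        wet = rng.exponential(mean_wet)        # storm duration (h)
        inten = rng.exponential(mean_int)      # storm intensity (mm/h)
        a, b = int(t), min(int(t + wet) + 1, hours)
        rain[a:b] += inten                     # rectangular pulse
        t += wet + rng.exponential(mean_dry)   # next storm after a dry spell
    return rain

series = simulate_hourly_rain(24 * 365)
print(f"wet fraction: {np.mean(series > 0):.2f}, total: {series.sum():.0f} mm")
```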
Computer Analysis of 400 HZ Aircraft Electrical Generator Test Data.
1980-06-01
The generator frequency is derived from the time difference between successive positive-sloped zero crossovers of the voltage waveform. However, the exact time of a zero crossover is not known, because the data sampling and the generator output are not synchronized; as a result, sampled data points rarely coincide with an exact zero crossover, and the crossover time must be estimated by interpolation between adjacent points.
Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials
2017-04-06
The programs are designed for experimental datasets with multiple drive and response points and have proven effective even for systems with numerous closely-spaced modes.
Zamunér, Antonio R; Catai, Aparecida M; Martins, Luiz E B; Sakabe, Daniel I; Da Silva, Ester
2013-01-01
The second heart rate (HR) turn point has been extensively studied, however there are few studies determining the first HR turn point. Also, the use of mathematical and statistical models for determining changes in dynamic characteristics of physiological variables during an incremental cardiopulmonary test has been suggested. To determine the first turn point by analysis of HR, surface electromyography (sEMG), and carbon dioxide output (VCO2) using two mathematical models and to compare the results to those of the visual method. Ten sedentary middle-aged men (53.9 ± 3.2 years old) were submitted to cardiopulmonary exercise testing on an electromagnetic cycle ergometer until exhaustion. Ventilatory variables, HR, and sEMG of the vastus lateralis were obtained in real time. Three methods were used to determine the first turn point: 1) visual analysis based on loss of parallelism between VCO2 and oxygen uptake (VO2); 2) the linear-linear model, based on fitting the curves to the set of VCO2 data (Lin-LinVCO2); 3) a bi-segmental linear regression of Hinkley's algorithm applied to HR (HMM-HR), VCO2 (HMM-VCO2), and sEMG data (HMM-RMS). There were no differences between workload, HR, and ventilatory variable values at the first ventilatory turn point as determined by the five studied parameters (p>0.05). The Bland-Altman plot showed an even distribution of the visual analysis method with Lin-LinVCO2, HMM-HR, HMM-VCO2, and HMM-RMS. The proposed mathematical models were effective in determining the first turn point since they detected the linear pattern change and the deflection point of VCO2, HR responses, and sEMG.
Meng, Yu; Li, Gang; Gao, Yaozong; Lin, Weili; Shen, Dinggang
2016-11-01
Longitudinal neuroimaging analysis of dynamic brain development in infants has received increasing attention recently. Many studies expect a complete longitudinal dataset in order to accurately chart brain developmental trajectories. However, in practice, a large portion of subjects in longitudinal studies often have missing data at certain time points, due to various reasons such as the absence of a scan or poor image quality. To make better use of these incomplete longitudinal data, in this paper we propose a novel machine learning-based method to estimate the subject-specific, vertex-wise cortical morphological attributes at the missing time points in longitudinal infant studies. Specifically, we develop a customized regression forest, named dynamically assembled regression forest (DARF), as the core regression tool. DARF ensures the spatial smoothness of the estimated maps for vertex-wise cortical morphological attributes and also greatly reduces the computational cost. By employing a pairwise estimation followed by a joint refinement, our method is able to fully exploit the available information from both subjects with complete scans and subjects with missing scans for estimation of the missing cortical attribute maps. The proposed method has been applied to estimating the dynamic cortical thickness maps at missing time points in an incomplete longitudinal infant dataset, which includes 31 healthy infant subjects, each having up to five time points in the first postnatal year. The experimental results indicate that our proposed framework can accurately estimate the subject-specific vertex-wise cortical thickness maps at missing time points, with an average error of less than 0.23 mm. Hum Brain Mapp 37:4129-4147, 2016. © 2016 Wiley Periodicals, Inc.
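For flavor only, a plain random-forest regression, standing in for (and much simpler than) the paper's customized DARF: a vertex attribute at a missing time point is predicted from a small patch of features at an observed time point. All data and sizes are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n_vertices, patch = 5000, 9
X_obs = rng.normal(2.5, 0.4, (n_vertices, patch))    # patch features at time t1
y_missing = X_obs.mean(axis=1) * 1.1 + rng.normal(0, 0.05, n_vertices)  # t2

train = rng.random(n_vertices) < 0.7                 # "subjects with both scans"
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_obs[train], y_missing[train])

pred = rf.predict(X_obs[~train])
print(f"mean abs error: {np.mean(np.abs(pred - y_missing[~train])):.3f} mm")
```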
Time course of the acute effects of core stabilisation exercise on seated postural control.
Lee, Jordan B; Brown, Stephen H M
2017-09-20
Core stabilisation exercises are often promoted for purposes ranging from general fitness to high-performance athletics, and the prevention and rehabilitation of back troubles. These exercises, when performed properly, may have the potential to enhance torso postural awareness and control, yet the potential for achieving immediate gains has not been completely studied. Fourteen healthy young participants performed a single bout of non-fatiguing core stabilisation exercise that consisted of repeated sets of 2 isometric exercises, the side bridge and the four-point contralateral arm-and-leg extension. Seated postural control, using an unstable balance platform on top of a force plate, was assessed before and after exercise, including multiple time points within a 20-minute follow-up period. Nine standard postural control variables were calculated at each time point, including sway displacement and velocity ranges, root mean squares and cumulative path length. Statistical analysis showed that none of the postural control variables were significantly different at any time point following completion of core stabilisation exercise. Thus, we conclude that a single bout of acute core stabilisation exercise is insufficient to immediately improve seated trunk postural control in young healthy individuals.
2012-01-01
A lumped model of neural activity in neocortex is studied to identify regions of multi-stability of both steady states and periodic solutions. The presence of both steady states and periodic solutions is considered to correspond with epileptogenesis. The model, which consists of two delay differential equations with two fixed time lags, is mainly studied for its dependency on varying connection strength between populations. Equilibria are identified, and using linear stability analysis, all transitions are determined under which both trivial and non-trivial fixed points lose stability. Periodic solutions arising at some of these bifurcations are numerically studied with a two-parameter bifurcation analysis. PMID:22655859
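A small numerical aside on the linear stability analysis mentioned above, for a single delay equation rather than the paper's two-equation model: fixed-point stability is governed by the roots of a transcendental characteristic equation, polished here with fsolve from a grid of starting guesses. Parameters are illustrative stand-ins for connection strengths.

```python
# For x'(t) = a x(t) + b x(t - tau), the fixed point is stable when all
# roots of lambda = a + b exp(-lambda tau) lie in the left half-plane.
import numpy as np
from scipy.optimize import fsolve

a, b, tau = -1.0, -2.0, 1.5

def char_eq(v):
    lam = v[0] + 1j * v[1]
    r = lam - a - b * np.exp(-lam * tau)
    return [r.real, r.imag]

roots = set()
for re in np.linspace(-2, 2, 5):
    for im in np.linspace(0, 20, 21):
        sol, info, ok, _ = fsolve(char_eq, [re, im], full_output=True)
        if ok == 1:
            roots.add((round(sol[0], 4), round(abs(sol[1]), 4)))

rightmost = max(r[0] for r in roots)
print("rightmost root real part:", rightmost,
      "-> stable" if rightmost < 0 else "-> unstable")
```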
Dal-Ré, Rafael; Ross, Joseph S; Marušić, Ana
2016-07-01
To examine compliance with the International Committee of Medical Journal Editors' (ICMJE) policy on prospective trial registration, along with predictors of compliance. Cross-sectional analysis of all articles reporting trial results published in the six highest-impact general medicine journals in January-June 2014 that were registered in a public trial registry. The main outcome measure was compliance with ICMJE policy. The time frame for trial primary end point ascertainment was used to assess whether retrospective registration could have allowed changing of primary end points following an interim analysis. Forty of 144 (28%) articles did not comply with the ICMJE policy. Trials of non-FDA-regulated interventions were less compliant than trials of FDA-regulated interventions (i.e., medicines, medical devices) (42% vs. 21%; P = 0.016). Twenty-nine of these 40 (72%; 20% overall) were registered before any interim analysis of primary end points could have been conducted; 11 (28%; 8% overall) were registered after primary end point ascertainment, such that investigators could have had the opportunity to conduct an interim analysis before trial registration. Twenty-eight percent of trials published in high-impact journals were retrospectively registered, including nearly 10% that were registered after primary end point ascertainment could have taken place. Prospective registration should be promoted and enforced to ensure transparency and accountability in clinical research. Copyright © 2016 Elsevier Inc. All rights reserved.
Daniel, Floréal; Mounier, Aurélie; Pérez-Arantegui, Josefina; Pardos, Carlos; Prieto-Taboada, Nagore; Fdez-Ortiz de Vallejuelo, Silvia; Castro, Kepa
2017-06-01
The development of non-invasive techniques for the characterization of pigments is crucial in order to preserve the integrity of artworks. In this context, the usefulness of hyperspectral imaging has been demonstrated: it allows pigment characterization across a whole painting, although it sometimes needs to be complemented by point-by-point techniques. In the present article, the advantages of hyperspectral imaging over point-by-point spectroscopic analysis were evaluated. For that purpose, three paintings were analysed by hyperspectral imaging, handheld X-ray fluorescence, and handheld Raman spectroscopy in order to determine the best non-invasive technique for pigment identification. Thanks to this work, the main pigments used in Aragonese artworks, and especially in Goya's paintings, were identified and mapped by imaging reflection spectroscopy. All the analysed pigments corresponded to those used at the time of Goya. Regarding the techniques used, the information obtained by hyperspectral imaging and by point-by-point analysis was, in general, different and complementary. Given this fact, selecting only one technique is not recommended, and the present work demonstrates the usefulness of combining all the techniques used as the best non-invasive methodology for pigment characterization. Moreover, the proposed methodology is a relatively quick procedure that allows a larger number of Goya's paintings in the museum to be surveyed, increasing the possibility of obtaining significant results and providing a chance for extensive comparisons, which are relevant from the point of view of art history.
Experimental results for the rapid determination of the freezing point of fuels
NASA Technical Reports Server (NTRS)
Mathiprakasam, B.
1984-01-01
Two methods for the rapid determination of the freezing point of fuels were investigated: an optical method, which detected the change in light transmission upon the disappearance of solid particles in the melted fuel; and a differential thermal analysis (DTA) method, which sensed the latent heat of fusion. A laboratory apparatus was fabricated to test the two methods. Cooling was done by thermoelectric modules using an ice-water bath as a heat sink. The DTA method was later modified to eliminate the reference fuel. The data from the sample were digitized, and a point of inflection, which corresponds to the ASTM D-2386 freezing point (final melting point), was identified from the derivative. The apparatus was modified to cool the fuel to -60 C, and controls were added for maintaining constant cooling rate, rewarming rate, and hold time at minimum temperature. A parametric series of tests was run for twelve fuels with freezing points from -10 C to -50 C, varying cooling rate, rewarming rate, and hold time. Based on the results, an optimum test procedure was established. The results showed good agreement with ASTM D-2386 freezing point and differential scanning calorimetry results.
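A sketch of the derivative-based pickoff described above: the digitized rewarming curve is lightly smoothed, differentiated, and the inflection point (peak of the first derivative) is reported. The curve and noise level are synthetic.

```python
import numpy as np

t = np.linspace(0, 300, 1500)                                # seconds
temp = -50 + 0.12 * t + 6.0 / (1 + np.exp(-(t - 150) / 6))   # latent-heat bump
temp += np.random.default_rng(3).normal(0, 0.02, t.size)

# Light smoothing before differentiating keeps the derivative usable.
kernel = np.ones(25) / 25
smooth = np.convolve(temp, kernel, mode="same")
dT = np.gradient(smooth, t)

i_inflect = np.argmax(dT[50:-50]) + 50                       # skip edge artifacts
print(f"inflection (freezing point) at T = {smooth[i_inflect]:.1f} C")
```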
Emotion Regulation Profiles, Temperament, and Adjustment Problems in Preadolescents
Zalewski, Maureen; Lengua, Liliana J.; Trancik, Anika; Wilson, Anna C.; Bazinet, Alissa
2014-01-01
The longitudinal relations of emotion regulation profiles to temperament and adjustment in a community sample of preadolescents (N = 196, 8–11 years at Time 1) were investigated using person-oriented latent profile analysis (LPA). Temperament, emotion regulation, and adjustment were measured at 3 different time points, with each time point occurring 1 year apart. LPA identified 5 frustration and 4 anxiety regulation profiles based on children’s physiological, behavioral, and self-reported reactions to emotion-eliciting tasks. The relation of effortful control to conduct problems was mediated by frustration regulation profiles, as was the relation of effortful control to depression. Anxiety regulation profiles did not mediate relations between temperament and adjustment. PMID:21413935
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
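For reference, a hedged sketch of the standard steady-state Gaussian plume for an elevated point source (strength Q, effective height H, wind speed u, dispersion widths σy, σz), the mean field about which a fluctuating-plume model of this kind lets the instantaneous centerline meander; this is the textbook form, not the paper's exact expression.

```latex
\[
  C(x,y,z) \;=\; \frac{Q}{2\pi\, u\, \sigma_y \sigma_z}
  \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
  \left[
    \exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right)
    + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)
  \right]
\]
```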
Omori, Yoshinori; Honmou, Osamu; Harada, Kuniaki; Suzuki, Junpei; Houkin, Kiyohiro; Kocsis, Jeffery D
2008-10-21
The systemic injection of human mesenchymal stem cells (hMSCs) prepared from adult bone marrow has therapeutic benefits after cerebral artery occlusion in rats, and may have multiple therapeutic effects at various sites and times within the lesion as the cells respond to a particular pathological microenvironment. However, the comparative therapeutic benefits of multiple injections of hMSCs at different time points after cerebral artery occlusion in rats remain unclear. In this study, we induced middle cerebral artery occlusion (MCAO) in rats using intra-luminal vascular occlusion, and infused hMSCs intravenously at a single 6 h time point (low and high cell doses) and various multiple time points after MCAO. From MRI analyses lesion volume was reduced in all hMSC cell injection groups as compared to serum alone injections. However, the greatest therapeutic benefit was achieved following a single high cell dose injection at 6 h post-MCAO, rather than multiple lower cell infusions over multiple time points. Three-dimensional analysis of capillary vessels in the lesion indicated that the capillary volume was equally increased in all of the cell-injected groups. Thus, differences in functional outcome in the hMSC transplantation subgroups are not likely the result of differences in angiogenesis, but rather from differences in neuroprotective effects.
Enhanced t^-3/2 long-time tail for the stress-stress time correlation function
NASA Astrophysics Data System (ADS)
Evans, Denis J.
1980-01-01
Nonequilibrium molecular dynamics is used to calculate the spectrum of shear viscosity for a Lennard-Jones fluid. The calculated zero-frequency shear viscosity agrees well with experimental argon results for the two state points considered. The low-frequency behavior of shear viscosity is dominated by an ω^1/2 cusp. Analysis of the form of this cusp reveals that the stress-stress time correlation function exhibits a t^-3/2 "long-time tail." It is shown that for the state points studied, the amplitude of this long-time tail is between 12 and 150 times larger than what has been predicted theoretically. If the low-frequency results are truly asymptotic, they imply that the cross and potential contributions to the Kubo-Green integrand for shear viscosity exhibit a t^-3/2 long-time tail. This result contradicts the established theory of such processes.
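For orientation, the Kubo-Green (Green-Kubo) relation referred to above, together with the link between a t^-3/2 tail and the ω^1/2 cusp; a and c are positive constants, and this is a schematic statement rather than the paper's derivation.

```latex
\[
  \eta \;=\; \frac{V}{k_B T}\int_0^\infty \langle \sigma_{xy}(0)\,\sigma_{xy}(t)\rangle\, dt,
  \qquad
  \langle \sigma_{xy}(0)\,\sigma_{xy}(t)\rangle \;\sim\; a\, t^{-3/2}
  \;\Longrightarrow\;
  \eta(\omega) \;\approx\; \eta(0) - c\,\omega^{1/2},
\]
% the one-sided Fourier transform of a t^{-3/2} tail contributes the
% sqrt(omega) nonanalyticity at zero frequency.
```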
Time-dependent friction and the mechanics of stick-slip
Dieterich, J.H.
1978-01-01
Time-dependent increase of static friction is characteristic of rock friction under a variety of experimental circumstances. Data presented here show an analogous velocity-dependent effect. A theory of friction is proposed that establishes a common basis for static and sliding friction. Creep at points of contact causes increases in friction that are proportional to the logarithm of the time that the population of points of contact exists. For static friction, that time is the time of stationary contact. For sliding friction, the time of contact is determined by the critical displacement required to change the population of contacts and by the slip velocity. An analysis of a one-dimensional spring and slider system shows that experimental observations establishing the transition from stable sliding to stick-slip as a function of normal stress, stiffness, and surface finish are a consequence of time-dependent friction. © 1978 Birkhäuser Verlag.
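A minimal statement of the logarithmic law described above, with A and t_c empirical constants and d_c the critical slip displacement needed to renew the contact population; the exact parameterization used in the paper may differ.

```latex
\[
  \mu_s(t) \;=\; \mu_0 + A\,\ln\!\left(1 + \frac{t}{t_c}\right),
  \qquad
  \mu(V) \;=\; \mu_0 + A\,\ln\!\left(1 + \frac{d_c}{V\,t_c}\right),
\]
% static friction grows with the age t of the contact population; under
% steady sliding at velocity V the effective contact lifetime is d_c / V.
```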
Some analysis on the diurnal variation of rainfall over the Atlantic Ocean
NASA Technical Reports Server (NTRS)
Gill, T.; Perng, S.; Hughes, A.
1981-01-01
Data collected from the GARP Atlantic Tropical Experiment (GATE) was examined. The data were collected from 10,000 grid points arranged as a 100 x 100 array; each grid covered a 4 square km area. The amount of rainfall was measured every 15 minutes during the experiment periods using c-band radars. Two types of analyses were performed on the data: analysis of diurnal variation was done on each of grid points based on the rainfall averages at noon and at midnight, and time series analysis on selected grid points based on the hourly averages of rainfall. Since there are no known distribution model which best describes the rainfall amount, nonparametric methods were used to examine the diurnal variation. Kolmogorov-Smirnov test was used to test if the rainfalls at noon and at midnight have the same statistical distribution. Wilcoxon signed-rank test was used to test if the noon rainfall is heavier than, equal to, or lighter than the midnight rainfall. These tests were done on each of the 10,000 grid points at which the data are available.
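A sketch of the two nonparametric tests named above, applied to noon vs. midnight rainfall averages at one grid point; the data are a synthetic stand-in, not GATE measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
noon = rng.gamma(shape=0.4, scale=3.0, size=90)       # daily noon averages
midnight = rng.gamma(shape=0.5, scale=3.0, size=90)   # daily midnight averages

# Kolmogorov-Smirnov: do noon and midnight share the same distribution?
ks_stat, ks_p = stats.ks_2samp(noon, midnight)

# Wilcoxon signed-rank on paired differences: is noon rainfall heavier?
w_stat, w_p = stats.wilcoxon(noon, midnight, alternative="greater")

print(f"KS p={ks_p:.3f}, Wilcoxon (noon > midnight) p={w_p:.3f}")
```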
Reischauer, Carolin; Patzwahl, René; Koh, Dow-Mu; Froehlich, Johannes M; Gutzeit, Andreas
2018-04-01
To evaluate whole-lesion volumetric texture analysis of apparent diffusion coefficient (ADC) maps for assessing treatment response in prostate cancer bone metastases. Texture analysis was performed in 12 treatment-naïve patients with 34 metastases before treatment and at one, two, and three months after the initiation of androgen deprivation therapy. Four first-order and 19 second-order statistical texture features were computed on the ADC maps in each lesion at every time point. Repeatability, inter-patient variability, and changes in the feature values under therapy were investigated. Spearman's rank correlation coefficients were calculated across time to examine the relationship between the texture features and the serum prostate-specific antigen (PSA) levels. With few exceptions, the texture features exhibited moderate to high precision. At the same time, Friedman's tests revealed that all first-order and second-order statistical texture features changed significantly in response to therapy, with the majority of features showing significant changes at all post-treatment time points relative to baseline. Bivariate analysis detected significant correlations between the great majority of texture features and the serum PSA levels; three first-order and six second-order statistical features showed strong correlations with the serum PSA levels across time. The findings in the present work indicate that whole-tumor volumetric texture analysis may be utilized for response assessment in prostate cancer bone metastases. The approach may be used as a complementary measure for treatment monitoring in conjunction with averaged ADC values. Copyright © 2018 Elsevier B.V. All rights reserved.
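A sketch of first- and second-order (GLCM) texture features of the kind analysed above, computed on a synthetic stand-in for an ADC map with scikit-image; recent scikit-image versions name the functions graycomatrix/graycoprops, which is assumed here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(8)
adc = rng.normal(1.0, 0.2, (64, 64))            # fake ADC slice

# First-order features: plain statistics of the voxel values.
print("mean:", adc.mean(), "std:", adc.std())

# Second-order features: quantize, build a gray-level co-occurrence matrix,
# then read off standard GLCM properties.
levels = 32
img = np.clip((adc - adc.min()) / np.ptp(adc) * (levels - 1),
              0, levels - 1).astype(np.uint8)
glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=levels, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, float(graycoprops(glcm, prop).mean()))
```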
Testing deformation hypotheses by constraints on a time series of geodetic observations
NASA Astrophysics Data System (ADS)
Velsink, Hiddo
2018-01-01
In geodetic deformation analysis observations are used to identify form and size changes of a geodetic network, representing objects on the earth's surface. The network points are monitored, often continuously, because of suspected deformations. A deformation may affect many points during many epochs. The problem is that the best description of the deformation is, in general, unknown. To find it, different hypothesised deformation models have to be tested systematically for agreement with the observations. The tests have to be capable of stating with a certain probability the size of detectable deformations, and to be datum invariant. A statistical criterion is needed to find the best deformation model. Existing methods do not fulfil these requirements. Here we propose a method that formulates the different hypotheses as sets of constraints on the parameters of a least-squares adjustment model. The constraints can relate to subsets of epochs and to subsets of points, thus combining time series analysis and congruence model analysis. The constraints are formulated as nonstochastic observations in an adjustment model of observation equations. This gives an easy way to test the constraints and to get a quality description. The proposed method aims at providing a good discriminating method to find the best description of a deformation. The method is expected to improve the quality of geodetic deformation analysis. We demonstrate the method with an elaborate example.
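A schematic numeric illustration of the constraint idea above: the "no deformation" hypothesis is written as nonstochastic pseudo-observations (constraint rows with a very small assumed sigma) appended to a toy 1-D adjustment. The geometry is invented and far simpler than a real geodetic network.

```python
import numpy as np

# Observations: two epochs, each directly observing 3 point coordinates.
A = np.vstack([np.eye(6), np.eye(6)])                  # design matrix, 12 obs
rng = np.random.default_rng(5)
x_true = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.005])    # point 3 moved 5 mm
l = A @ x_true + rng.normal(0, 0.001, 12)

# Hypothesis H0 "no deformation": x_epoch1 == x_epoch2, as constraint rows
# C x = 0 with a tiny assumed sigma (i.e., a very large weight).
C = np.hstack([np.eye(3), -np.eye(3)])
sigma_obs, sigma_con = 1e-3, 1e-8
A_aug = np.vstack([A / sigma_obs, C / sigma_con])
l_aug = np.concatenate([l / sigma_obs, np.zeros(3)])

x_hat, *_ = np.linalg.lstsq(A_aug, l_aug, rcond=None)
rss = np.sum((A_aug @ x_hat - l_aug) ** 2)
# A weighted RSS far above its degrees of freedom flags H0 as inconsistent
# with the observations (here, point 3's 5 mm shift).
print(f"weighted RSS under H0: {rss:.1f}")
```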
Lilja, Heidi E; Morrison, Wayne A; Han, Xiao-Lian; Palmer, Jason; Taylor, Caroline; Tee, Richard; Möller, Andreas; Thompson, Erik W; Abberton, Keren M
2013-05-15
Tissue engineering and cell implantation therapies are gaining popularity because of their potential to repair and regenerate tissues and organs. To investigate the role of inflammatory cytokines in new tissue development in engineered tissues, we have characterized the nature and timing of cell populations forming new adipose tissue in a mouse tissue engineering chamber (TEC) and characterized the gene and protein expression of cytokines in the newly developing tissues. EGFP-labeled bone marrow transplant mice and MacGreen mice were implanted with TEC for periods ranging from 0.5 days to 6 weeks. Tissues were collected at various time points and assessed for cytokine expression through ELISA and mRNA analysis or labeled for specific cell populations in the TEC. Macrophage-derived factors, such as monocyte chemotactic protein-1 (MCP-1), appear to induce adipogenesis by recruiting macrophages and bone marrow-derived precursor cells to the TEC at early time points, with a second wave of nonbone marrow-derived progenitors. Gene expression analysis suggests that TNFα, LCN-2, and Interleukin 1β are important in early stages of neo-adipogenesis. Increasing platelet-derived growth factor and vascular endothelial cell growth factor expression at early time points correlates with preadipocyte proliferation and induction of angiogenesis. This study provides new information about key elements that are involved in early development of new adipose tissue.
Rauch, Geraldine; Kieser, Meinhard; Binder, Harald; Bayes-Genis, Antoni; Jahn-Eimermacher, Antje
2018-05-01
Composite endpoints combining several event types of clinical interest often define the primary efficacy outcome in cardiologic trials. They are commonly evaluated as time-to-first-event, thereby following the recommendations of regulatory agencies. However, to assess the patient's full disease burden and to identify preventive factors or interventions, subsequent events following the first one should be considered as well. This is especially important in cohort studies and RCTs with a long follow-up, which leads to a higher number of observed events per patient. So far, there exist no recommendations as to which approach should be preferred. Recently, the Cardiovascular Round Table of the European Society of Cardiology indicated the need to investigate "how to interpret results if recurrent-event analysis results differ […] from time-to-first-event analysis" (Anker et al., Eur J Heart Fail 18:482-489, 2016). This work addresses this topic by means of a systematic simulation study. This paper compares two common analysis strategies for composite endpoints, differing with respect to the incorporation of recurrent events, for typical data scenarios motivated by a clinical trial. We show that the treatment effects estimated from a time-to-first-event analysis (Cox model) and a recurrent-event analysis (Andersen-Gill model) can systematically differ, particularly in cardiovascular trials. Moreover, we provide guidance on how to interpret these results and recommend points to consider when choosing a meaningful analysis strategy. When planning trials with a composite endpoint, researchers and regulatory agencies should be aware that the model choice affects the estimated treatment effect and its interpretation.
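A toy comparison of the two strategies discussed above, assuming the lifelines package; recurrent events are simulated in counting-process (start, stop] format, and the rates and effect sizes are invented for illustration only.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, CoxTimeVaryingFitter

rng = np.random.default_rng(0)
rows = []
for pid in range(120):
    treat = pid % 2
    rate = 0.10 if treat else 0.16        # treated subjects accrue fewer events
    t = 0.0
    while t < 24.0:                        # 24 months of follow-up
        stop = min(t + rng.exponential(1.0 / rate), 24.0)
        rows.append((pid, t, stop, int(stop < 24.0), treat))
        t = stop
df = pd.DataFrame(rows, columns=["id", "start", "stop", "event", "treat"])

# Time-to-first-event: each subject's first interval only (standard Cox).
first = df.groupby("id", as_index=False).first()
cph = CoxPHFitter().fit(first[["stop", "event", "treat"]],
                        duration_col="stop", event_col="event")

# All events: Andersen-Gill-type fit on the full counting-process data.
# In practice such fits use robust (sandwich) errors clustered by subject.
ctv = CoxTimeVaryingFitter().fit(df, id_col="id", start_col="start",
                                 stop_col="stop", event_col="event")
print(cph.summary.loc["treat", "exp(coef)"],
      ctv.summary.loc["treat", "exp(coef)"])
```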
Triatomine Infestation in Guatemala: Spatial Assessment after Two Rounds of Vector Control
Manne, Jennifer; Nakagawa, Jun; Yamagata, Yoichi; Goehler, Alexander; Brownstein, John S.; Castro, Marcia C.
2012-01-01
In 2000, the Guatemalan Ministry of Health initiated a Chagas disease program to control Rhodnius prolixus and Triatoma dimidiata by periodic house spraying with pyrethroid insecticides. This study aimed to characterize infestation patterns and analyze the contribution of programmatic practices to these patterns. Spatial infestation patterns at three time points were identified using the Getis-Ord Gi*(d) test. Logistic regression was used to assess predictors of reinfestation after pyrethroid insecticide administration. Spatial analysis showed high and low clusters of infestation at all three time points. After two rounds of spraying, 178 communities persistently fell in high infestation clusters. A time lapse between rounds of vector control greater than 6 months was associated with 1.54 (95% confidence interval = 1.07-2.23) times increased odds of reinfestation after the first spray, whereas a time lapse of greater than 1 year was associated with 2.66 (95% confidence interval = 1.85-3.83) times increased odds of reinfestation after the first spray, compared with localities where the time lapse was less than 180 days. The time lapse between rounds of vector control should therefore remain under 1 year. Spatial analysis can guide targeted vector control efforts by enabling tracking of reinfestation hotspots and improved targeting of resources. PMID:22403315
The Trial Software version for DEMETER power spectrum files visualization and mapping
NASA Astrophysics Data System (ADS)
Lozbin, Anatoliy; Inchin, Alexander; Shpadi, Maxim
2010-05-01
As part of the creation of Kazakhstan's Scientific Space System for earthquake precursor research, the hardware and software of the DEMETER satellite were investigated. The DEMETER data processing software is based on the SWAN package under the IDL Virtual Machine and implements many features, but it lacks an important tool for spectrogram analysis: space-time visualization of power spectrum files from the electromagnetic instruments ICE and IMSC. To address this gap, we have developed software which is offered for use. DeSS (DEMETER Spectrogram Software) is software for visualization, analysis, and mapping of power spectrum data from the electromagnetic instruments ICE and IMSC. Its primary goal is to give researchers a friendly tool for the analysis of electromagnetic data from the DEMETER satellite, for earthquake precursor and other ionospheric studies. The input data for DeSS are power spectrum files: - power spectrum of 1 component of the electric field in the VLF range (APID 1132); - power spectrum of 1 component of the electric field in the HF range (APID 1134); - power spectrum of 1 component of the magnetic field in the VLF range (APID 1137). The main features and operations of the software are: - various time and frequency filtering; - visualization of the time dependence of signal intensity at a fixed frequency; - spectral density visualization for a fixed frequency range; - spectrogram autosizing and smoothing; - the information at each point of the spectrogram: time, frequency, and intensity; - the spectrum information in a separate window, consisting of 4 blocks; - data mapping with a 6-range scale. On the map the following information can be browsed: - the satellite orbit; - the conjugate point at the satellite altitude; - the north conjugate point at an altitude of 110 km; - the south conjugate point at an altitude of 110 km. This is a trial software version intended to help researchers, and we are always ready to collaborate with scientists on improving the software. References: 1. D. Lagoutte, J.Y. Brochot, D. de Carvalho, L. Madrias and M. Parrot. DEMETER Microsatellite. Scientific Mission Center. Data product description. DMT-SP-9-CM-6054-LPC. 2. D. Lagoutte, J.Y. Brochot, P. Latremoliere. SWAN - Software for Waveform Analysis. LPCE/NI/003.E - Part 1 (User's guide), Part 2 (Analysis tools), Part 3 (User's project interface).
An unjustified benefit: immortal time bias in the analysis of time-dependent events.
Gleiss, Andreas; Oberbauer, Rainer; Heinze, Georg
2018-02-01
Immortal time bias is a problem arising from methodologically wrong analyses of time-dependent events in survival analyses. We illustrate the problem by analysis of a kidney transplantation study. Following patients from transplantation to death, groups defined by the occurrence or nonoccurrence of graft failure during follow-up seemingly had equal overall mortality. Such naive analysis assumes that patients were assigned to the two groups at time of transplantation, which actually are a consequence of occurrence of a time-dependent event later during follow-up. We introduce landmark analysis as the method of choice to avoid immortal time bias. Landmark analysis splits the follow-up time at a common, prespecified time point, the so-called landmark. Groups are then defined by time-dependent events having occurred before the landmark, and outcome events are only considered if occurring after the landmark. Landmark analysis can be easily implemented with common statistical software. In our kidney transplantation example, landmark analyses with landmarks set at 30 and 60 months clearly identified graft failure as a risk factor for overall mortality. We give further typical examples from transplantation research and discuss strengths and limitations of landmark analysis and other methods to address immortal time bias such as Cox regression with time-dependent covariables. © 2017 Steunstichting ESOT.
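A minimal landmark-analysis sketch of the recipe above, assuming the lifelines package; the data frame (follow-up time in months, death flag, graft-failure time with NaN if none) is entirely fabricated, so the fitted effect is not meaningful.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

LANDMARK = 30.0  # months after transplantation

rng = np.random.default_rng(11)
n = 500
df = pd.DataFrame({
    "time": rng.exponential(80.0, n),                      # follow-up
    "death": rng.integers(0, 2, n),                        # event flag
    "t_fail": np.where(rng.random(n) < 0.3,
                       rng.exponential(40.0, n), np.nan),  # graft failure
})

# 1) Keep only patients still under follow-up at the landmark.
lm = df[df["time"] > LANDMARK].copy()
# 2) Define groups by graft failure having occurred BEFORE the landmark
#    (a NaN comparison evaluates to False, i.e. "no failure").
lm["failed_by_lm"] = (lm["t_fail"] <= LANDMARK).astype(int)
# 3) Restart the clock at the landmark; outcomes count only after it.
lm["time_from_lm"] = lm["time"] - LANDMARK

cph = CoxPHFitter().fit(lm[["time_from_lm", "death", "failed_by_lm"]],
                        duration_col="time_from_lm", event_col="death")
print(cph.summary[["exp(coef)", "p"]])
```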
Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia
2017-07-28
Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model for analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for selecting the best covariate to split on from that of the best split-point search for the selected covariate. In this study, we compare the random survival forest model to the conditional inference forest model (CIF) using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under five years of age in Uganda and consists of categorical covariates, most of which have more than two levels (many split-points). The second dataset is based on the survival of patients with extensively drug-resistant tuberculosis (XDR TB) and consists of mainly categorical covariates with two levels (few split-points). The study findings indicate that the conditional inference forest model is superior to random survival forest models for analysing time-to-event data whose covariates have many split-points, based on the values of the bootstrap cross-validated estimates of integrated Brier scores. However, conditional inference forests perform comparably to random survival forest models for analysing time-to-event data whose covariates have fewer split-points. Although survival forests are promising methods for analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of the covariates of the dataset in question.
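A sketch of the RSF side of the comparison above, assuming the scikit-survival (sksurv) package; the covariates mimic the paper's contrast between many-level and two-level categorical predictors, and the data are fake. (Conditional inference forests are typically fit with R's party/partykit rather than in Python.)

```python
import numpy as np
import pandas as pd
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(2)
n = 300
X = pd.DataFrame({
    "region": rng.integers(0, 8, n),   # many-level covariate (many split points)
    "urban": rng.integers(0, 2, n),    # two-level covariate (few split points)
})
y = Surv.from_arrays(event=rng.random(n) < 0.6,
                     time=rng.exponential(50.0, n))

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10,
                           random_state=0)
rsf.fit(X, y)
print("training concordance:", rsf.score(X, y))
```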
Ricker, Martin; Peña Ramírez, Víctor M; von Rosen, Dietrich
2014-01-01
Growth curves are monotonically increasing functions obtained by measuring the same subjects repeatedly over time. The classical growth curve model in the statistical literature is the Generalized Multivariate Analysis of Variance (GMANOVA) model. In order to model the tree trunk radius (r) over time (t) of trees on different sites, GMANOVA is combined here with the adapted PL regression model Q = A·T + E, where for b ≠ 0: Q = Ei[-b·r] - Ei[-b·r1], and for b = 0: Q = ln[r/r1]; here A is the initial relative growth to be estimated, T = t - t1, and E is an error term for each tree and time point. Furthermore, Ei[-b·r] = ∫(exp[-b·r]/r)dr and b = -1/TPR, with TPR being the turning point radius in a sigmoid curve, and r1 at t1 an estimated calibrating time-radius point. Advantages of the approach are that growth rates can be compared among growth curves with different turning point radii and different starting points, hidden outliers are easily detectable, the method is statistically robust, and heteroscedasticity of the residuals among time points is allowed. The model was implemented with dendrochronological data of 235 Pinus montezumae trees on ten Mexican volcano sites to calculate comparison intervals for the estimated initial relative growth A. One site (at the Popocatépetl volcano) stood out, with A being 3.9 times the value of the site with the slowest-growing trees. Calculating variance components for the initial relative growth, 34% of the growth variation was found among sites, 31% among trees, and 35% over time. Without the Popocatépetl site, the numbers changed to 7%, 42%, and 51%. Further explanation of differences in growth would need to focus on factors that vary within sites and over time.
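A small sketch of the transformed response Q from the model above, using SciPy's exponential integral Ei; the radii and parameter values are illustrative, not the dendrochronological data.

```python
import numpy as np
from scipy.special import expi   # expi(x) is the exponential integral Ei

def q_transform(r, r1, b):
    """Q = Ei[-b r] - Ei[-b r1] for b != 0, and ln[r/r1] for b = 0."""
    if b == 0.0:
        return np.log(r / r1)
    return expi(-b * r) - expi(-b * r1)

r = np.linspace(2.0, 20.0, 5)     # trunk radii (cm)
r1, tpr = 2.0, 8.0                # calibrating radius and turning point radius
b = -1.0 / tpr                    # b = -1/TPR
print(q_transform(r, r1, b))
```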
NASA Astrophysics Data System (ADS)
Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti
2016-04-01
Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in some local places where the point cloud information is not sufficiently dense. Those limits can be overcome by using deformation analysis exploiting directly the original 3D point clouds assuming some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the displacement fields. Displacement fields derived from both approaches are then combined and provide a better understanding of the landslide kinematics.
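A sketch of a simple cloud-to-cloud (C2C) comparison of the kind used above alongside ICP and M3C2: for each point of the later epoch, the distance to the nearest point of the earlier epoch. The clouds here are random stand-ins for TLS scans.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(9)
epoch_a = rng.uniform(0, 50, size=(20000, 3))          # earlier scan
epoch_b = epoch_a + np.array([0.3, 0.0, -0.1])         # later scan, shifted
epoch_b += rng.normal(0, 0.02, epoch_b.shape)          # scanner noise

tree = cKDTree(epoch_a)
dist, _ = tree.query(epoch_b, k=1)
print(f"median C2C distance: {np.median(dist):.3f} m")
# C2C is fast but mixes real displacement with surface roughness and
# occlusion effects, which is why ICP- and M3C2-style comparisons are also
# used in practice.
```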
Maggi, Federico; Bosco, Domenico; Galetto, Luciana; Palmano, Sabrina; Marzachì, Cristina
2017-01-01
Analyses of space-time statistical features of a flavescence dorée (FD) epidemic in Vitis vinifera plants are presented. FD spread was surveyed from 2011 to 2015 in a vineyard of 17,500 m2 surface area in the Piemonte region, Italy; counts and positions of symptomatic plants were used to test the hypothesis of complete spatial randomness and isotropy of the epidemic in the static (year-by-year) point pattern. Dynamic (year-to-year) point pattern analyses were applied to newly infected and recovered plants to highlight the statistics of FD progression and regression over time. Results highlighted point patterns ranging from dispersed (at small scales) to aggregated (at large scales) over the years, suggesting that the FD epidemic is characterized by multiscale properties that may depend on infection incidence, vector population, and flight behavior. Dynamic analyses showed moderate preferential progression and regression along rows. Nearly uniform distributions of direction, and negative exponential distributions of distance, of newly symptomatic and recovered plants relative to existing symptomatic plants highlighted features of vector mobility similar to Brownian motion. This evidence indicates that space-time epidemic modeling should include the environmental setting (e.g., vineyard geometry and topography) to capture anisotropy, as well as statistical features of vector flight behavior, plant recovery and susceptibility, and plant mortality. PMID:28111581
Analysis of short single rest/activation epoch fMRI by self-organizing map neural network
NASA Astrophysics Data System (ADS)
Erberich, Stephan G.; Dietrich, Thomas; Kemeny, Stefan; Krings, Timo; Willmes, Klaus; Thron, Armin; Oberschelp, Walter
2000-04-01
Functional magnetic resonance imaging (fMRI) has become a standard noninvasive brain imaging technique delivering high spatial resolution. Brain activation is detected through the magnetic susceptibility of blood oxygenation (the BOLD effect) during an activation task, e.g. motor, auditory, and visual tasks. Usually box-car paradigms have 2 - 4 rest/activation epochs, with an overall total of at least 50 volumes per scan in the time domain. Analysis methods based on statistical tests, such as Student's t-test, need a large number of repeatedly acquired brain volumes to gain statistical power. The technique introduced here, based on a self-organizing map neural network (SOM), makes use of the intrinsic features of the condition change between rest and activation epochs and was shown to differentiate between the conditions with fewer time points, using only one rest and one activation epoch. The method reduces scan and analysis time and the probability of motion artifacts from the relaxation of the patient's head. fMRI scans of patients for pre-surgical evaluation and of volunteers were acquired with motor (hand clenching and finger tapping), sensory (ice application), auditory (phonological and semantic word recognition tasks), and visual paradigms (mental rotation). For imaging we used different BOLD-contrast-sensitive Gradient Echo Planar Imaging (GE-EPI) single-shot pulse sequences (TR 2000 and 4000, 64 × 64 and 128 × 128, 15 - 40 slices) on a Philips Gyroscan NT 1.5 Tesla MR imager. All paradigms were RARARA (R = rest, A = activation) with an epoch width of 11 time points each. We used the self-organizing neural network implementation described by T. Kohonen with a 4 × 2 2D neuron map. The presented time-course vectors were clustered by similar features in the 2D neuron map. Three neural networks were trained and used for labeling with the time-course vectors of one, two, and all three on/off epochs. The results were also compared with a Kolmogorov-Smirnov statistical test of all 66 time points. To remove non-periodic time courses from training, an auto-correlation function and band-limiting Fourier filtering combined with Gaussian temporal smoothing were used. None of the trained maps, with one, two, or three epochs, were significantly different, which indicates that the feature space of only one on/off epoch is sufficient to differentiate between the rest and task conditions. We found that without pre-processing of the data no meaningful results can be achieved, because the large number of non-activated and background voxels represents the majority of the features and is therefore what the SOM learns. Thus it is crucial to remove unnecessary load on the network's capacity by selecting the training input using an auto-correlation function and/or Fourier spectrum analysis. However, reducing the time points to one rest and one activation epoch makes both strong auto-correlation and a precise periodic frequency vanish. Self-organizing maps can thus be used to separate rest and activation epochs with only one third of the usually acquired time points. Because the SOM technique is based on pattern or feature separation, only the presence of a state change between the conditions is necessary for differentiation.
The variance of the individual hemodynamic response function (HRF) and the variance of the spatially differing regional cerebral blood flow (rCBF) are also learned from the subject rather than compared with a fixed model, as in statistical evaluation. We found that reducing the information to only a few time points around the BOLD response was not successful, due to delays in rCBF and the insufficient extent of the BOLD feature in the time domain. Especially for routine patient observation and pre-surgical planning, a reduced scan time is of interest.
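A minimal Kohonen SOM training loop in the spirit of the 4 × 2 neuron map described above might look like the following; the grid size, learning-rate schedule, and synthetic time-course data are illustrative assumptions, not the authors' implementation.

```python
# Tiny Kohonen SOM: cluster voxel time-course vectors on a small 2-D neuron map.
import numpy as np

def train_som(data, rows=4, cols=2, epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(rows * cols, data.shape[1]))             # neuron weights
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)                            # decaying rate
        sigma = sigma0 * np.exp(-t / epochs)                      # shrinking radius
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))           # best-matching unit
            h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma**2))
            w += lr * h[:, None] * (x - w)                        # pull neighborhood
    return w

rng = np.random.default_rng(1)
rest = rng.normal(0, 1, (200, 22))               # 11 rest + 11 activation points
task = rest + np.r_[np.zeros(11), np.ones(11)]   # synthetic BOLD-like step
weights = train_som(np.vstack([rest, task]))
```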
Cosmic infinity: a dynamical system approach
NASA Astrophysics Data System (ADS)
Bouhmadi-López, Mariam; Marto, João; Morais, João; Silva, César M.
2017-03-01
Dynamical system techniques are extremely useful for studying cosmology. It turns out that in most cases we deal with finite, isolated fixed points corresponding to a given cosmological epoch. However, it is equally important to analyse the asymptotic behaviour of the universe. In this paper, we show how this can be carried out for 3-form models. In fact, we show that there are fixed points at infinity, mainly by introducing appropriate compactifications and defining a new time variable that washes away any potential divergence of the system. The richness of 3-form models also allows us to identify normally hyperbolic non-isolated fixed points. We apply this analysis to three physically interesting situations: (i) a pre-inflationary era; (ii) an inflationary era; (iii) the late-time dark matter/dark energy epoch.
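A generic Poincaré-type compactification, sketched schematically below, conveys the idea of bringing fixed points at infinity onto a bounded phase space; the variables are illustrative, not the paper's 3-form quantities.

```latex
% Schematic compactification sketch (illustrative variables):
% map each unbounded variable onto a finite interval,
\bar{x} \;=\; \frac{x}{\sqrt{1+x^{2}}} \in (-1,1),
\qquad x \to \pm\infty \;\Longleftrightarrow\; \bar{x} \to \pm 1,
% and rescale time so the flow stays regular at the boundary,
\frac{d\bar{\tau}}{d\tau} \;=\; \sqrt{1+x^{2}} ,
% so that fixed points ``at infinity'' become ordinary fixed points
% on the boundary \bar{x} = \pm 1 of the compactified phase space.
```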
Thomson, James R; Kimmerer, Wim J; Brown, Larry R; Newman, Ken B; Mac Nally, Ralph; Bennett, William A; Feyrer, Frederick; Fleishman, Erica
2010-07-01
We examined trends in abundance of four pelagic fish species (delta smelt, longfin smelt, striped bass, and threadfin shad) in the upper San Francisco Estuary, California, USA, over 40 years using Bayesian change point models. Change point models identify times of abrupt or unusual changes in absolute abundance (step changes) or in rates of change in abundance (trend changes). We coupled Bayesian model selection with linear regression splines to identify biotic or abiotic covariates with the strongest associations with abundances of each species. We then refitted change point models conditional on the selected covariates to explore whether those covariates could explain statistical trends or change points in species abundances. We also fitted a multispecies change point model that identified change points common to all species. All models included hierarchical structures to model data uncertainties, including observation errors and missing covariate values. There were step declines in abundances of all four species in the early 2000s, with a likely common decline in 2002. Abiotic variables, including water clarity, position of the 2 per thousand isohaline (X2), and the volume of freshwater exported from the estuary, explained some variation in species' abundances over the time series, but no selected covariates could explain statistically the post-2000 change points for any species.
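For intuition, a bare-bones posterior over a single step-change location (flat prior, known noise level, plug-in segment means) can be computed as below; this is a toy stand-in for the hierarchical Bayesian change point models used in the study, which also handle observation error and missing covariates.

```python
# Toy posterior over a single step-change location in a mean-shifted series.
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """P(change at k | y) with a flat prior; segment means plugged in."""
    n = len(y)
    loglik = np.full(n, -np.inf)
    for k in range(1, n - 1):                 # change between indices k-1 and k
        a, b = y[:k], y[k:]
        resid = np.concatenate([a - a.mean(), b - b.mean()])
        loglik[k] = -0.5 * np.sum(resid**2) / sigma**2
    p = np.exp(loglik - loglik.max())
    return p / p.sum()

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(5, 1, 33), rng.normal(2, 1, 7)])  # late step decline
print(np.argmax(changepoint_posterior(y)))    # most probable change index (~33)
```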
Dropout and Federal Graduation Rates 2013-2014. Research Brief. Volume 1407
ERIC Educational Resources Information Center
Froman, Terry
2015-01-01
The District conducts two kinds of dropout analyses every year in Miami-Dade County Public Schools. The "cross-sectional" analysis of student dropouts examines dropout rates among students enrolled in various grades at one point in time. A "longitudinal" analysis, also conducted annually, tracks a group of students in the same…
Dropout and Graduation Rates 2008-2009. Research Brief. Volume 0902
ERIC Educational Resources Information Center
Research Services, Miami-Dade County Public Schools, 2010
2010-01-01
The District conducts a "cross-sectional" analysis of student dropouts annually; it examines dropout rates among students enrolled in various grades at one point in time. A "longitudinal" analysis, also conducted annually, tracks a group of students in the same grade or cohort over a period of several years. Each method…
Dropout and Graduation Rates 2009-2010. Research Brief. Volume 1101
ERIC Educational Resources Information Center
Research Services, Miami-Dade County Public Schools, 2011
2011-01-01
The District conducts a "cross-sectional" analysis of student dropouts annually; it examines dropout rates among students enrolled in various grades at one point in time. A "longitudinal" analysis, also conducted annually, tracks a group of students in the same grade or cohort over a period of several years. Each method…
Dropout and Graduation Rates 2010-2011. Research Brief. Volume 1107
ERIC Educational Resources Information Center
Research Services, Miami-Dade County Public Schools, 2012
2012-01-01
The District conducts a "cross-sectional" analysis of student dropouts annually; it examines dropout rates among students enrolled in various grades at one point in time. A "longitudinal" analysis, also conducted annually, tracks a group of students in the same grade or cohort over a period of several years. Each method…
A Fantasy Theme Analysis of Nixon's "Checkers" Speech.
ERIC Educational Resources Information Center
Wells, William T.
1996-01-01
Applies fantasy theme analysis to Richard Nixon's "Checkers" speech. States that three major themes emerge: Nixon as Moral Model, Nixon as the American Dream, and Nixon as Patriot. Points out that each issue responds to allegations of dishonesty that were leveled against him at the time. Argues that Nixon's speech was accepted and…
Dropout and Graduation Rates 2007-2008. Research Brief. Volume 0804
ERIC Educational Resources Information Center
Research Services, Miami-Dade County Public Schools, 2009
2009-01-01
The District conducts a "cross-sectional" analysis of student dropouts annually; it examines dropout rates among students enrolled in various grades at one point in time. A "longitudinal" analysis, also conducted annually, tracks a group of students in the same grade or cohort over a period of several years. Each method…
ERIC Educational Resources Information Center
Bashkov, Bozhidar M.; Finney, Sara J.
2013-01-01
Traditional methods of assessing construct stability are reviewed and longitudinal mean and covariance structures (LMACS) analysis, a modern approach, is didactically illustrated using psychological entitlement data. Measurement invariance and latent variable stability results are interpreted, emphasizing substantive implications for educators and…
Educational Leadership Effectiveness: A Rasch Analysis
ERIC Educational Resources Information Center
Sinnema, Claire; Ludlow, Larry; Robinson, Viviane
2016-01-01
Purpose: The purposes of this paper are, first, to establish the psychometric properties of the ELP tool, and, second, to test, using a Rasch item response theory analysis, the hypothesized progression of challenge presented by the items included in the tool. Design/ Methodology/ Approach: Data were collected at two time points through a survey of…
Academic and Language Outcomes in Children after Traumatic Brain Injury: A Meta-Analysis
ERIC Educational Resources Information Center
Vu, Jennifer A.; Babikian, Talin; Asarnow, Robert F.
2011-01-01
Expanding on Babikian and Asarnow's (2009) meta-analytic study examining neurocognitive domains, this meta-analysis examined academic and language outcomes at different time points after traumatic brain injury (TBI) in children and adolescents. Although children with mild TBI exhibited no significant deficits, studies indicate that children…
Earthquake Forecasting Through Semi-periodicity Analysis of Labeled Point Processes
NASA Astrophysics Data System (ADS)
Quinteros Cartaya, C. B. M.; Nava Pichardo, F. A.; Glowacka, E.; Gomez-Trevino, E.
2015-12-01
Large earthquakes have semi-periodic behavior as a result of self-organized critical processes of stress accumulation and release in some seismogenic region. Thus, large earthquakes in a region constitute semi-periodic sequences with recurrence times varying slightly from periodicity. Nava et al., 2013 and Quinteros et al., 2013 realized that not all earthquakes in a given region need to belong to the same sequence, since there can be more than one process of stress accumulation and release in it; they also proposed a method to identify semi-periodic sequences through analytic Fourier analysis. This work presents improvements to the above-mentioned method: accounting for the influence of earthquake size on the spectral analysis and its importance in identifying semi-periodic events, which means that earthquake occurrence times are treated as a labeled point process; estimating appropriate upper-limit uncertainties for use in forecasts; and using Bayesian analysis to evaluate forecast performance. This improved method is applied to specific regions: the southwestern coast of Mexico, the northeastern Japan Arc, the San Andreas Fault zone at Parkfield, and northeastern Venezuela.
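One simple way to scan a labeled (magnitude-weighted) catalog for semi-periodicity is a Schuster-style power spectrum of the occurrence times; the sketch below is a generic illustration of that idea with made-up event times, not the authors' exact analytic Fourier method.

```python
# Schuster-style spectrum of a labeled point process: events at `times`
# with magnitude-derived `weights`, scanned over trial frequencies (1/yr).
import numpy as np

def labeled_schuster_power(times, weights, freqs):
    phases = 2j * np.pi * np.outer(freqs, times)
    amp = (weights * np.exp(phases)).sum(axis=1)
    return np.abs(amp) ** 2 / np.sum(weights**2)

times = np.array([1900.2, 1911.8, 1923.5, 1935.1, 1947.0, 1958.6])  # hypothetical
weights = np.array([7.1, 7.3, 7.0, 7.4, 7.2, 7.1]) - 6.5            # size labels
freqs = np.linspace(0.01, 0.5, 2000)
best = freqs[np.argmax(labeled_schuster_power(times, weights, freqs))]
print(1.0 / best)   # recovered recurrence time, ~11.7 yr for these inputs
```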
Guo, Jinhong; Pui, Tze Sian; Ban, Yong-Ling; Rahman, Abdur Rub Abdur; Kang, Yuejun
2013-12-01
Conventional Coulter counters have been an important tool in biological cell assays for several decades. Recently, the emerging portable Coulter counter has demonstrated its merits in point-of-care diagnostics, such as on-chip detection and enumeration of circulating tumor cells (CTC). The working principle is based on the cell translocation time and the amplitude of the electrical current change that the cell induces. In this paper, we provide an analysis of a Coulter counter that evaluates the hydrodynamic and electrokinetic properties of polystyrene microparticles in a microfluidic channel. The hydrodynamic and electrokinetic forces are analyzed concurrently to determine the translocation time and the electrical current pulses induced by the particles. Finally, we characterize the chip performance for CTC detection. The experimental results validate the numerical analysis of the microfluidic chip. The presented model can provide critical insight and guidance for developing micro-Coulter counters for point-of-care prognosis.
ERIC Educational Resources Information Center
Bohanon, Hank; Fenning, Pamela; Hicks, Kira; Weber, Stacey; Thier, Kimberly; Aikins, Brigit; Morrissey, Kelly; Briggs, Alissa; Bartucci, Gina; McArdle, Lauren; Hoeper, Lisa; Irvin, Larry
2012-01-01
The purpose of this case study was to expand the literature base regarding the application of high school schoolwide positive behavior support in an urban setting for practitioners and policymakers to address behavior issues. In addition, the study describes the use of the Change Point Test as a method for analyzing time series data that are…
ERIC Educational Resources Information Center
Rock, Donald; Werts, Charles
The purpose of this study was to obtain information on both the number of individuals who retest and their patterns of score gain (or decrement) by sex and ability. Individuals who retested only once were found to gain about 26-27 points on the Graduate Record Examination (GRE) verbal test and about 23 points on the GRE quantitative test. This…
Land use change, and the implementation of best management practices to remedy the adverse effects of land use change, alter hydrologic patterns, contaminant loading and water quality in freshwater ecosystems. These changes are not constant over time, but vary in response to di...
USDA-ARS?s Scientific Manuscript database
We conduct a novel comprehensive investigation that seeks to prove the connection between spatial and time scales in surface soil moisture (SM) within the satellite footprint (~50 km). Modeled and measured point series at Yanco and Little Washita in situ networks are first decomposed into anomalies ...
Rate equation analysis and non-Hermiticity in coupled semiconductor laser arrays
NASA Astrophysics Data System (ADS)
Gao, Zihe; Johnson, Matthew T.; Choquette, Kent D.
2018-05-01
Optically coupled semiconductor laser arrays are described by coupled rate equations. The coupled mode equations and carrier densities are included in the analysis, which inherently incorporate the carrier-induced nonlinearities including gain saturation and amplitude-phase coupling. We solve the steady-state coupled rate equations and consider the cavity frequency detuning and the individual laser pump rates as the experimentally controlled variables. We show that the carrier-induced nonlinearities play a critical role in the mode control, and we identify gain contrast induced by cavity frequency detuning as a unique mechanism for mode control. Photon-mediated energy transfer between cavities is also discussed. Parity-time symmetry and exceptional points in this system are studied. Unbroken parity-time symmetry can be achieved by judiciously combining cavity detuning and unequal pump rates, while broken symmetry lies on the boundary of the optical locking region. Exceptional points are identified at the intersection between broken symmetry and unbroken parity-time symmetry.
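A schematic form of such coupled rate equations, in generic textbook notation rather than the paper's exact model, is:

```latex
% Coupled-mode fields a_{1,2} and carrier densities N_{1,2} of two coupled lasers:
% alpha = linewidth enhancement factor (amplitude-phase coupling),
% Delta-omega = cavity frequency detuning, kappa = inter-cavity coupling,
% P_{1,2} = pump rates, tau_s = carrier lifetime, g(N) = carrier-dependent gain.
\frac{da_{1,2}}{dt} \;=\; \tfrac{1}{2}(1+i\alpha)\,g(N_{1,2})\,a_{1,2}
   \;\mp\; \frac{i\,\Delta\omega}{2}\,a_{1,2} \;+\; i\kappa\,a_{2,1},
\qquad
\frac{dN_{1,2}}{dt} \;=\; P_{1,2} \;-\; \frac{N_{1,2}}{\tau_s}
   \;-\; g(N_{1,2})\,|a_{1,2}|^{2}.
```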
Lee, Shang-Yi; Hung, Chih-Jen; Chen, Chih-Chieh; Wu, Chih-Cheng
2014-11-01
Postoperative nausea and vomiting and postoperative pain are two major concerns for patients undergoing surgery and receiving anesthetics. Various models and predictive methods have been developed to investigate the risk factors of postoperative nausea and vomiting, and different types of preventive management have subsequently been developed. However, there continues to be a wide variation in the previously reported incidence rates of postoperative nausea and vomiting. This may have occurred because patients were assessed at different time points, coupled with the limitations of the statistical methods used. Using survival analysis with Cox regression, which factors in these time effects, may overcome this statistical limitation and reveal risk factors related to the occurrence of postoperative nausea and vomiting in the follow-up period. In this retrospective, observational, uni-institutional study, we analyzed the results of 229 patients who received patient-controlled epidural analgesia following surgery from June 2007 to December 2007. We investigated the risk factors for the occurrence of postoperative nausea and vomiting, and also assessed the effect of evaluating patients at different time points using the Cox proportional hazards model. Furthermore, the results of this inquiry were compared with the results obtained using logistic regression. The overall incidence of postoperative nausea and vomiting in our study was 35.4%. Using logistic regression, we found that only sex, but not the total doses or the average dose of opioids, had significant effects on the occurrence of postoperative nausea and vomiting at some time points. Cox regression showed that a higher average consumed dose of opioids correlated with a higher incidence of postoperative nausea and vomiting, with a hazard ratio of 1.286. Survival analysis using Cox regression showed that the average consumption of opioids played an important role in postoperative nausea and vomiting, a result not found by logistic regression. Therefore, the incidence of postoperative nausea and vomiting in patients cannot be reliably determined on the basis of a single visit at one point in time. Copyright © 2014. Published by Elsevier Taiwan.
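A Cox proportional hazards fit of this kind can be sketched with the lifelines package; the tiny data frame below is hypothetical and only demonstrates the mechanics of modeling time to first PONV with sex and average opioid dose as covariates.

```python
# Sketch of a Cox PH fit for time to first postoperative nausea/vomiting (PONV).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({                      # hypothetical patients
    "time_h": [6, 24, 48, 12, 72, 30, 10, 60],
    "ponv":   [1, 1, 0, 1, 0, 1, 1, 0],  # 1 = PONV occurred, 0 = censored
    "female": [1, 0, 1, 1, 0, 1, 1, 0],
    "avg_opioid_dose": [2.1, 1.0, 0.8, 1.9, 0.5, 1.4, 2.3, 0.6],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="time_h", event_col="ponv")
cph.print_summary()   # hazard ratios, comparable in spirit to the reported 1.286
```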
Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S
2016-06-01
We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to the multi-subject functional neuroimaging data for brain connectivity analysis, where time series data-level measurements are nested within subjects.
Huang, Kuo-Chen; Yeh, Po-Chan
2007-04-01
The present study investigated the effects of numeral size, spacing between targets, and exposure time on discrimination performance by elderly and younger people using a liquid crystal display screen. Analysis showed that numeral size significantly affected discrimination, which improved with increasing numeral size. Spacing between targets also had a significant effect on discrimination, i.e., the larger the space between numerals, the better their discrimination. When the spacing between numerals increased to 4 or 5 points, however, discrimination did not increase beyond that for 3-point spacing. Although performance increased with increasing exposure time, the difference in discrimination at an exposure time of 0.8 vs 1.0 sec. was not significant. The accuracy of the elderly group was lower than that of the younger subjects.
Miró, Conrado; Baeza, Antonio; Madruga, María J; Periañez, Raul
2012-11-01
The objective of this work consisted of analysing the spatial and temporal evolution of two radionuclide concentrations in the Tagus River. Time-series analysis techniques and numerical modelling have been used in this study. ¹³⁷Cs and ⁹⁰Sr concentrations were measured from 1994 to 1999 at several sampling points in Spain and Portugal. These radionuclides have been introduced into the river by liquid releases from several nuclear power plants in Spain, as well as by global fallout. Time-series analysis techniques have allowed the determination of radionuclide transit times along the river, and have also pointed out the existence of temporal cycles in radionuclide concentrations at some sampling points, which are attributed to water management in the reservoirs placed along the Tagus River. A stochastic dispersion model, in which transport with water, radioactive decay, and water-sediment interactions are solved through Monte Carlo methods, has been developed. Model results are, in general, in reasonable agreement with measurements. The model has finally been applied to the calculation of the mean ages of the radioactive content in water and sediments in each reservoir. This kind of model can be a very useful tool to support the decision-making process after an eventual emergency situation. Copyright © 2012 Elsevier Ltd. All rights reserved.
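The core of such a stochastic dispersion model can be sketched as particles undergoing advection, Fickian diffusion, and radioactive decay; the one-dimensional toy below uses illustrative constants for ¹³⁷Cs and omits the water-sediment interaction of the actual model.

```python
# 1-D Monte Carlo transport toy: advection + diffusion + decay of 137Cs particles.
import numpy as np

def transport_mc(n=10000, steps=500, dt=3600.0, u=0.3, k_diff=1.0, seed=0):
    lam = np.log(2) / (30.04 * 3.156e7)        # 137Cs decay constant (1/s)
    rng = np.random.default_rng(seed)
    x = np.zeros(n)                            # all particles released at x = 0
    alive = np.ones(n, dtype=bool)
    for _ in range(steps):
        x[alive] += u * dt + rng.normal(0, np.sqrt(2 * k_diff * dt), alive.sum())
        alive &= rng.random(n) > lam * dt      # decay removes particles
    return x[alive]

positions = transport_mc()
print(positions.mean(), positions.std())      # plume centroid and spread
```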
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea
In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.
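The higher-order moments mentioned (quads and pents) are reduced factorial moments of the measured multiplicity distribution; a minimal sketch of their computation from a multiplicity histogram, with made-up counts, follows.

```python
# Reduced factorial moments m_k = sum_n C(n, k) P(n) from a multiplicity histogram.
import numpy as np
from math import comb

def reduced_factorial_moments(counts, kmax=5):
    p = np.asarray(counts, float)
    p /= p.sum()                               # normalize to probabilities P(n)
    return [sum(comb(n, k) * p[n] for n in range(len(p)))
            for k in range(1, kmax + 1)]

hist = [500, 300, 120, 60, 15, 5]              # hypothetical neutrons-per-gate counts
m1, m2, m3, m4, m5 = reduced_factorial_moments(hist)
print(m1, m2, m3, m4, m5)                      # singles ... quads, pents
```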
Higuchi, Takahiro; Ishizaki, Yuko; Noritake, Atsushi; Yanagimoto, Yoshitoki; Kobayashi, Hodaka; Nakamura, Kae; Kaneko, Kazunari
2017-01-01
Children with autism spectrum disorders (ASD) who have neurodevelopmental impairments in social communication often refuse to go to school because of difficulties in learning in class. The exact cause of maladaptation to school in such children is unknown. We hypothesized that these children have difficulty in paying attention to objects at which teachers are pointing. We performed gaze behavior analysis of children with ASD to understand their difficulties in the classroom. The subjects were 26 children with ASD (19 boys and 7 girls; mean age, 8.6 years) and 27 age-matched children with typical development (TD) (14 boys and 13 girls; mean age, 8.2 years). We measured eye movements of the children while they performed free viewing of two movies depicting actual classes: a Japanese class in which a teacher pointed at cartoon characters and an arithmetic class in which the teacher pointed at geometric figures. In the analysis, we defined the regions of interest (ROIs) as the teacher’s face and finger, the cartoon characters and geometric figures at which the teacher pointed, and the classroom wall that contained no objects. We then compared total gaze time for each ROI between the children with ASD and TD by two-way ANOVA. Children with ASD spent less gaze time on the cartoon characters pointed at by the teacher; they spent more gaze time on the wall in both classroom scenes. We could differentiate children with ASD from those with TD almost perfectly by the proportion of total gaze time that children with ASD spent looking at the wall. These results suggest that children with ASD do not follow the teacher’s instructions in class and persist in gazing at inappropriate visual areas such as walls. Thus, they may have difficulties in understanding content in class, leading to maladaptation to school. PMID:28472111
Estimation of error on the cross-correlation, phase and time lag between evenly sampled light curves
NASA Astrophysics Data System (ADS)
Misra, R.; Bora, A.; Dewangan, G.
2018-04-01
Temporal analysis of radiation from astrophysical sources such as active galactic nuclei, X-ray binaries, and gamma-ray bursts provides information on the geometry and sizes of the emitting regions. Establishing that two light curves in different energy bands are correlated, and measuring the phase and time lag between them, is an important and frequently used temporal diagnostic. Generally the estimates are made by dividing the light curves into a large number of adjacent intervals to find the variance, or by using numerically expensive simulations. In this work we present alternative expressions for estimating the errors on the cross-correlation, phase, and time lag between two shorter light curves when they cannot be divided into segments. Thus the estimates presented here allow for analysis of light curves with a relatively small number of points, as well as for obtaining information on the longest time scales available. The expressions have been tested using 200 light curves simulated from both white and 1/f stochastic processes with measurement errors. We also present an application to the XMM-Newton light curves of the active galactic nucleus Akn 564. The example shows that the estimates presented here allow for analysis of light curves with a relatively small (∼1000) number of points.
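A bare-bones estimate of the cross-correlation peak and time lag between two evenly sampled light curves can be written as below; the error expressions that are the paper's actual contribution are omitted, and the normalization is a simplifying assumption.

```python
# Peak cross-correlation and lag between two evenly sampled light curves.
import numpy as np

def ccf_peak_lag(x, y, dt):
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    ccf = np.correlate(x, y, mode="full") / len(x)      # normalized CCF
    lags = np.arange(-len(x) + 1, len(x)) * dt
    return lags[np.argmax(ccf)], ccf.max()

rng = np.random.default_rng(0)
soft = rng.normal(size=1000)
hard = np.roll(soft, 5) + 0.3 * rng.normal(size=1000)   # hard band delayed 5 bins
print(ccf_peak_lag(soft, hard, dt=1.0))                 # peak near lag -5
```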
Churkin, Alexander; Barash, Danny
2008-01-01
Background: RNAmute is an interactive Java application which, given an RNA sequence, calculates the secondary structure of all single point mutations and organizes them into categories according to their similarity to the predicted structure of the wild type. The secondary structure predictions are performed using the Vienna RNA package. A more efficient implementation of RNAmute is needed, however, to extend from the case of single point mutations to the general case of multiple point mutations, which may often be desired for computational predictions alongside mutagenesis experiments. But analyzing multiple point mutations, a process that requires traversing all possible mutations, becomes highly expensive, since the running time is O(n^m) for a sequence of length n with m-point mutations. Using Vienna's RNAsubopt, we present a method that selects only those mutations, based on stability considerations, which are likely to be conformationally rearranging. The approach is best examined using the dot plot representation of RNA secondary structure. Results: Using RNAsubopt, the suboptimal solutions for a given wild-type sequence are calculated once. Then, specific mutations are selected that are most likely to cause a conformational rearrangement. For an RNA sequence of about 100 nts and 3-point mutations (n = 100, m = 3), for example, the proposed method reduces the running time from several hours or even days to several minutes, thus enabling the practical application of RNAmute to the analysis of multiple-point mutations. Conclusion: A highly efficient addition to RNAmute that is as user friendly as the original application but that facilitates the practical analysis of multiple-point mutations is presented. Such an extension can now be exploited prior to site-directed mutagenesis experiments by virologists, for example, who investigate the change of function in an RNA virus via mutations that disrupt important motifs in its secondary structure. A complete explanation of the application, called MultiRNAmute, is available at [1]. PMID:18445289
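The combinatorial blow-up that motivates the method is easy to make concrete: a length-n sequence has C(n, m)·3^m distinct m-point mutants, which grows as O(n^m). A tiny illustration, under that counting assumption:

```python
# Count m-point mutants: choose m positions, 3 alternative bases at each.
from math import comb

def n_mutants(n, m):
    return comb(n, m) * 3**m

print(n_mutants(100, 1))   # 300 single mutants
print(n_mutants(100, 3))   # 4,365,900 triple mutants: why exhaustive folding is costly
```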
Hardware Model of a Shipboard Generator
2009-05-19
[Nomenclature fragment recovered from extraction residue: controller output; PM, motor power; RM, motor resistance; Td, derivative time constant; Tf1, fuel valve time constant; Tg1, governor time constant; Tg2, governor …] … in speed, sending a response signal to the fuel valve that regulates gas turbine power. At this point, there is an inherent variation between the … basic response analysis [5].
Normanno, Nicola; Pinto, Carmine; Taddei, Gianluigi; Gambacorta, Marcello; Castiglione, Francesca; Barberis, Massimo; Clemente, Claudio; Marchetti, Antonio
2013-06-01
The Italian Association of Medical Oncology (AIOM) and the Italian Society of Pathology and Cytology organized an external quality assessment (EQA) scheme for EGFR mutation testing in non-small-cell lung cancer. Ten specimens, including three small biopsies with known epidermal growth factor receptor (EGFR) mutation status, were validated in three referral laboratories and provided to 47 participating centers. The participants were requested to perform mutational analysis, using their usual method, and to submit results within a 4-week time frame. According to a predefined scoring system, two points were assigned to a correct genotype and zero points to false-negative or false-positive results. The threshold to pass the EQA was set at higher than 18 of 20 points. Two rounds were preplanned. All participating centers submitted the results within the time frame. Polymerase chain reaction (PCR)/sequencing was the main methodology used (n = 37 laboratories), although a few centers used pyrosequencing (n = 8) or real-time PCR (n = 2). A significant number of analytical errors were observed (n = 20), with a high frequency of false-positive results (n = 16). The lowest scores were obtained for the small biopsies. The 14 of 47 centers (30%) that did not pass the first round, with scores of 18 points or less, all used PCR/sequencing, whereas all 10 laboratories using pyrosequencing or real-time PCR passed the first round. Eight laboratories passed the second round. Overall, 41 of 47 centers (87%) passed the EQA. The results of the EQA for EGFR testing in non-small-cell lung cancer suggest that good-quality EGFR mutational analysis is performed in Italian laboratories, although differences between testing methods were observed, especially for small biopsies.
Mathematical embryology: the fluid mechanics of nodal cilia
NASA Astrophysics Data System (ADS)
Smith, D. J.; Smith, A. A.; Blake, J. R.
2011-07-01
Left-right symmetry breaking is critical to vertebrate embryonic development; in many species this process begins with cilia-driven flow in a structure termed the `node'. Primary `whirling' cilia, tilted towards the posterior, transport morphogen-containing vesicles towards the left, initiating left-right asymmetric development. We review recent theoretical models based on the point-force stokeslet and point-torque rotlet singularities, explaining how rotation and surface-tilt produce directional flow. Analysis of image singularity systems enforcing the no-slip condition shows how tilted rotation produces a far-field `stresslet' directional flow, and how time-dependent point-force and time-independent point-torque models are in this respect equivalent. Associated slender body theory analysis is reviewed; this approach enables efficient and accurate simulation of three-dimensional time-dependent flow, time-dependence being essential in predicting features of the flow such as chaotic advection, which have subsequently been determined experimentally. A new model for the nodal flow utilising the regularized stokeslet method is developed, to model the effect of the overlying Reichert's membrane. Velocity fields and particle paths within the enclosed domain are computed and compared with the flow profiles predicted by previous `membrane-less' models. Computations confirm that the presence of the membrane produces flow-reversal in the upper region, but no continuous region of reverse flow close to the epithelium. The stresslet far-field is no longer evident in the membrane model, due to the depth of the cavity being of similar magnitude to the cilium length. Simulations predict that vesicles released within one cilium length of the epithelium are generally transported to the left via a `loopy drift' motion, sometimes involving highly unpredictable detours around leftward cilia [truncated]
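For reference, the free-space singularities underlying these models are the stokeslet (point force F) and the rotlet (point torque T) of Stokes flow; image systems are added to enforce the no-slip condition on the epithelium.

```latex
% Stokeslet (point force F_j) and rotlet (point torque T_j) velocity fields:
u_i^{\mathrm{stokeslet}}(\mathbf{r}) \;=\; \frac{1}{8\pi\mu}
   \left( \frac{\delta_{ij}}{r} + \frac{r_i r_j}{r^{3}} \right) F_j ,
\qquad
u_i^{\mathrm{rotlet}}(\mathbf{r}) \;=\; \frac{\epsilon_{ijk}\, T_j\, r_k}{8\pi\mu\, r^{3}} .
```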
Parsimonious model for blood glucose level monitoring in type 2 diabetes patients.
Zhao, Fang; Ma, Yan Fen; Wen, Jing Xiao; Du, Yan Fang; Li, Chun Lin; Li, Guang Wei
2014-07-01
To establish a parsimonious model for blood glucose monitoring in patients with type 2 diabetes receiving oral hypoglycemic agent treatment. One hundred and fifty-nine adult Chinese type 2 diabetes patients were randomized to receive rapid-acting or sustained-release gliclazide therapy for 12 weeks. Their blood glucose levels were measured at 10 time points over a 24 h period before and after treatment, and the 24 h mean blood glucose (MBG) level was calculated. The contribution of blood glucose levels to the MBG and to HbA1c was assessed by multiple regression analysis. The correlation coefficients between blood glucose levels measured at the 10 time points and the daily MBG were 0.58-0.74 and 0.59-0.79, respectively, before and after treatment (P<0.0001). Multiple stepwise regression analysis showed that the blood glucose levels measured at 6 of the 10 time points could explain 95% and 97% of the changes in MBG before and after treatment. Three of the 10 time points (fasting, 2 h after breakfast, and before dinner) could explain 84% and 86% of the changes in MBG before and after treatment, but only 36% and 26% of the changes in HbA1c, and they correlated less well with HbA1c than with the 24 h MBG. The blood glucose levels measured at fasting, 2 h after breakfast, and before dinner truly reflected the change in 24 h blood glucose levels, suggesting that they are appropriate for the self-monitoring of blood glucose in diabetes patients receiving oral anti-diabetes therapy. Copyright © 2014 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
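Choosing a parsimonious subset of monitoring time points is essentially forward feature selection for a linear model; below is a scikit-learn sketch on synthetic data (the design matrix and coefficients are hypothetical stand-ins, not the study data).

```python
# Forward selection of 3 of 10 glucose time points that best predict the MBG.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(5.5, 1.5, size=(159, 10))     # 10 time-point readings, 159 patients
mbg = X @ rng.uniform(0.05, 0.15, 10)        # stand-in for the 24 h mean glucose

sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=3,
                                direction="forward", cv=5)
sfs.fit(X, mbg)
print(sfs.get_support())                     # mask of the 3 selected time points
```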
Burnstein, Bryan D.; Steele, Russell J.; Shrier, Ian
2011-01-01
Context: Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. Objective: To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Design: Cohort study. Setting: Eighteen different Cirque du Soleil shows. Patients or Other Participants: Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Intervention(s): Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. Main Outcome Measure(s): We calculated the intraclass correlation coefficient (ICC) and limits of agreement between baseline and each time point, and the ICC over all 4 time points combined. Results: Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and the other time points. When we excluded the baseline data and calculated the ICC for the 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limits-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Conclusions: Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators. PMID:22488138
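ICC computations of this kind can be sketched with the pingouin package; the long-format data frame below is hypothetical and simply shows the mechanics of performer-by-session reliability.

```python
# ICC sketch: one row per performer per 6-month test session (hypothetical data).
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "performer": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "session":   [0, 6, 12, 18] * 3,              # months since baseline
    "grip_kg":   [42, 44, 43, 45, 55, 54, 56, 53, 38, 37, 39, 40],
})
icc = pg.intraclass_corr(data=df, targets="performer",
                         raters="session", ratings="grip_kg")
print(icc[["Type", "ICC"]])                       # ICC > 0.6 ~ acceptable here
```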
2002-09-01
The starting point is the actual moment, with a very low assumption of 1,100 reached practitioners who actually perform internet research via the portal. [Flattened-table residue; recoverable figures: total 13.4 (100%); research time saved (hours a week); 40% / 29% of research time; 22% of research through the internet; internet research time 150.6 minutes; reduction 20 minutes; reduction 13.3% of internet research time; reduction 0.8% of total time.]
Li, Dongqing; Liang, Ji; Di, Yanming; Gong, Huili; Guo, Xiaoyu
2016-01-01
Cluster analysis (CA), discriminant analysis (DA), and principal component analysis/factor analysis (PCA/FA) were used to analyze the interannual, seasonal, and spatial variations of water quality from 1991 to 2011 at the controlling points (Xinzhuang Bridge, Daguan Bridge) of the main rivers (Chaohe River, Baihe River) flowing into the Miyun Reservoir. The results demonstrated that total nitrogen (TN) and total phosphorus (TP) exceeded China National Standard II for surface water by 5.08-fold and 1.0-fold, respectively. CA showed that the water quality could be divided into three interannual (IA) groups: IAI (1991-1995, 1998), IAII (1996-1997, 1999-2000, 2002-2006), and IAIII (2001, 2007-2011), and into seasonal clusters: dry season 1 (December), dry season 2 (January-February), and the non-dry season (March-November). At the interannual scale, the higher concentration of SO₄²⁻ from industrial activities, atmospheric deposition, and fertilizer use in IAIII accelerated the dissolution of carbonate, which increased Ca²⁺, Mg²⁺, total hardness (T-Hard), and total alkalinity (T-Alk). The decreasing trend of CODMn was attributed to the establishment of sewage treatment plants and to water and soil conservation upstream of Miyun. The changing trend of NO₃⁻-N indicated an increasing non-point pollution load in IAII and effective non-point pollution control in IAIII. Only one parameter (T) at the seasonal scale confirmed improved non-point pollution control. The major pollution at the two controlling points comprised NO₃⁻-N, T-Hard, TN, and other ion pollution (SO₄²⁻, F⁻, Ca²⁺, Mg²⁺, T-Hard, T-Alk). The higher concentrations of NO₃⁻-N at Xinzhuang and of CODMn at Daguan call for different control measures: controlling agricultural intensification along the Chaohe River to decrease N pollution, and reducing water and soil loss and cage culture along the Baihe River to weaken organic pollution. Controlling SO₄²⁻ from industrial activity, atmospheric deposition, and fertilizer use in the watershed can effectively control Ca²⁺, Mg²⁺, T-Hard, and T-Alk.
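The CA and PCA/FA steps can be sketched generically with scikit-learn; the 21 × 12 matrix below is a random stand-in for years-by-parameters water-quality data, not the Miyun measurements.

```python
# Generic water-quality pattern sketch: standardize, reduce with PCA, cluster years.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(21, 12))                  # 21 years x 12 parameters (synthetic)

Z = StandardScaler().fit_transform(X)          # standardize before PCA/CA
scores = PCA(n_components=3).fit_transform(Z)  # dominant water-quality gradients
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(labels)                                  # candidate interannual groupings
```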
String Mining in Bioinformatics
NASA Astrophysics Data System (ADS)
Abouelhoda, Mohamed; Ghanem, Moustafa
Sequence analysis is a major area in bioinformatics encompassing the methods and techniques for studying the biological sequences, DNA, RNA, and proteins, on the linear structure level. The focus of this area is generally on the identification of intra- and inter-molecular similarities. Identifying intra-molecular similarities boils down to detecting repeated segments within a given sequence, while identifying inter-molecular similarities amounts to spotting common segments among two or multiple sequences. From a data mining point of view, sequence analysis is nothing but string or pattern mining specific to biological strings. For a long time, however, this point of view has not been explicitly embraced in either the data mining or the sequence analysis textbooks, which may be attributed to the co-evolution of the two apparently independent fields. In other words, although the word "data-mining" is almost missing in the sequence analysis literature, its basic concepts have been implicitly applied. Interestingly, recent research in biological sequence analysis introduced efficient solutions to many problems in data mining, such as querying and analyzing time series [49,53], extracting information from web pages [20], fighting spam mails [50], detecting plagiarism [22], and spotting duplications in software systems [14].
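As a concrete taste of string mining, detecting a repeated segment within a sequence can be done with a suffix array plus an adjacent-suffix common-prefix scan; the naive sketch below is illustrative only, not an efficient bioinformatics implementation.

```python
# Longest repeated substring via a naive suffix array (O(n^2 log n) build).
def longest_repeated_substring(s):
    suffixes = sorted(range(len(s)), key=lambda i: s[i:])   # suffix array
    best = ""
    for a, b in zip(suffixes, suffixes[1:]):                # adjacent suffixes
        x, y = s[a:], s[b:]
        k = 0
        while k < min(len(x), len(y)) and x[k] == y[k]:     # common prefix length
            k += 1
        if k > len(best):
            best = x[:k]
    return best

print(longest_repeated_substring("ACGTACGTTTACGT"))         # -> "ACGT"
```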
Pi, Zhi-bing; Tan, Guan-xian; Wang, Jun-lu
2007-07-17
To observe the effect of hydroxyethyl starch (HES) 130/0.4 on S100B protein levels and cerebral oxygen metabolism in open cardiac surgery under cardiopulmonary bypass (CPB), and to explore whether 6% HES 130/0.4 used as priming solution protects against cerebral injury during CPB and by what probable mechanism. Forty patients with atrial septal defect or ventricular septal defect scheduled for elective surgical repair under CPB with moderate hypothermia were randomly divided into two equal groups: an HES 130/0.4 group (HES group), in which HES 130/0.4 (Voluven) was used as priming solution, and a gelatin group (GEL group), in which Gelofusine (succinylated gelatin) was used as priming solution. ECG, heart rate (HR), blood pressure (BP), mean arterial pressure (MAP), central venous pressure (CVP), arterial partial pressure of oxygen (PaO₂), end-tidal partial pressure of carbon dioxide (PetCO₂), and body temperature (naso-pharyngeal and rectal) were continuously monitored during the operation. Blood samples were obtained from the central vein for determination of blood concentrations of S100B protein at the following time points: before CPB (T0), 20 minutes after the beginning of CPB (T1), immediately after the termination of CPB (T2), 60 minutes after the termination of CPB (T3), and 24 hours after the termination of CPB (T4). The serum S100B protein levels were measured by ELISA. At the same time points, blood samples were obtained from the jugular vein and radial artery for blood gas analysis and measurement of blood glucose, from which the cerebral oxygen metabolic rate/cerebral metabolic rate of glucose ratio (CMRO₂/CMRglu) was calculated. Compared with the time point immediately before CPB (T0), the S100B protein levels of the 2 groups began to increase at T1, peaked at T2, decreased gradually from T3, and were still significantly higher than those before CPB at T4 (all P < 0.01); the S100B protein levels at the different time points in the HES group were all significantly lower than those in the GEL group (all P < 0.01). The SjvO₂ and CMRO₂/CMRglu levels of both groups increased at T1, decreased at T2 and T3, and then returned to normal at T4. In the GEL group there were no significant differences in these levels between any 2 time points; however, in the HES group the SjvO₂ and CMRO₂/CMRglu levels at T1 were significantly higher than those at the other time points (P < 0.05 or P < 0.01). S100B protein increases significantly in open cardiac surgery under CPB. HES 130/0.4 lowers the S100B protein levels from the beginning of CPB to one hour after its termination, probably by improving cerebral oxygen metabolism. 6% HES 130/0.4 as priming solution may play a protective role in reducing cerebral injury during CPB and open cardiac surgery.
Schmuck, Sebastian; Mamach, Martin; Wilke, Florian; von Klot, Christoph A; Henkenberens, Christoph; Thackeray, James T; Sohns, Jan M; Geworski, Lilli; Ross, Tobias L; Wester, Hans-Juergen; Christiansen, Hans; Bengel, Frank M; Derlin, Thorsten
2017-06-01
The aims of this study were to gain mechanistic insights into prostate cancer biology using dynamic imaging and to evaluate the usefulness of multiple time-point ⁶⁸Ga-prostate-specific membrane antigen (PSMA) I&T PET/CT for the assessment of primary prostate cancer before prostatectomy. Twenty patients with prostate cancer underwent ⁶⁸Ga-PSMA I&T PET/CT before prostatectomy. The PET protocol consisted of early dynamic pelvic imaging, followed by static scans at 60 and 180 minutes postinjection (p.i.). SUVs, time-activity curves, quantitative analysis based on a 2-tissue compartment model, Patlak analysis, histopathology, and Gleason grading were compared between prostate cancer and benign prostate gland. Primary tumors were identified on both early dynamic and delayed imaging in 95% of patients. Tracer uptake was significantly higher in prostate cancer than in benign prostate tissue at every time point (P ≤ 0.0003) and increased over time. Consequently, the tumor-to-nontumor ratio within the prostate gland improved over time (2.8 at 10 minutes vs 17.1 at 180 minutes p.i.). Tracer uptake at both 60 and 180 minutes p.i. was significantly higher in patients with higher Gleason scores (P < 0.01). The influx rate (Ki) was higher in prostate cancer than in reference prostate gland (0.055 [r = 0.998] vs 0.017 [r = 0.996]). Primary prostate cancer is readily identified on early dynamic and static delayed ⁶⁸Ga-PSMA ligand PET images. The tumor-to-nontumor ratio in the prostate gland improves over time, supporting a role for delayed imaging in the optimal visualization of prostate cancer.
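The Patlak analysis used to estimate the influx rate Ki reduces to a linear fit on transformed time-activity data; a minimal sketch, assuming given tissue and plasma curves and late-time linearity, is below.

```python
# Patlak plot sketch: Ki is the slope of C_t/C_p versus integral(C_p)/C_p.
import numpy as np

def patlak_ki(t, c_tissue, c_plasma):
    integral = np.concatenate([[0.0], np.cumsum(
        0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))])   # trapezoid integral
    x = integral / c_plasma
    y = c_tissue / c_plasma
    late = slice(len(t) // 2, None)            # assume linearity after mid-scan
    ki, intercept = np.polyfit(x[late], y[late], 1)
    return ki

t = np.linspace(0.5, 60, 40)                   # minutes (hypothetical sampling)
c_plasma = 10 * np.exp(-t / 20) + 1.0          # toy input function
c_tissue = 0.05 * np.concatenate([[0.0], np.cumsum(
    0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))]) + 0.3 * c_plasma
print(patlak_ki(t, c_tissue, c_plasma))        # recovers Ki ~ 0.05
```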
Grimm, Alexandra; Meyer, Heiko; Nickel, Marcel D; Nittka, Mathias; Raithel, Esther; Chaudry, Oliver; Friedberger, Andreas; Uder, Michael; Kemmler, Wolfgang; Quick, Harald H; Engelke, Klaus
2018-06-01
The purpose of this study is to evaluate and compare 2-point (2pt), 3-point (3pt), and 6-point (6pt) Dixon magnetic resonance imaging (MRI) sequences with flexible echo times (TE) for measuring the proton density fat fraction (PDFF) within muscles. Two subject groups were recruited (G1: 23 young and healthy men, 31 ± 6 years; G2: 50 elderly, sarcopenic men, 77 ± 5 years). A 3-T MRI system was used to perform Dixon imaging on the left thigh. PDFF was measured with six Dixon prototype sequences: 2pt, 3pt, and 6pt sequences, each acquired once with optimal TEs (in- and opposed-phase echo times), lower resolution, and higher bandwidth (optTE sequences) and once with higher image resolution (highRes sequences) and the shortest possible TEs. Intra-fascia PDFF content was determined. To evaluate the comparability of the sequences, Bland-Altman analysis was performed. The highRes 6pt Dixon sequence served as the reference, as a high correlation of this sequence with magnetic resonance spectroscopy had been shown before. The PDFF difference between the highRes 6pt Dixon sequence and the optTE 6pt, both 3pt, and the optTE 2pt sequences was low (between 2.2% and 4.4%), but not for the highRes 2pt Dixon sequence (33%). For the optTE sequences, the difference decreased with the number of echoes used. In conclusion, for Dixon sequences with more than two echoes, the fat fraction measurement was reliable with arbitrary echo times, while for 2pt Dixon sequences it was reliable only with dedicated in- and opposed-phase echo timing. Copyright © 2018 Elsevier B.V. All rights reserved.
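For orientation, the classic 2-point Dixon reconstruction with dedicated in- and opposed-phase echoes reduces to sums and differences of the two echo images; the sketch below ignores the field-map and T2* corrections that the vendor prototype sequences apply.

```python
# Classic 2-point Dixon: W = (IP + OP)/2, F = (IP - OP)/2, PDFF = F / (W + F).
import numpy as np

def two_point_dixon_pdff(in_phase, opposed_phase):
    water = 0.5 * (in_phase + opposed_phase)
    fat = 0.5 * (in_phase - opposed_phase)
    return np.abs(fat) / (np.abs(water) + np.abs(fat) + 1e-9)

ip = np.array([[100.0, 80.0], [60.0, 90.0]])   # toy in-phase magnitudes
op = np.array([[ 60.0, 40.0], [20.0, 88.0]])   # toy opposed-phase magnitudes
print(two_point_dixon_pdff(ip, op))            # per-pixel fat fraction in [0, 1]
```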
Zhou, Changxi; Wu, Lei; Liu, Qinghui; An, Huaijie; Jiang, Bin; Zuo, Fan; Zhang, Li; He, Yao
2017-01-01
This study aimed to evaluate the effects of psychological intervention and psychological-plus-drug intervention on smoking cessation among male smokers with a single chronic disease. A total of 509 male smokers were divided into a psychological group (n = 290) and a psychological-plus-drug group (n = 219) according to their own preference. The physicians provided free individual counseling and follow-up interviews with brief counseling for all subjects. In addition to the psychological intervention, patients in the psychological-plus-drug group also received bupropion hydrochloride or varenicline tartrate to quit smoking. Outcomes were self-reported 7-day point-prevalence abstinence rates and continuous abstinence rates at the 1-, 3-, and 6-month follow-ups. Data analyses were performed using intention-to-treat and per-protocol analysis. At all three follow-up time points, the 7-day point-prevalence abstinence rate in the psychological-plus-drug group was higher than that in the psychological intervention group. However, the 3-month continuous abstinence rate at the 6-month follow-up in the psychological group (21.4%) did not differ significantly from that in the psychological-plus-drug group (26.9%) (P > .05 for all). Fagerström test score, stage of quitting smoking, perceived confidence or difficulty in quitting, and chronic disease type were independently correlated with 3-month continuous abstinence at the 6-month follow-up (P < .05 for all). The results were similar between the intention-to-treat and per-protocol analyses. The psychological intervention and the psychological-plus-drug intervention exerted good effects on smoking cessation over a short time (1 month); however, these advantages did not persist over long-term (6-month) follow-up. PMID:29049178
Rojo-Manaute, Jose Manuel; Capa-Grasa, Alberto; Del Cerro-Gutiérrez, Miguel; Martínez, Manuel Villanueva; Chana-Rodríguez, Francisco; Martín, Javier Vaquero
2012-03-01
Trigger digit surgery can be performed by an open approach using classic open surgery, by a wide-awake approach, or by sonographically guided first annular pulley release in day surgery and office-based ambulatory settings. Our goal was to perform a turnover and economic analysis of 3 surgical models. Two studies were conducted. The first was a turnover analysis of 57 patients allocated 4:4:1 to the surgical models: sonographically guided office-based, classic open-day surgery, and wide-awake office-based. Regression analysis of the turnover time was monitored for assessing stability (R² < .26). Second, on the basis of turnover times and hospital tariff revenues, we calculated the total costs, income-to-cost ratio, opportunity cost, true cost, true net income (primary variable), break-even points for sonographically guided fixed costs, and a 1-way analysis for identifying thresholds among alternatives. Thirteen sonographically guided office-based patients were withdrawn because of a learning-curve influence. The wide-awake (n = 6) and classic (n = 26) models were compared with the last 25% of the sonographically guided group (n = 12), which showed significantly shorter mean turnover times, income-to-cost ratios 2.52 and 10.9 times larger, and true costs 75.48 and 20.92 times lower, respectively. A true net income break-even point was reached after 19.78 sonographically guided office-based procedures. Sensitivity analysis showed a threshold between wide-awake and last-25% sonographically guided true costs if the last-25% sonographically guided turnover times reached 65.23 and 27.81 minutes, respectively. This trial comparing surgical models was underpowered and is therefore inconclusive on turnover times; nevertheless, the sonographically guided office-based approach showed shorter turnover times and better economic results, with a quick recoup of the costs of sonographically assisted surgery.
High-fidelity modeling and impact footprint prediction for vehicle breakup analysis
NASA Astrophysics Data System (ADS)
Ling, Lisa
For decades, vehicle breakup analysis has been performed for space missions that used nuclear heater or power units in order to assess aerospace nuclear safety for potential launch failures leading to inadvertent atmospheric reentry. Such pre-launch risk analysis is imperative for assessing possible environmental impacts, obtaining launch approval, and planning launch contingencies. To perform a vehicle breakup analysis accurately, the analysis tool should include a trajectory propagation algorithm coupled with thermal and structural analyses and influences. Since such a software tool was not available commercially or in the public domain, a basic analysis tool was developed by Dr. Angus McRonald prior to this study. This legacy software consisted of low-fidelity modeling and had the capability to predict vehicle breakup, but did not predict the surface impact point of the nuclear component. Thus the main thrust of this study was to develop and verify additional dynamics modeling and capabilities for the analysis tool, with the objectives of (1) predicting the impact point and footprint, (2) increasing the fidelity of the vehicle breakup prediction, and (3) reducing the effort and time required to complete an analysis. The new functions developed for predicting the impact point and footprint included 3-degrees-of-freedom trajectory propagation, the generation of non-arbitrary entry conditions, sensitivity analysis, and the calculation of the impact footprint. The functions to increase the fidelity of the vehicle breakup prediction included a panel code to calculate the hypersonic aerodynamic coefficients for an arbitrarily shaped body and the modeling of local winds. The function to reduce the effort and time required to complete an analysis included the calculation of node failure criteria. The derivation and development of these new functions are presented in this dissertation, and examples are given to demonstrate the new capabilities and the improvements made, with comparisons between the results obtained from the upgraded analysis tool and the legacy software wherever applicable.
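A 3-degrees-of-freedom trajectory propagation of the kind added to the tool can be sketched as planar point-mass entry dynamics; all constants below (exponential atmosphere, ballistic coefficient, entry state) are illustrative assumptions, not the dissertation's models.

```python
# Planar 3-DOF ballistic entry sketch: flat-Earth gravity, exponential atmosphere.
import numpy as np

def propagate_3dof(v0, gamma0, h0, beta=200.0, dt=0.1, t_max=2000.0):
    """beta = m / (Cd * A), the ballistic coefficient, in kg/m^2."""
    g, rho0, hs = 9.81, 1.225, 7200.0          # gravity, sea-level density, scale ht
    v, gamma, h, x, t = v0, gamma0, h0, 0.0, 0.0
    while h > 0.0 and t < t_max:
        rho = rho0 * np.exp(-h / hs)           # local air density
        v += (-0.5 * rho * v**2 / beta - g * np.sin(gamma)) * dt
        gamma += (-g * np.cos(gamma) / v) * dt # gravity turn, no lift
        h += v * np.sin(gamma) * dt
        x += v * np.cos(gamma) * dt
        t += dt
    return x                                   # downrange distance at impact

print(propagate_3dof(v0=7500.0, gamma0=np.radians(-2.0), h0=120e3) / 1000, "km")
```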
Kamali, Fahimeh; Mirkhani, Hossein; Nematollahi, Ahmadreza; Heidari, Saeed; Moosavi, Elahesadat; Mohamadi, Marzieh
2017-04-01
Transcutaneous electrical nerve stimulation (TENS) is widely used in clinical practice to increase blood flow. The best location for stimulation to achieve optimal blood flow has not yet been determined. We compared the effect of TENS application at sympathetic ganglions and at acupuncture points on blood flow in the foot of healthy individuals. Seventy-five healthy individuals were randomly assigned to three groups. The first group received cutaneous electrical stimulation at the thoracolumbar sympathetic ganglions. The second group received stimulation at acupuncture points. The third group received stimulation in the mid-calf area as a control. Blood flow was recorded with a laser Doppler flow-meter at time zero as baseline and every 3 minutes thereafter during stimulation. Individuals who received sympathetic ganglion stimulation showed significantly greater blood flow than those receiving acupuncture point stimulation or those in the control group (p<0.001). Data analysis revealed that blood flow increased significantly from time zero at each measurement during stimulation in every group. Therefore, the application of low-frequency TENS at the thoracolumbar sympathetic ganglions was more effective in increasing peripheral blood circulation than stimulation at acupuncture points.
A Semiparametric Change-Point Regression Model for Longitudinal Observations.
Xing, Haipeng; Ying, Zhiliang
2012-12-01
Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate covariate effects when the experimental environment undergoes abrupt changes, or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in the regression coefficients. In this connection, we propose a semiparametric change-point regression model, in which the error process (stochastic component) is nonparametric, the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations, and magnitudes of change-points are unknown and need to be estimated. We further develop an estimation procedure that combines recent advances in semiparametric analysis based on counting-process arguments with multiple change-point inference, and discuss its large-sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.
NASA Astrophysics Data System (ADS)
Vasefi, Fartash; Kittle, David S.; Nie, Zhaojun; Falcone, Christina; Patil, Chirag G.; Chu, Ray M.; Mamelak, Adam N.; Black, Keith L.; Butte, Pramod V.
2016-04-01
We have developed and tested a system for real-time intra-operative optical identification and classification of brain tissues using time-resolved fluorescence spectroscopy (TRFS). A supervised learning algorithm based on linear discriminant analysis (LDA), using selected intrinsic fluorescence decay time points in 6 spectral bands, was employed to maximize the statistical difference between training groups. The linear discriminant analyses of in vivo human tissues obtained by TRFS measurements (N = 35) were validated by histopathologic analysis and by neuronavigation correlation to pre-operative MRI images. These results demonstrate that TRFS can differentiate between normal cortex, white matter, and glioma.
Goldfield, Eugene C; Buonomo, Carlo; Fletcher, Kara; Perez, Jennifer; Margetts, Stacey; Hansen, Anne; Smith, Vincent; Ringer, Steven; Richardson, Michael J; Wolff, Peter H
2010-04-01
Coordination between movements of individual tongue points, and between soft palate elevation and tongue movements, was examined in 12 prematurely born infants referred from hospital NICUs for videofluoroscopic swallow study (VFSS) due to poor oral feeding and suspicion of aspiration. Detailed post-evaluation kinematic analysis was conducted by digitizing images of a lateral view of digitally superimposed points on the tongue and soft palate. The primary measure of coordination was the continuous relative phase of the time series created by movements of points on the tongue and soft palate over successive frames. Three points on the tongue (anterior, medial, and posterior) were organized around a stable in-phase pattern, with a phase lag that implied an anterior-to-posterior direction of motion. Coordination between a tongue point and a point on the soft palate during lowering and elevation was close to anti-phase at initiation of the pharyngeal swallow. These findings suggest that anti-phase coordination between tongue and soft palate may reflect the process by which the tongue is timed to pump liquid by moving it into an enclosed space, compressing it, and allowing it to leave by a specific route through the pharynx.
A macrochip interconnection network enabled by silicon nanophotonic devices.
Zheng, Xuezhe; Cunningham, John E; Koka, Pranay; Schwetman, Herb; Lexau, Jon; Ho, Ron; Shubin, Ivan; Krishnamoorthy, Ashok V; Yao, Jin; Mekis, Attila; Pinguet, Thierry
2010-03-01
We present an advanced wavelength-division multiplexing point-to-point network enabled by silicon nanophotonic devices. This network offers strictly non-blocking all-to-all connectivity while maximizing bisection bandwidth, making it ideal for multi-core and multi-processor interconnections. We introduce one of the key components, the nanophotonic grating coupler, and discuss, for the first time, how this device can be useful for practical implementations of the wavelength-division multiplexing network using optical proximity communication. Finite difference time-domain simulation of the nanophotonic grating coupler indicates that it can be made compact (20 µm x 50 µm), low loss (3.8 dB), and broadband (100 nm). These couplers require subwavelength material modulation at the nanoscale to achieve the desired functionality. We show that optical proximity communication provides unmatched optical I/O bandwidth density to electrical chips, which enables the application of the wavelength-division multiplexing point-to-point network in a macrochip with unprecedented bandwidth density. The envisioned physical implementation is discussed. The benefits of such an interconnect network include a 5-6x improvement in latency when compared to a purely electronic implementation. Performance analysis shows that the wavelength-division multiplexing point-to-point network offers better overall performance than other optical network architectures.
NASA Astrophysics Data System (ADS)
Huerta, Margarita
This quantitative study explored the impact of literacy integration in a science inquiry classroom involving the use of science notebooks on the academic language development and conceptual understanding of students from diverse (i.e., English Language Learners, or ELLs) and low socio-economic status (low-SES) backgrounds. The study derived from a randomized, longitudinal, field-based NSF-funded research project (NSF Award No. DRL-0822343) targeting ELL and non-ELL students from low-SES backgrounds in a large urban school district in Southeast Texas. The study used a scoring rubric (modified and tested for validity and reliability) to analyze fifth-grade students' science notebook entries. Scores for academic language quality (or, for brevity, "language") were used to compare language growth over time across three time points (i.e., beginning, middle, and end of the school year) and to compare students across categories (ELL, former ELL, non-ELL, and gender) using descriptive statistics and mixed between-within subjects analysis of variance (ANOVA). Scores for conceptual understanding (or, for brevity, "concept") were used to compare students across categories (ELL, former ELL, non-ELL, and gender) in three domains using descriptive statistics and ANOVA. A correlational analysis was conducted to explore the relationship, if any, between language scores and concept scores for each group. Students demonstrated statistically significant growth over time in their academic language as reflected by science notebook scores. While ELL students scored lower than former ELL and non-ELL students at the first two time points, they caught up to their peers by the third time point. Similarly, females outperformed males in language scores at the first two time points, but males caught up to females by the third time point. In analyzing conceptual scores, ELLs had statistically significantly lower scores than former-ELL and non-ELL students, and females outperformed males in the first two domains. These differences, however, were not statistically significant in the last domain. Last, correlations between language and concept scores were, overall, positive, large, and significant across domains and groups. The study presents a rubric useful for quantifying diverse students' science notebook entries, and findings add to the sparse research on the impact of writing on diverse students' language development and conceptual understanding in science.
Time-Resolved Transposon Insertion Sequencing Reveals Genome-Wide Fitness Dynamics during Infection.
Yang, Guanhua; Billings, Gabriel; Hubbard, Troy P; Park, Joseph S; Yin Leung, Ka; Liu, Qin; Davis, Brigid M; Zhang, Yuanxing; Wang, Qiyao; Waldor, Matthew K
2017-10-03
Transposon insertion sequencing (TIS) is a powerful high-throughput genetic technique that is transforming functional genomics in prokaryotes, because it enables genome-wide mapping of the determinants of fitness. However, current approaches for analyzing TIS data assume that selective pressures are constant over time and thus do not yield information regarding changes in the genetic requirements for growth in dynamic environments (e.g., during infection). Here, we describe structured analysis of TIS data collected as a time series, termed pattern analysis of conditional essentiality (PACE). From a temporal series of TIS data, PACE derives a quantitative assessment of each mutant's fitness over the course of an experiment and identifies mutants with related fitness profiles. In so doing, PACE circumvents major limitations of existing methodologies, specifically the need for artificial effect size thresholds and enumeration of bacterial population expansion. We used PACE to analyze TIS samples of Edwardsiella piscicida (a fish pathogen) collected over a 2-week infection period from a natural host (the flatfish turbot). PACE uncovered more genes that affect E. piscicida's fitness in vivo than were detected using a cutoff at a terminal sampling point, and it identified subpopulations of mutants with distinct fitness profiles, one of which informed the design of new live vaccine candidates. Overall, PACE enables efficient mining of time series TIS data and enhances the power and sensitivity of TIS-based analyses. IMPORTANCE Transposon insertion sequencing (TIS) enables genome-wide mapping of the genetic determinants of fitness, typically based on observations at a single sampling point. Here, we move beyond analysis of endpoint TIS data to create a framework for analysis of time series TIS data, termed pattern analysis of conditional essentiality (PACE). We applied PACE to identify genes that contribute to colonization of a natural host by the fish pathogen Edwardsiella piscicida. PACE uncovered more genes that affect E. piscicida's fitness in vivo than were detected using a terminal sampling point, and its clustering of mutants with related fitness profiles informed design of new live vaccine candidates. PACE yields insights into patterns of fitness dynamics and circumvents major limitations of existing methodologies. Finally, the PACE method should be applicable to additional "omic" time series data, including screens based on clustered regularly interspaced short palindromic repeats with Cas9 (CRISPR/Cas9).
Understanding survival analysis: Kaplan-Meier estimate.
Goel, Manish Kumar; Khanna, Pardeep; Kishore, Jugal
2010-10-01
The Kaplan-Meier estimate is one of the best options for measuring the fraction of subjects who live for a certain amount of time after treatment. In clinical trials or community trials, the effect of an intervention is assessed by measuring the number of subjects who survive or are saved by that intervention over a period of time. The time starting from a defined point to the occurrence of a given event, for example death, is called the survival time, and the analysis of such grouped data is called survival analysis. The analysis can be complicated by subjects who are uncooperative and refuse to remain in the study, by subjects who do not experience the event before the end of the study although they would have if observation had continued, or by subjects with whom we lose touch midway through the study. We label these situations censored observations. The Kaplan-Meier estimate is the simplest way of computing survival over time in spite of these difficulties. The survival curve can be created under various assumptions. It involves computing the probability of the event occurring at a certain point in time and multiplying these successive probabilities by any earlier computed probabilities to get the final estimate. This can be calculated for two groups of subjects, and the statistical difference in their survival can be tested. The method can be used, for example, in Ayurveda research when comparing the survival of subjects on two drugs.
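The product-limit computation described above is simple to sketch in code. The following is a minimal illustration, not a replacement for a vetted survival package; the input format and the example data are hypothetical.

    import numpy as np

    def kaplan_meier(times, events):
        """Product-limit estimate: at each distinct event time, multiply the
        running survival probability by (1 - d/n), where d is the number of
        events at that time and n is the number still at risk."""
        times = np.asarray(times, dtype=float)
        events = np.asarray(events, dtype=bool)   # True = event, False = censored
        order = np.argsort(times)
        times, events = times[order], events[order]
        n_at_risk = len(times)
        survival, curve = 1.0, []
        for t in np.unique(times):
            d = np.sum((times == t) & events)     # events at time t
            if d > 0:
                survival *= 1.0 - d / n_at_risk
                curve.append((t, survival))
            n_at_risk -= np.sum(times == t)       # events and censored leave the risk set
        return curve

    # Hypothetical cohort: 1 = event observed, 0 = censored observation
    print(kaplan_meier([2, 3, 3, 5, 6, 7, 9, 9], [1, 1, 0, 1, 0, 1, 1, 0]))

Each step of the returned curve drops exactly where an event occurs, while censored subjects only shrink the risk set, which is the behavior described in the abstract.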
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo, Raymond; Celius, Trine; Forgacs, Agnes L.
2011-11-15
Genome-wide, promoter-focused ChIP-chip analysis of hepatic aryl hydrocarbon receptor (AHR) binding sites was conducted in 8-week-old female C57BL/6 mice treated with 30 µg/kg body weight 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) for 2 h and 24 h. These studies identified 1642 and 508 AHR-bound regions at 2 h and 24 h, respectively. A total of 430 AHR-bound regions were common between the two time points, corresponding to 403 unique genes. Comparison with previous AHR ChIP-chip studies in mouse hepatoma cells revealed that only 62 of the putative target genes overlapped with the 2 h AHR-bound regions in vivo. Transcription factor binding site analysis revealed an over-representation of aryl hydrocarbon response elements (AHREs) in AHR-bound regions, with 53% (2 h) and 68% (24 h) of them containing at least one AHRE. In addition to AHREs, E2f-Myc activator motifs previously implicated in AHR function, as well as a number of other motifs, including Sp1, nuclear receptor subfamily 2 factor, and early growth response factor motifs, were also identified. Expression microarray studies identified 133 unique genes differentially regulated after 4 h of treatment with TCDD, of which 39 were identified as AHR-bound genes at 2 h. Ingenuity Pathway Analysis of the 39 AHR-bound, TCDD-responsive genes identified potential perturbation of biological processes such as lipid metabolism, drug metabolism, and endocrine system development as a result of TCDD-mediated AHR activation. Our findings identify direct AHR target genes in vivo, highlight in vitro and in vivo differences in AHR signaling, and show that AHR recruitment does not necessarily result in changes in target gene expression.
A simple and fast representation space for classifying complex time series
NASA Astrophysics Data System (ADS)
Zunino, Luciano; Olivares, Felipe; Bariviera, Aurelio F.; Rosso, Osvaldo A.
2017-03-01
In the context of time series analysis, considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series generated by stationary and non-stationary processes with long-range dependence. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems is also proposed. This multiscale analysis is essential for reaching a higher discriminative power between physiological time series in health and disease.
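Both coordinates of this representation space are cheap to compute. The sketch below is a rough illustration under the common definitions (Abbe value as half the mean squared successive difference divided by the variance; a turning point as a strict local extremum); it is not the authors' code, and the series used are synthetic.

    import numpy as np

    def abbe_value(x):
        """Abbe (von Neumann) ratio: half the mean squared successive
        difference divided by the variance; close to 1 for white noise and
        near 0 for smooth, strongly correlated series."""
        x = np.asarray(x, dtype=float)
        return 0.5 * np.mean(np.diff(x) ** 2) / np.var(x)

    def turning_point_rate(x):
        """Fraction of interior points that are strict local extrema;
        the expected value is 2/3 for an i.i.d. sequence."""
        x = np.asarray(x, dtype=float)
        mid = x[1:-1]
        turning = ((mid > x[:-2]) & (mid > x[2:])) | ((mid < x[:-2]) & (mid < x[2:]))
        return float(np.mean(turning))

    rng = np.random.default_rng(0)
    noise = rng.standard_normal(5000)     # stationary white noise
    walk = np.cumsum(noise)               # non-stationary random walk
    for name, series in (("noise", noise), ("walk", walk)):
        print(name, turning_point_rate(series), abbe_value(series))

White noise lands near (2/3, 1) in this plane, while the random walk falls near (1/2, 0), which is the separation the representation space exploits.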
Extending nonlinear analysis to short ecological time series.
Hsieh, Chih-hao; Anderson, Christian; Sugihara, George
2008-01-01
Nonlinearity is important and ubiquitous in ecology. Though detectable in principle, nonlinear behavior is often difficult to characterize, analyze, and incorporate mechanistically into models of ecosystem function. One obvious reason is that quantitative nonlinear analysis tools are data-intensive (they require long time series), and time series in ecology are generally short. Here we demonstrate a useful method that circumvents data limitation and reduces sampling error by combining ecologically similar multispecies time series into one long time series. With this technique, individual ecological time series containing as few as 20 data points can be mined for such important information as (1) significantly improved forecast ability, (2) the presence and location of nonlinearity, and (3) the effective dimensionality (the number of relevant variables) of an ecological system.
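A toy version of the composite idea can be sketched as follows: standardize each short series, concatenate them into one library, and run a simplex-style nearest-neighbor forecast while excluding embedding windows that straddle a series boundary. This is only a schematic of the general approach (the published method uses exponentially weighted neighbors and further safeguards); all parameters and the chaotic test series are illustrative.

    import numpy as np

    def composite_simplex(series_list, E=3, k=4):
        """Standardize and concatenate short series, delay-embed with lag 1,
        and make leave-one-out one-step forecasts from the k nearest
        neighbors, using only windows that stay within a single series."""
        x = np.concatenate([(np.asarray(s, float) - np.mean(s)) / np.std(s)
                            for s in series_list])
        lab = np.concatenate([np.full(len(s), j)
                              for j, s in enumerate(series_list)])
        idx = [i for i in range(len(x) - E) if lab[i] == lab[i + E]]
        V = np.array([x[i:i + E] for i in idx])   # library of embedding vectors
        y = np.array([x[i + E] for i in idx])     # one-step-ahead targets
        preds = []
        for i in range(len(V)):
            d = np.linalg.norm(V - V[i], axis=1)
            d[i] = np.inf                         # exclude the point itself
            nn = np.argsort(d)[:k]
            preds.append(y[nn].mean())            # unweighted neighbor average
        return np.corrcoef(preds, y)[0, 1]        # forecast skill (rho)

    # Three short, dynamically similar chaotic series, 25 points each
    rng = np.random.default_rng(1)
    series = []
    for _ in range(3):
        z = [0.1 + 0.8 * rng.random()]
        for _ in range(24):
            z.append(3.8 * z[-1] * (1.0 - z[-1]))
        series.append(z)
    print("composite forecast skill rho:", composite_simplex(series))

No single 25-point series supports a reliable 3-dimensional embedding, but the pooled library does, which is the point of the composite technique.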
The Use of Time Series Analysis and t Tests with Serially Correlated Data.
ERIC Educational Resources Information Center
Nicolich, Mark J.; Weinstein, Carol S.
1981-01-01
Results of three methods of analysis applied to simulated autocorrelated data sets with an intervention point (varying in degree of autocorrelation, variance of the error term, and magnitude of the intervention effect) are compared and presented. The three methods are: t tests; maximum likelihood Box-Jenkins (ARIMA); and Bayesian Box-Jenkins. (Author/AEF)
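The core problem this comparison addresses is that a t test assumes independent errors. A small simulation, illustrative only (the AR(1) process and all parameters are assumptions, not the report's design), shows how positive autocorrelation inflates the nominal 5% false-positive rate of a t test applied across an intervention point:

    import numpy as np
    from scipy import stats

    def simulate_ar1(n, phi, rng):
        """AR(1) series x[t] = phi * x[t-1] + e[t] with unit-variance noise."""
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()
        return x

    rng = np.random.default_rng(42)
    n_pre = n_post = 30
    phi = 0.7                       # positive serial correlation, no true effect
    n_sims, false_pos = 2000, 0
    for _ in range(n_sims):
        x = simulate_ar1(n_pre + n_post, phi, rng)
        _, p = stats.ttest_ind(x[:n_pre], x[n_pre:])
        false_pos += p < 0.05
    # The empirical rate lands far above the nominal 5%, which is why
    # ARIMA-style intervention models are preferred for such data.
    print("empirical false-positive rate:", false_pos / n_sims)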
Fight or Flight: An Account of a Professor's First Year Obsession
ERIC Educational Resources Information Center
León, Raina J.
2014-01-01
In this article, a junior faculty member explores her obsessions with the distribution of time in the areas of teaching, scholarship, service and personal life through an intensive analysis of an academic calendar, populated with data points in those areas. Through this analysis, she examines her first year and her own development as an academic.
Analysis of high-resolution foreign exchange data of USD-JPY for 13 years
NASA Astrophysics Data System (ADS)
Mizuno, Takayuki; Kurihara, Shoko; Takayasu, Misako; Takayasu, Hideki
2003-06-01
We analyze high-resolution foreign exchange data consisting of 20 million data points of USD-JPY over 13 years to report firm statistical laws in the distributions and correlations of exchange rate fluctuations. A conditional probability density analysis clearly shows the existence of trend-following movements at a time scale of 8 ticks, about 1 min.
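The kind of conditional analysis mentioned can be sketched directly: estimate the mean of the next price change conditional on the previous change, where a positive slope through the origin signals trend following. The snippet below uses synthetic data (the original tick data is not reproduced here) and illustrative bin settings.

    import numpy as np

    def conditional_mean_next_move(rate, n_bins=21, min_count=10):
        """Estimate E[dx_{t+1} | dx_t] by binning successive rate changes;
        a positive slope through the origin indicates trend-following."""
        dx = np.diff(np.asarray(rate, dtype=float))
        prev, nxt = dx[:-1], dx[1:]
        edges = np.linspace(prev.min(), prev.max(), n_bins + 1)
        centers, cond_mean = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (prev >= lo) & (prev < hi)
            if mask.sum() >= min_count:          # skip sparsely populated bins
                centers.append(0.5 * (lo + hi))
                cond_mean.append(nxt[mask].mean())
        return np.array(centers), np.array(cond_mean)

    # Synthetic tick series with a weak trend-following component (illustrative)
    rng = np.random.default_rng(7)
    eps = rng.standard_normal(100000)
    dx = np.zeros_like(eps)
    for i in range(1, len(eps)):
        dx[i] = 0.1 * dx[i - 1] + eps[i]         # mild positive autocorrelation
    rate = 120.0 + 0.01 * np.cumsum(dx)
    centers, cm = conditional_mean_next_move(rate)
    print("slope of E[dx_next | dx_prev]:", np.polyfit(centers, cm, 1)[0])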
EOS-AM precision pointing verification
NASA Technical Reports Server (NTRS)
Throckmorton, A.; Braknis, E.; Bolek, J.
1993-01-01
The Earth Observing System (EOS) AM mission requires tight pointing knowledge to meet scientific objectives, in a spacecraft with low-frequency flexible appendage modes. As the spacecraft controller reacts to various disturbance sources and as the inherent appendage modes are excited by this control action, verification of precision pointing knowledge becomes particularly challenging for the EOS-AM mission. As presently conceived, this verification includes a complementary set of multi-disciplinary analyses, hardware tests, and real-time computer-in-the-loop simulations, followed by collection and analysis of hardware test and flight data, supported by a comprehensive database repository for validated program values.
Zero-moment point determination of worst-case manoeuvres leading to vehicle wheel lift
NASA Astrophysics Data System (ADS)
Lapapong, S.; Brown, A. A.; Swanson, K. S.; Brennan, S. N.
2012-01-01
This paper proposes a method to evaluate vehicle rollover propensity based on a frequency-domain representation of the zero-moment point (ZMP). Unlike other rollover metrics such as the static stability factor, which is based on the steady-state behaviour, and the load transfer ratio, which requires the calculation of tyre forces, the ZMP is based on a simplified kinematic model of the vehicle and the analysis of the contact point of the vehicle relative to the edge of the support polygon. Previous work has validated the use of the ZMP experimentally in its ability to predict wheel lift in the time domain. This work explores the use of the ZMP in the frequency domain to allow a chassis designer to understand how operating conditions and vehicle parameters affect rollover propensity. The ZMP analysis is then extended to calculate worst-case sinusoidal manoeuvres that lead to untripped wheel lift, and the analysis is tested across several vehicle configurations and compared with that of the standard Toyota J manoeuvre.
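For a single rigid body in the vertical-lateral plane, the ZMP criterion reduces to a compact expression: wheel lift is imminent when the ZMP crosses the outer tyre contact line. The sketch below is that simplified planar version, not the paper's full kinematic model; the sign conventions and the vehicle parameters are assumptions for illustration.

    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def lateral_zmp(y_cg, z_cg, ay, az=0.0):
        """Planar zero-moment point: the lateral ground-plane location where
        the net moment of gravity plus inertial forces vanishes."""
        return y_cg - z_cg * ay / (az + G)

    def wheel_lift(ay, track_width, z_cg, y_cg=0.0):
        """Untripped wheel lift is predicted when the ZMP leaves the support
        polygon, i.e. crosses the tyre contact line at +/- track_width/2."""
        return abs(lateral_zmp(y_cg, z_cg, ay)) > track_width / 2.0

    # Hypothetical SUV-like parameters: CG height 0.7 m, track width 1.6 m
    for ay in (4.0, 8.0, 12.0):
        print(f"a_y = {ay:4.1f} m/s^2 -> wheel lift predicted: "
              f"{wheel_lift(ay, track_width=1.6, z_cg=0.7)}")

Because the criterion only needs CG position and acceleration, it avoids the tyre-force estimation required by load-transfer-ratio metrics, which is the advantage the abstract highlights.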
Sampled control stability of the ESA instrument pointing system
NASA Astrophysics Data System (ADS)
Thieme, G.; Rogers, P.; Sciacovelli, D.
Stability analysis and simulation results are presented for the ESA Instrument Pointing System (IPS) that is to be used in Spacelab's second launch. Of the two IPS plant dynamic models used in the ESA and NASA activities, one is based on six interconnected rigid bodies that represent the IPS and its payload, while the other follows the NASA practice of defining an IPS-Spacelab 2 plant configuration through a structural finite element model, which is then used to generate modal data for various pointing directions. In both cases, the IPS dynamic plant model is truncated, then discretized at the sampling frequency and interfaced to a PID-based control law. A stability analysis has been carried out in the discrete domain for various instrument pointing directions, taking into account suitable parameter variation ranges. A number of time simulations are presented.
NASA Astrophysics Data System (ADS)
Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro
2017-07-01
Numerical analysis of the rotation of an ultrasonically levitated droplet in a centrifugal coordinate system is discussed. A droplet levitated in an acoustic chamber is simulated using the distributed point source method and the moving particle semi-implicit method. The centrifugal coordinate system is adopted to avoid the Laplacian differential error, which causes numerical divergence or inaccuracy in global-coordinate calculations. Consequently, the duration of calculation stability has increased to 30 times that of the previous paper. Moreover, the droplet radius versus rotational acceleration characteristics show a trend similar to the theoretical and experimental values in the literature.
High School Grade Inflation from 2004 to 2011. ACT Research Report Series, 2013 (3)
ERIC Educational Resources Information Center
Zhang, Qian; Sanchez, Edgar I.
2013-01-01
This study explores inflation in high school grade point average (HSGPA), defined as trend over time in the conditional average of HSGPA, given ACT® Composite score. The time period considered is 2004 to 2011. Using hierarchical linear modeling, the study updates a previous analysis of Woodruff and Ziomek (2004). The study also investigates…
Values in Prime Time Alcoholic Beverage Commercials.
ERIC Educational Resources Information Center
Frazer, Charles F.
Content analysis was used to study the values evident in televised beer and wine commercials. Seventy-seven prime time commercials, 7.6% of a week's total, were analyzed along value dimensions adapted from Gallup's measure of popular social values. The intensity of each value was coded on a five-point scale. None of the commercials in the beer and…
Who Stays and for How Long: Examining Attrition in Canadian Graduate Programs
ERIC Educational Resources Information Center
DeClou, Lindsay
2016-01-01
Attrition from Canadian graduate programs is a point of concern on a societal, institutional, and individual level. To improve retention in graduate school, a better understanding of what leads to withdrawal needs to be reached. This paper uses logistic regression and discrete-time survival analysis with time-varying covariates to analyze data…
A model for incomplete longitudinal multivariate ordinal data.
Liu, Li C
2008-12-30
In studies where multiple outcome items are repeatedly measured over time, missing data often occur. A longitudinal item response theory model is proposed for the analysis of multivariate ordinal outcomes that are repeatedly measured. Under the MAR assumption, this model accommodates missing data at any level (missing item at any time point and/or missing time point). It allows for multiple random subject effects and the estimation of item discrimination parameters for the multiple outcome items. The covariates in the model can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is described utilizing multidimensional Gauss-Hermite quadrature for integration over the random effects. An iterative Fisher-scoring solution, which provides standard errors for all model parameters, is used. A data set from a longitudinal prevention study is used to motivate the application of the proposed model. In this study, multiple ordinal items of health behavior are repeatedly measured over time. Because of a planned missing design, subjects answered only two-thirds of all items at any given time point.
Einsiedel, T.; Freund, W.; Sander, S.; Trnavac, S.; Gebhard, F.
2008-01-01
The aim of this study was to investigate whether the final displacement of conservatively treated distal radius fractures can be predicted after primary reduction. We analysed the radiographic documents of 311 patients with a conservatively treated distal radius fracture at the time of injury, after reduction and after bony consolidation. We measured the dorsal angulation (DA), the radial angle (RA) and the radial shortening (RS) at each time point. The parameters were analysed separately for metaphyseally “stable” (A2, C1) and “unstable” (A3, C2, C3) fractures, according to the AO classification system. Spearman’s rank correlations and regression functions were determined for the analysis. The highest correlations were found for the DA between the time points ‘reduction’ and ‘complete healing’ (r = 0.75) and for the RA between the time points ‘reduction’ and ‘complete healing’ (r = 0.80). The DA and the RA after complete healing can be predicted from the regression functions. PMID:18504577
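Analyses of this kind are straightforward to reproduce. The following sketch mirrors the described workflow (Spearman rank correlation between time points, then a regression function for prediction) using scipy; the numbers are synthetic placeholders, not the study's measurements.

    import numpy as np
    from scipy import stats

    # Synthetic dorsal angulation (degrees) at two time points; the values
    # and sample size are placeholders, not the study's data
    rng = np.random.default_rng(3)
    da_reduction = rng.normal(5.0, 6.0, 60)                      # after reduction
    da_healed = 2.0 + 0.9 * da_reduction + rng.normal(0, 3, 60)  # after healing

    rho, p = stats.spearmanr(da_reduction, da_healed)
    print(f"Spearman r = {rho:.2f} (p = {p:.1e})")

    # Regression function to predict the healed angle from the reduced angle
    res = stats.linregress(da_reduction, da_healed)
    print(f"DA_healed = {res.intercept:.2f} + {res.slope:.2f} * DA_reduction")
    print("predicted DA for a 10-degree post-reduction angle:",
          round(res.intercept + res.slope * 10.0, 1))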
Dynamic Analysis of a Reaction-Diffusion Rumor Propagation Model
NASA Astrophysics Data System (ADS)
Zhao, Hongyong; Zhu, Linhe
2016-06-01
The rapid development of the Internet, especially the emergence of social networks, has led rumor propagation into a new media era. Rumor propagation in social networks has brought new challenges to network security and social stability. This paper, based on partial differential equations (PDEs), proposes a new SIS rumor propagation model by considering the effect of communication between different rumor-infected users on rumor propagation. The stabilities of a non-rumor equilibrium point and a rumor-spreading equilibrium point are discussed by the linearization technique and the method of upper and lower solutions, and the existence of a traveling wave solution is established by the cross-iteration scheme accompanied by the technique of upper and lower solutions and Schauder's fixed point theorem. Furthermore, we add a time delay to rumor propagation and deduce the conditions for Hopf bifurcation and stability switches at the rumor-spreading equilibrium point by taking the time delay as the bifurcation parameter. Finally, numerical simulations are performed to illustrate the theoretical results.
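The qualitative behavior of such a model can be reproduced with a few lines of explicit finite differences. The sketch below integrates a generic 1-D reaction-diffusion SIS equation (not the paper's exact system; all coefficients are illustrative) and checks the spreading state against the endemic equilibrium of the reaction part.

    import numpy as np

    # Generic 1-D reaction-diffusion SIS: dI/dt = d*I_xx + beta*(N - I)*I - delta*I
    d, beta, delta, N = 0.5, 0.8, 0.3, 1.0     # illustrative coefficients
    L, nx, dt, steps = 10.0, 201, 0.001, 20000
    dx = L / (nx - 1)
    I = np.zeros(nx)
    I[nx // 2] = 0.1                           # localized initial rumor seed

    for _ in range(steps):
        lap = np.zeros_like(I)
        lap[1:-1] = (I[2:] - 2.0 * I[1:-1] + I[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]      # crude zero-flux boundaries
        I += dt * (d * lap + beta * (N - I) * I - delta * I)

    # The reaction part has a rumor-spreading equilibrium at I* = N - delta/beta
    print("predicted I* =", N - delta / beta, " simulated max I =", I.max())

The seed spreads as a traveling front that saturates at the rumor-spreading equilibrium, the same two objects (equilibrium point and traveling wave) whose existence the paper establishes analytically.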
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
Cosmic infinity: a dynamical system approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouhmadi-López, Mariam; Marto, João; Morais, João
2017-03-01
Dynamical system techniques are extremely useful for studying cosmology. It turns out that in most cases we deal with finite isolated fixed points corresponding to a given cosmological epoch. However, it is equally important to analyse the asymptotic behaviour of the universe. In this paper, we show how this can be carried out for 3-form models. In fact, we show that there are fixed points at infinity, mainly by introducing appropriate compactifications and defining a new time variable that washes away any potential divergence of the system. The richness of 3-form models also allows us to identify normally hyperbolic non-isolated fixed points. We apply this analysis to three physically interesting situations: (i) a pre-inflationary era; (ii) an inflationary era; (iii) the late-time dark matter/dark energy epoch.
Efficient use of single molecule time traces to resolve kinetic rates, models and uncertainties
NASA Astrophysics Data System (ADS)
Schmid, Sonja; Hugel, Thorsten
2018-03-01
Single molecule time traces reveal the time evolution of unsynchronized kinetic systems. Especially single molecule Förster resonance energy transfer (smFRET) provides access to enzymatically important time scales, combined with molecular distance resolution and minimal interference with the sample. Yet the kinetic analysis of smFRET time traces is complicated by experimental shortcomings—such as photo-bleaching and noise. Here we recapitulate the fundamental limits of single molecule fluorescence that render the classic, dwell-time based kinetic analysis unsuitable. In contrast, our Single Molecule Analysis of Complex Kinetic Sequences (SMACKS) considers every data point and combines the information of many short traces in one global kinetic rate model. We demonstrate the potential of SMACKS by resolving the small kinetic effects caused by different ionic strengths in the chaperone protein Hsp90. These results show an unexpected interrelation between conformational dynamics and ATPase activity in Hsp90.
Nigg, Claudio R; Motl, Robert W; Horwath, Caroline; Dishman, Rod K
2012-01-01
Objectives: Physical activity (PA) research applying the Transtheoretical Model (TTM) to examine group differences and/or change over time requires preliminary evidence of factorial validity and invariance. The current study examined the factorial validity and longitudinal invariance of TTM constructs recently revised for PA. Method: Participants from an ethnically diverse sample in Hawaii (N=700) completed questionnaires capturing each TTM construct. Results: Factorial validity was confirmed for each construct using confirmatory factor analysis with full-information maximum likelihood. Longitudinal invariance was evidenced across a shorter (3-month) and longer (6-month) time period via nested model comparisons. Conclusions: The questionnaires for each validated TTM construct are provided, and can now be generalized across similar subgroups and time points. Further validation of the provided measures is suggested in additional populations and across extended time points. PMID:22778669
ERIC Educational Resources Information Center
Hannan, Michael T.
This technical document, part of a series of chapters described in SO 011 759, describes a basic model of panel analysis used in a study of the causes of institutional and structural change in nations. Panel analysis is defined as a record of state occupancy of a sample of units at two or more points in time; for example, voters disclose voting…
Spontaneous Fluctuations in Sensory Processing Predict Within-Subject Reaction Time Variability
Ribeiro, Maria J.; Paiva, Joana S.; Castelo-Branco, Miguel
2016-01-01
When engaged in a repetitive task our performance fluctuates from trial-to-trial. In particular, inter-trial reaction time variability has been the subject of considerable research. It has been claimed to be a strong biomarker of attention deficits, increases with frontal dysfunction, and predicts age-related cognitive decline. Thus, rather than being just a consequence of noise in the system, it appears to be under the control of a mechanism that breaks down under certain pathological conditions. Although the underlying mechanism is still an open question, consensual hypotheses are emerging regarding the neural correlates of reaction time inter-trial intra-individual variability. Sensory processing, in particular, has been shown to covary with reaction time, yet the spatio-temporal profile of the moment-to-moment variability in sensory processing is still poorly characterized. The goal of this study was to characterize the intra-individual variability in the time course of single-trial visual evoked potentials and its relationship with inter-trial reaction time variability. For this, we chose to take advantage of the high temporal resolution of the electroencephalogram (EEG) acquired while participants were engaged in a 2-choice reaction time task. We studied the link between single trial event-related potentials (ERPs) and reaction time using two different analyses: (1) time point by time point correlation analyses thereby identifying time windows of interest; and (2) correlation analyses between single trial measures of peak latency and amplitude and reaction time. To improve extraction of single trial ERP measures related with activation of the visual cortex, we used an independent component analysis (ICA) procedure. Our ERP analysis revealed a relationship between the N1 visual evoked potential and reaction time. The earliest time point presenting a significant correlation of its respective amplitude with reaction time occurred 175 ms after stimulus onset, just after the onset of the N1 peak. Interestingly, single trial N1 latency correlated significantly with reaction time, while N1 amplitude did not. In conclusion, our findings suggest that inter-trial variability in the timing of extrastriate visual processing contributes to reaction time variability. PMID:27242470
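The first of the two analyses described, the time point by time point correlation, is simple to sketch. The example below builds synthetic single-trial ERPs whose latency jitter also shifts reaction time (an assumption for illustration, not the study's data) and then correlates amplitude with RT at every sample:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n_trials, n_times = 200, 350
    t = np.arange(n_times) * 2.0          # ms post-stimulus (500 Hz sampling)

    # Synthetic single-trial ERPs: an N1-like negative deflection whose
    # latency jitter also shifts reaction time (illustrative assumption)
    latency = rng.normal(150.0, 15.0, n_trials)                      # ms
    rt = 250.0 + 1.5 * (latency - 150.0) + rng.normal(0, 20, n_trials)
    erp = np.array([-np.exp(-(t - l) ** 2 / (2 * 25.0 ** 2)) for l in latency])
    erp += rng.normal(0.0, 0.2, erp.shape)                           # sensor noise

    # Time point by time point correlation of amplitude with reaction time
    r = np.zeros(n_times)
    p = np.ones(n_times)
    for i in range(n_times):
        r[i], p[i] = stats.pearsonr(erp[:, i], rt)
    sig = t[p < 0.01]
    print("earliest time point with p < 0.01:",
          sig[0] if sig.size else None, "ms")

In this toy setup the correlation first becomes significant on the flank of the N1, echoing the study's observation that latency jitter, not peak amplitude, carries the RT relationship.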
AN OPTIMIZED 64X64 POINT TWO-DIMENSIONAL FAST FOURIER TRANSFORM
NASA Technical Reports Server (NTRS)
Miko, J.
1994-01-01
Scientists at Goddard have developed an efficient and powerful program, An Optimized 64x64 Point Two-Dimensional Fast Fourier Transform, which combines the performance of real- and complex-valued one-dimensional Fast Fourier Transforms (FFTs) to execute a two-dimensional FFT and its power spectrum coefficients. These coefficients can be used in many applications, including spectrum analysis, convolution, digital filtering, image processing, and data compression. The program's efficiency results from its technique of expanding all arithmetic operations within one 64-point FFT; its high processing rate results from its operation on a high-speed digital signal processor. For non-real-time analysis, the program requires as input an ASCII data file of 64x64 (4096) real-valued data points. As output, this analysis produces an ASCII data file of 64x64 power spectrum coefficients. To generate these coefficients, the program employs a row-column decomposition technique. First, it performs a radix-4 one-dimensional FFT on each row of input, producing complex-valued results. Then, it performs a one-dimensional FFT on each column of these results to produce complex-valued two-dimensional FFT results. Finally, the program sums the squares of the real and imaginary values to generate the power spectrum coefficients. The program requires a Banshee accelerator board with 128K bytes of memory from Atlanta Signal Processors (404/892-7265) installed on an IBM PC/AT-compatible computer (DOS ver. 3.0 or higher) with at least one 16-bit expansion slot. For real-time operation, an ASPI daughter board is also needed. The real-time configuration reads 16-bit integer input data directly into the accelerator board, operating on 64x64-point frames of data. The program's memory management also allows accumulation of the coefficient results. The real-time processing rate to calculate and accumulate the 64x64 power spectrum output coefficients is less than 17.0 ms. Documentation is included in the price of the program. Source code is written in C, 8086 Assembly, and Texas Instruments TMS320C30 Assembly Languages. This program is available on a 5.25 inch 360K MS-DOS format diskette. IBM and IBM PC are registered trademarks of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation.
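The row-column decomposition is easy to verify with modern tools. A minimal numpy sketch (not the original C/assembly implementation) reproduces the three steps, row FFTs, column FFTs, then the sum of squared real and imaginary parts, and checks the result against a direct 2-D transform:

    import numpy as np

    def power_spectrum_64x64(frame):
        """Row-column decomposition of a 2-D FFT: 1-D FFTs along the rows,
        1-D FFTs along the columns of that result, then the sum of squared
        real and imaginary parts as the power spectrum coefficients."""
        assert frame.shape == (64, 64)
        rows = np.fft.fft(frame, axis=1)        # 64-point FFT of each row
        full = np.fft.fft(rows, axis=0)         # 64-point FFT of each column
        return full.real ** 2 + full.imag ** 2

    rng = np.random.default_rng(0)
    frame = rng.random((64, 64))
    coeffs = power_spectrum_64x64(frame)
    # The row-column result matches a direct 2-D transform
    print(np.allclose(coeffs, np.abs(np.fft.fft2(frame)) ** 2))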
Non-reciprocity in nonlinear elastodynamics
NASA Astrophysics Data System (ADS)
Blanchard, Antoine; Sapsis, Themistoklis P.; Vakakis, Alexander F.
2018-01-01
Reciprocity is a fundamental property of linear time-invariant (LTI) acoustic waveguides governed by self-adjoint operators with symmetric Green's functions. The break of reciprocity in LTI elastodynamics is only possible through the break of time reversal symmetry on the micro-level, and this can be achieved by imposing external biases, adding nonlinearities or allowing for time-varying system properties. We present a Volterra-series based asymptotic analysis for studying spatial non-reciprocity in a class of one-dimensional (1D), time-invariant elastic systems with weak stiffness nonlinearities. We show that nonlinearity is neither necessary nor sufficient for breaking reciprocity in this class of systems; rather, it depends on the boundary conditions, the symmetries of the governing linear and nonlinear operators, and the choice of the spatial points where the non-reciprocity criterion is tested. Extension of the analysis to higher dimensions and time-varying systems is straightforward from a mathematical point of view (but not in terms of new non-reciprocal physical phenomena), whereas the connection of non-reciprocity and time irreversibility can be studied as well. Finally, we show that suitably defined non-reciprocity measures enable optimization, and can provide physical understanding of the nonlinear effects in the dynamics, enabling one to establish regimes of "maximum nonlinearity." We highlight the theoretical developments by means of a numerical example.
Topical nasal decongestant oxymetazoline (0.05%) provides relief of nasal symptoms for 12 hours.
Druce, H M; Ramsey, D L; Karnati, S; Carr, A N
2018-05-22
Nasal congestion, often referred to as stuffy nose or blocked nose, is one of the most prevalent and bothersome symptoms of an upper respiratory tract infection. Oxymetazoline, a widely used intranasal decongestant, offers fast symptom relief, but little is known about the duration of effect. We report the results of 2 randomized, double-blind, vehicle-controlled, single-dose, parallel clinical studies (Study 1, n=67; Study 2, n=61) in which the efficacy of an oxymetazoline (0.05% Oxy) nasal spray in patients with acute coryzal rhinitis was assessed over a 12-hour period. Data were collected on both subjective relief of nasal congestion (6-point nasal congestion scale) and objective measures of nasal patency (anterior rhinomanometry) in both studies. A pooled analysis showed statistically significant changes from baseline in subjective nasal congestion for 0.05% oxymetazoline and vehicle at each hourly time point from Hour 1 through Hour 12 (marginally significant at Hour 11). An objective measure of nasal flow was statistically significant at each time point up to 12 hours. Adverse events on either treatment were infrequent. The number of subjects who achieved an improvement in subjective nasal congestion scores of at least 1.0 on the 6-point scale was significantly higher in the Oxy group vs. vehicle at all hourly time points. This study shows for the first time that oxymetazoline provides both statistically significant and clinically meaningful relief of nasal congestion and improves nasal airflow for up to 12 hours following a single dose.
NASA Astrophysics Data System (ADS)
McCrea, Terry
The Shuttle Processing Contract (SPC) workforce consists of Lockheed Space Operations Co. as prime contractor, with Grumman, Thiokol Corporation, and Johnson Controls World Services as subcontractors. During the design phase, reliability engineering is instrumental in influencing the development of systems that meet the Shuttle fail-safe program requirements. Reliability engineers accomplish this objective by performing FMEA (failure modes and effects analysis) to identify potential single failure points. When technology, time, or resources do not permit a redesign to eliminate a single failure point, the single failure point information is formatted into a change request and presented to senior management of SPC and NASA for risk acceptance. In parallel with the FMEA, safety engineering conducts a hazard analysis to assure that potential hazards to personnel are assessed. The combined effort (FMEA and hazard analysis) is published as a system assurance analysis. Special ground rules and techniques are developed to perform and present the analysis. The reliability program at KSC is vigorously pursued, and has been extremely successful. The ground support equipment and facilities used to launch and land the Space Shuttle maintain an excellent reliability record.
Wardley, C Sonia; Applegate, E Brooks; Almaleki, A Deyab; Van Rhee, James A
2016-03-01
A 6-year longitudinal study was conducted to compare the perceived stress experienced during a 2-year master's physician assistant program by 5 cohorts of students enrolled in either problem-based learning (PBL) or lecture-based learning (LBL) curricular tracks. The association of perceived stress with academic achievement was also assessed. Students rated their stress levels on visual analog scales in relation to family obligations, financial concerns, schoolwork, and relocation and overall on 6 occasions throughout the program. A mixed model analysis of variance examined the students' perceived level of stress by curriculum and over time. Regression analysis further examined school work-related stress after controlling for other stressors and possible lag effect of stress from the previous time point. Students reported that overall stress increased throughout the didactic year followed by a decline in the clinical year with statistically significant curricular (PBL versus LBL) and time differences. PBL students also reported significantly more stress resulting from school work than LBL students at some time points. Moreover, when the other measured stressors and possible lag effects were controlled, significant differences between PBL and LBL students' perceived stress related to school work persisted at the 8- and 12-month measurement points. Increased stress in both curricula was associated with higher achievement in overall and individual organ system examination scores. Physician assistant programs that embrace a PBL pedagogy to prepare students to think clinically may need to provide students with additional support through the didactic curriculum.
Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model
NASA Astrophysics Data System (ADS)
Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.
2009-05-01
Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging using the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar imaging sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For the quantitative analysis, CBF was calculated by deconvolution with the AIF, which was obtained from the 10 voxels with the greatest contrast enhancement. For the semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curves. We observed that, if the AIFs obtained in the three different ROIs (whole brain, hemisphere without lesion, and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses showed a similar trend at the different operative time points; if the AIFs were different, the CBF ratios could differ as well. We concluded that, using local maxima, one can define a proper AIF without knowing the anatomical location of the arteries in a stroke rat model.
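The semi-quantitative estimate (integral of the relaxivity curve divided by its first moment) is a one-liner once the signal is converted to a relaxivity change. The sketch below uses a synthetic gamma-variate bolus and the standard signal conversion; all parameter values are illustrative, not the study's acquisition settings.

    import numpy as np

    def delta_r2(signal, s0, te):
        """Standard DSC conversion: dR2*(t) = -ln(S(t)/S0) / TE."""
        return -np.log(signal / s0) / te

    def semi_quantitative_rcbf(conc, t):
        """Relative CBF proxy: integral of the relaxivity curve divided by
        its first moment (rectangle rule on a uniform time grid)."""
        dt = t[1] - t[0]
        area = np.sum(conc) * dt
        first_moment = np.sum(t * conc) * dt
        return area / first_moment

    # Synthetic bolus passage: gamma-variate concentration, simple signal model
    t = np.linspace(0.01, 60.0, 400)               # s
    true_conc = t ** 3 * np.exp(-t / 4.0)          # gamma-variate shape
    s0, te = 1000.0, 0.08                          # baseline signal, TE = 80 ms
    signal = s0 * np.exp(-te * 0.02 * true_conc)   # signal drop during passage
    conc = delta_r2(signal, s0, te)                # recovered relaxivity curve
    print("relative CBF proxy:", semi_quantitative_rcbf(conc, t))

Because it needs no AIF, this proxy sidesteps the resolution problem in rodents, at the price of yielding only relative rather than absolute CBF.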
Data simulation for the Lightning Imaging Sensor (LIS)
NASA Technical Reports Server (NTRS)
Boeck, William L.
1991-01-01
This project aims to build a data analysis system that will utilize existing videotape scenes of lightning as viewed from space. The resulting data will be used for the design and development of the Lightning Imaging Sensor (LIS) software and algorithm analysis. The desire for statistically significant metrics implies that a large data set needs to be analyzed. Before 1990 the quality and quantity of video were insufficient to build a usable data set. At this point in time, there is usable data from missions STS-34, STS-32, STS-31, STS-41, STS-37, and STS-39. During the summer of 1990, a manual analysis system was developed to demonstrate that the video analysis is feasible and to identify techniques to deduce information that was not directly available. Because the closed-circuit television system used on the Space Shuttle was intended for documentary TV, the current value of the camera focal length and pointing orientation, which are needed for photoanalysis, are not included in the system data. A large effort was needed to discover ancillary data sources as well as to develop indirect methods to estimate the necessary parameters. Any data system coping with full-motion video faces an enormous bottleneck produced by the large data production rate and the need to move and store the digitized images. The manual system bypassed the video digitizing bottleneck by using a genlock to superimpose pixel coordinates on full-motion video. Because the data set had to be obtained point by point by a human operating a computer mouse, the data output rate was small. The loan and subsequent acquisition of an Abekas digital frame store with a real-time digitizer moved the bottleneck from data acquisition to a problem of data transfer and storage. The semi-automated analysis procedure was developed using existing equipment and is described. A fully automated system is described in the hope that the components may come on the market at reasonable prices in the next few years.
Stock, J D; Calderón Díaz, J A; Rothschild, M F; Mote, B E; Stalder, K J
2018-06-09
Feet and legs of replacement females were objectively evaluated at selection, i.e. approximately 150 days of age (n=319), and post first parity, i.e. any time after weaning of the first litter and before the second parturition (n=277), to 1) compare feet and leg joint angle ranges between selection and post first parity; 2) identify feet and leg joint angle differences between selection and the first three weeks of the second gestation; 3) identify feet and leg joint angle differences between farms and gestation days during the second gestation; and 4) obtain genetic variance components for conformation angles at the two time points measured. Angles for the carpal joint (knee), metacarpophalangeal joint (front pastern), metatarsophalangeal joint (rear pastern), tarsal joint (hock), and rear stance were measured using image analysis software. Between selection and post first parity, significant differences were observed for all joints measured (P < 0.05). Knee, front pastern, and rear pastern angles were smaller (more flexion), and hock angles were greater (less flexion), as age progressed (P < 0.05), while the rear stance angle was smaller (feet further under center) at selection than post first parity (only including measures during the first three weeks of the second gestation). Using only post-first-parity leg conformation information, farm was a significant source of variation for front and rear pastern and rear stance angle measurements (P < 0.05). Knee angle was smaller (more flexion) (P < 0.05) as gestation age progressed. Heritability estimates were low to moderate (0.04-0.35) for all traits measured across time points. Genetic correlations between the same joints at different time points were high (> 0.8) for the front leg joints and low (< 0.2) for the rear leg joints. High genetic correlations between time points indicate that the trait can be considered the same at either time point, and low genetic correlations indicate that the trait at different time points should be considered as two separate traits. Minimal change in the front leg suggests conformation traits that persist between selection and post first parity, while larger changes in the rear leg indicate that rear leg conformation traits should be evaluated at multiple time periods.
Arm to leg coordination in elite butterfly swimmers.
Chollet, D; Seifert, L; Boulesteix, L; Carter, M
2006-04-01
This study proposed the use of four time gaps to assess arm-to-leg coordination in the butterfly stroke at increasing race paces. Fourteen elite male swimmers swam at four velocities corresponding to the appropriate paces for, respectively, the 400-m, 200-m, 100-m, and 50-m events. The different stroke phases of the arm and leg were identified by video analysis and then used to calculate four time gaps (T1: time gap between entry of the hands in the water and the high break-even point of the first undulation; T2: time gap between the beginning of the hands' backward movement and the low break-even point of the first undulation; T3: time gap between the hands' arrival in a vertical plane to the shoulders and the high break-even point of the second undulation; T4: time gap between the hands' release from the water and the low break-even point of the second undulation), the values of which described the changing relationship of arm to leg movements over an entire stroke cycle. With increases in pace, elite swimmers increased the stroke rate, the relative duration of the arm pull, the recovery and the first downward movement of the legs, and decreased the stroke length, the relative duration of the arm catch phase and the body glide with arms forward (measured by T2), until continuity in the propulsive actions was achieved. Whatever the paces, the T1, T3, and T4 values were close to zero and revealed a high degree of synchronisation at key motor points of the arm and leg actions. This new method to assess butterfly coordination could facilitate learning and coaching by situating the place of the leg undulation in relation with the arm stroke.
Economic Efficiency and Investment Timing for Dual Water Systems
NASA Astrophysics Data System (ADS)
Leconte, Robert; Hughes, Trevor C.; Narayanan, Rangesan
1987-10-01
A general methodology to evaluate the economic feasibility of dual water systems is presented. In a first step, a static analysis (evaluation at a single point in time) is developed. The analysis requires the evaluation of consumers' and producer's surpluses from water use and the capital cost of the dual (outdoor) system. The analysis is then extended to a dynamic approach where the water demand increases with time (as a result of a population increase) and where the dual system is allowed to expand. The model determines whether construction of a dual system represents a net benefit, and if so, what is the best time to initiate the system (corresponding to maximization of social welfare). Conditions under which an analytic solution is possible are discussed and results of an application are summarized (including sensitivity to different parameters). The analysis allows identification of key parameters influencing attractiveness of dual water systems.
Svedbom, Axel; Borgström, Fredrik; Hernlund, Emma; Ström, Oskar; Alekna, Vidmantas; Bianchi, Maria Luisa; Clark, Patricia; Curiel, Manuel Díaz; Dimai, Hans Peter; Jürisson, Mikk; Uusküla, Anneli; Lember, Margus; Kallikorm, Riina; Lesnyak, Olga; McCloskey, Eugene; Ershova, Olga; Sanders, Kerrie M; Silverman, Stuart; Tamulaitiene, Marija; Thomas, Thierry; Tosteson, Anna N A; Jönsson, Bengt; Kanis, John A
2018-03-01
The International Costs and Utilities Related to Osteoporotic fractures Study is a multinational observational study set up to describe the costs and quality of life (QoL) consequences of fragility fracture. This paper aims to estimate and compare QoL after hip, vertebral, and distal forearm fracture using time-trade-off (TTO), the EuroQol (EQ) Visual Analogue Scale (EQ-VAS), and the EQ-5D-3L valued using the hypothetical UK value set. Data were collected at four time-points for five QoL point estimates: within 2 weeks after fracture (including pre-fracture recall), and at 4, 12, and 18 months after fracture. Health state utility values (HSUVs) were derived for each fracture type and time-point using the three approaches (TTO, EQ-VAS, EQ-5D-3L). HSUVs were used to estimate accumulated QoL loss and QoL multipliers. In total, 1410 patients (505 with hip, 316 with vertebral, and 589 with distal forearm fracture) were eligible for analysis. Across all time-points for the three fracture types, TTO provided the highest HSUVs, whereas EQ-5D-3L consistently provided the lowest HSUVs directly after fracture. Except for 13-18 months after distal forearm fracture, EQ-5D-3L generated lower QoL multipliers than the other two methods, whereas no equally clear pattern was observed between EQ-VAS and TTO. On average, the most marked differences between the three approaches were observed immediately after the fracture. The approach used to derive QoL markedly influences the estimated QoL impact of fracture. Therefore, the choice of approach may be important for the outcome and interpretation of cost-effectiveness analysis of fracture prevention.
Meta-analysis of chicken--salmonella infection experiments.
Te Pas, Marinus F W; Hulsegge, Ina; Schokker, Dirkjan; Smits, Mari A; Fife, Mark; Zoorob, Rima; Endale, Marie-Laure; Rebel, Johanna M J
2012-04-24
Chicken meat and eggs can be a source of human zoonotic pathogens, especially Salmonella species, and thus pose a potential hazard for humans. Chicken lines differ in susceptibility to Salmonella and can harbor Salmonella pathogens without showing clinical signs of illness. Many investigations, including genomic studies, have examined the mechanisms by which chickens react to infection. Apart from the innate immune response, many physiological mechanisms and pathways are reported to be involved in the chicken host response to Salmonella infection. The objective of this study was to perform a meta-analysis of diverse experiments to identify general and host-specific mechanisms of the response to Salmonella challenge. Diverse chicken lines differing in susceptibility to Salmonella infection were challenged with different Salmonella serovars at several time points. Various tissues were sampled at different time points post-infection, and the resulting host transcriptional differences were investigated using different microarray platforms. The meta-analysis was performed with the R package metaMA to create lists of differentially regulated genes. These gene lists showed many similarities for different chicken breeds and tissues, and also for different Salmonella serovars measured at different times post-infection. Functional biological analysis of these differentially expressed gene lists revealed several common mechanisms for the chicken host response to Salmonella infection. The meta-analysis-specific genes (i.e. genes found differentially expressed only in the meta-analysis) confirmed and expanded the biological functional mechanisms. The meta-analysis combination of heterogeneous expression profiling data provided useful insights into the common metabolic pathways and functions of different chicken lines infected with different Salmonella serovars.
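The study itself used the R package metaMA; as a rough, language-swapped illustration only, the Python sketch below shows the inverse-normal (Stouffer) combination of one gene's per-study p-values, the kind of p-value pooling such meta-analysis tools perform. The p-values and weights are hypothetical.

```python
# Minimal sketch: inverse-normal combination of one-sided p-values for a
# single gene across k expression studies. Not the metaMA implementation.
import numpy as np
from scipy.stats import norm

def combine_pvalues_stouffer(pvals, weights=None):
    """Combine one-sided p-values across studies into a single z and p."""
    pvals = np.asarray(pvals, dtype=float)
    if weights is None:
        weights = np.ones_like(pvals)      # equal study weights
    z = norm.isf(pvals)                    # per-study z-scores
    z_comb = np.sum(weights * z) / np.sqrt(np.sum(weights ** 2))
    return z_comb, norm.sf(z_comb)         # combined z and combined p

# Hypothetical example: one gene measured in three infection experiments.
z, p = combine_pvalues_stouffer([0.04, 0.01, 0.20])
print(f"combined z = {z:.2f}, combined p = {p:.4f}")
```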
Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif
2017-05-01
Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, each with a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated, in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. To improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect the arrival times of fiducial points, such as the P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated from these arrival times and used as ECG features to generate random BSes with low latency. According to our analysis on real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with schemes that rely solely on IPIs to generate BSes, the MFBSG algorithm uses five feature values from one heartbeat cycle and can be up to five times faster than the solely IPI-based methods, achieving the design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
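A minimal sketch of the MFBSG idea, assuming the fiducial-point times have already been detected (e.g., by a wavelet detector): compute the five intervals per heartbeat and keep only their low-order bits, which carry most of the randomness. The fiducial times, the 4-bit quantization, and the helper names are illustrative, not the paper's implementation.

```python
# Hedged sketch: five interval streams per beat (RR, RQ, RS, RP, RT) are
# quantized to their low-order bits and concatenated into a binary sequence.
import numpy as np

def intervals_to_bits(intervals_ms, bits_per_interval=4):
    """Keep the low-order bits of each interval (in ms) as a bit string."""
    q = np.asarray(intervals_ms, dtype=np.int64) % (1 << bits_per_interval)
    return "".join(format(int(v), f"0{bits_per_interval}b") for v in q)

def mfbsg_bits(p, q, r, s, t):
    """p..t: arrival times (ms) of the P, Q, R, S, T peaks, one per beat."""
    rr = np.diff(r)                                 # inter-pulse intervals
    rq, rs = r[:-1] - q[:-1], s[:-1] - r[:-1]
    rp, rt = r[:-1] - p[:-1], t[:-1] - r[:-1]
    return "".join(intervals_to_bits(f) for f in (rr, rq, rs, rp, rt))

# Hypothetical fiducial times (ms) for three beats:
p = np.array([120, 930, 1745]); q = np.array([180, 990, 1800])
r = np.array([200, 1010, 1822]); s = np.array([230, 1041, 1851])
t = np.array([410, 1222, 2033])
print(mfbsg_bits(p, q, r, s, t))   # ~5x more bits per beat than IPIs alone
```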
Computerized breast parenchymal analysis on DCE-MRI
NASA Astrophysics Data System (ADS)
Li, Hui; Giger, Maryellen L.; Yuan, Yading; Jansen, Sanaz A.; Lan, Li; Bhooshan, Neha; Newstead, Gillian M.
2009-02-01
Breast density has been shown to be associated with the risk of developing breast cancer, and MRI has been recommended for screening high-risk women; however, it is still unknown how breast parenchymal enhancement on DCE-MRI is associated with breast density and breast cancer risk. Ninety-two DCE-MRI exams of asymptomatic women with normal MR findings were included in this study. The 3D breast volume was automatically segmented using a volume-growing based algorithm. The extracted breast volume was classified into fibroglandular and fatty regions based on the discriminant analysis method. The parenchymal kinetic curves within the breast fibroglandular region were extracted and categorized by use of fuzzy c-means clustering, and various parenchymal kinetic characteristics were extracted from the most enhancing voxels. Correlation analysis between the computer-extracted percent dense measures and radiologist-noted BIRADS density ratings yielded a correlation coefficient of 0.76 (p<0.0001). From kinetic analyses, 70% (64/92) of most enhancing curves showed a persistent curve type and reached peak parenchymal intensity at the last postcontrast time point, with 89% (82/92) of most enhancing curves reaching peak intensity at either the 4th or 5th post-contrast time point. Women with dense breasts (BIRADS 3 and 4) were found to have more parenchymal enhancement at their peak time point (Ep), with an average Ep of 116.5%, while women with fatty breasts (BIRADS 1 and 2) demonstrated an average Ep of 62.0%. In conclusion, breast parenchymal enhancement may be associated with breast density and may be potentially useful as an additional characteristic for assessing breast cancer risk.
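As a small sketch of the kinetic summary quoted above: percent enhancement is measured relative to the pre-contrast intensity, and Ep is the peak of that curve. The intensity values below are hypothetical.

```python
# Percent enhancement E(t) = (S(t) - S0) / S0 * 100 and peak enhancement Ep.
import numpy as np

def percent_enhancement(signal):
    """signal[0] is the pre-contrast point; returns E(t) in percent."""
    s0 = float(signal[0])
    return (np.asarray(signal, dtype=float) - s0) / s0 * 100.0

curve = [100, 130, 165, 190, 205, 216]   # pre-contrast + 5 post-contrast points
e = percent_enhancement(curve)
ep, t_peak = e.max(), int(e.argmax())    # peak enhancement and its time point
print(f"Ep = {ep:.1f}% at post-contrast time point {t_peak}")
```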
The Stability of Perceived Pubertal Timing across Adolescence
Cance, Jessica Duncan; Ennett, Susan T.; Morgan-Lopez, Antonio A.; Foshee, Vangie A.
2011-01-01
It is unknown whether perceived pubertal timing changes as puberty progresses or whether it is an important component of adolescent identity formation that is fixed early in pubertal development. The purpose of this study is to examine the stability of perceived pubertal timing among a school-based sample of rural adolescents aged 11 to 17 (N=6,425; 50% female; 53% White). Two measures of pubertal timing were used: stage-normative, based on the Pubertal Development Scale, a self-report scale of secondary sexual characteristics; and peer-normative, a one-item measure of perceived pubertal timing. Two longitudinal methods were used: one-way random effects ANOVA models and latent class analysis. When calculating intraclass correlation coefficients using the one-way random effects ANOVA models, which is based on the average reliability from one time point to the next, both measures had similar, but poor, stability. In contrast, latent class analysis, which looks at the longitudinal response pattern of each individual and treats deviation from that pattern as measurement error, showed three stable and distinct response patterns for both measures: always early, always on-time, and always late. Study results suggest instability in perceived pubertal timing from one age to the next, but this instability is likely due to measurement error. Thus, it may be necessary to take into account the longitudinal pattern of perceived pubertal timing across adolescence rather than measuring perceived pubertal timing at one point in time. PMID:21983873
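A minimal sketch of the intraclass correlation from a one-way random effects ANOVA, ICC(1,1) = (MSB − MSW) / (MSB + (k − 1)·MSW), computed from a subjects × time-points score matrix. The simulated data below are hypothetical stand-ins for the study's pubertal-timing scores.

```python
# ICC(1,1) from a one-way random effects ANOVA on repeated measures.
import numpy as np

def icc_oneway(y):
    """y: (n subjects, k repeated measures), no missing values."""
    n, k = y.shape
    grand = y.mean()
    msb = k * np.sum((y.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
    msw = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
subject_effect = rng.normal(0, 1.0, size=(200, 1))              # stable trait
scores = subject_effect + rng.normal(0, 1.0, size=(200, 4))     # 4 waves + noise
print(f"ICC(1,1) = {icc_oneway(scores):.2f}")   # ~0.5 for these variances
```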
Microarray characterization of gene expression changes in blood during acute ethanol exposure
2013-01-01
Background As part of the civil aviation safety program to define the adverse effects of ethanol on flying performance, we performed a DNA microarray analysis of human whole blood samples from a five-time point study of subjects administered ethanol orally, followed by breathalyzer analysis to monitor blood alcohol concentration (BAC), to discover significant gene expression changes in response to ethanol exposure. Methods Subjects were administered either orange juice or orange juice with ethanol. Blood samples were taken based on BAC, and total RNA was isolated from PaxGene™ blood tubes. The amplified cDNA was used in microarray and quantitative real-time polymerase chain reaction (RT-qPCR) analyses to evaluate differential gene expression. Microarray data were summarized and normalized in a pipeline fashion, and the results were evaluated for relative expression across time points with multiple methods. Candidate genes showing distinctive expression patterns in response to ethanol were clustered by pattern and further analyzed for related function, pathway membership, and common transcription factor binding within and across clusters. RT-qPCR was used with representative genes to confirm that relative transcript levels across time matched those detected in the microarrays. Results Microarray analysis of samples representing 0%, 0.04%, 0.08%, return to 0.04%, and 0.02% wt/vol BAC showed that changes in gene expression could be detected across the time course. The expression changes were verified by RT-qPCR. The candidate genes of interest (GOI) identified from the microarray analysis, clustered by expression pattern across the five BAC points, formed seven coordinately expressed groups. Analysis showed function-based networks, shared transcription factor binding sites, and signaling pathways for members of the clusters. These include hematological functions, innate immunity and inflammation functions, metabolic functions expected of ethanol metabolism, and pancreatic and hepatic function. Five of the seven clusters showed links to the p38 MAPK pathway. Conclusions The results of this study provide a first look at changing gene expression patterns in human blood during an acute rise in blood ethanol concentration and its depletion due to metabolism and excretion, and demonstrate that it is possible to detect changes in gene expression using total RNA isolated from whole blood. The analysis approach for this study serves as a workflow to investigate the biology linked to expression changes across a time course and, from these changes, to identify target genes that could serve as biomarkers linked to pilot performance. PMID:23883607
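One generic way to group genes by expression pattern across the five BAC points, sketched below under stated assumptions: standardize each gene's profile and apply k-means. The expression matrix is simulated and the choice of k-means with seven clusters simply mirrors the seven groups reported; the study's own clustering pipeline may have differed.

```python
# Sketch: cluster standardized per-gene time-course profiles with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
expr = rng.normal(size=(300, 5))                 # 300 genes x 5 BAC time points
z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)

km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(z)
for c in range(7):
    members = np.where(km.labels_ == c)[0]
    print(f"cluster {c}: {len(members)} genes, "
          f"centroid {km.cluster_centers_[c].round(2)}")
```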
NASA Astrophysics Data System (ADS)
Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.
2017-01-01
Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
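A minimal sketch of the standard TSA ranking step: for each sampling point, compute the mean relative difference (MRD) from the grid mean and its standard deviation over time; the most representative point sits near MRD = 0 with a small SD. The soil-moisture matrix and the combined ranking score are illustrative; the stratified STSA extension is not shown.

```python
# Temporal stability analysis: rank sampling points by closeness of their
# relative difference from the grid mean to (0, 0) in (MRD, SD) space.
import numpy as np

def tsa_rank(theta):
    """theta: (n time points, m sampling points) volumetric soil moisture."""
    grid_mean = theta.mean(axis=1, keepdims=True)
    rel_diff = (theta - grid_mean) / grid_mean          # delta_ij
    mrd = rel_diff.mean(axis=0)                          # per-point bias
    sdrd = rel_diff.std(axis=0, ddof=1)                  # per-point stability
    score = np.sqrt(mrd ** 2 + sdrd ** 2)                # one illustrative score
    return mrd, sdrd, np.argsort(score)

rng = np.random.default_rng(2)
offsets = rng.normal(0, 0.04, size=(1, 20))              # persistent site biases
temporal = rng.normal(0, 0.05, size=(60, 1))             # shared wetting/drying
theta = np.clip(0.25 + offsets + temporal
                + rng.normal(0, 0.01, (60, 20)), 0.05, 0.45)
mrd, sdrd, order = tsa_rank(theta)
print("most representative points:", order[:3])
```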
2011-01-01
Background The Prospective Space-Time scan statistic (PST) is widely used for the evaluation of space-time clusters of point event data. Usually a window of cylindrical shape is employed, with a circular or elliptical base in the space domain. Recently, the concept of the Minimum Spanning Tree (MST) was applied to specify the set of potential clusters, through the Density-Equalizing Euclidean MST (DEEMST) method, for the detection of arbitrarily shaped clusters. The original map is cartogram transformed, such that the control points are spread uniformly. That method is quite effective, but the cartogram construction is computationally expensive and complicated. Results A fast method for the detection and inference of point data set space-time disease clusters is presented, the Voronoi Based Scan (VBScan). A Voronoi diagram is built for points representing population individuals (cases and controls). The number of Voronoi cell boundaries intercepted by the line segment joining two case points defines the Voronoi distance between those points. That distance is used to approximate the density of the heterogeneous population and build the Voronoi distance MST linking the cases. The successive removal of edges from the Voronoi distance MST generates sub-trees which are the potential space-time clusters. Finally, those clusters are evaluated through the scan statistic. Monte Carlo replications of the original data are used to evaluate the significance of the clusters. An application for dengue fever in a small Brazilian city is presented. Conclusions The ability to promptly detect space-time clusters of disease outbreaks, when the number of individuals is large, was shown to be feasible, due to the reduced computational load of VBScan. Instead of changing the map, VBScan modifies the metric used to define the distance between cases, without requiring the cartogram construction. Numerical simulations showed that VBScan has higher power of detection, sensitivity and positive predictive value than the Elliptic PST. Furthermore, as VBScan also incorporates topological information from the point neighborhood structure, in addition to the usual geometric information, it is more robust than purely geometric methods such as the elliptic scan. Those advantages were illustrated in a real setting for dengue fever space-time clusters. PMID:21513556
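The inference step shared by scan methods of this family can be sketched compactly: a Bernoulli scan log-likelihood ratio for a candidate zone, with Monte Carlo permutations of the case labels to obtain a p-value. The construction of candidate zones via the Voronoi-distance MST is omitted here, and the example zone is a random hypothetical one, so this is a sketch of the generic machinery, not of VBScan itself.

```python
# Bernoulli scan LLR for one candidate zone + Monte Carlo significance.
import numpy as np

def bernoulli_llr(c_in, n_in, c_tot, n_tot):
    """Log-likelihood ratio for c_in cases among n_in individuals in the zone."""
    def xlogy(x, y):
        return 0.0 if x == 0 else x * np.log(y)
    p, q, r = c_in / n_in, (c_tot - c_in) / (n_tot - n_in), c_tot / n_tot
    if p <= q:                       # only scan for elevated risk inside
        return 0.0
    return (xlogy(c_in, p) + xlogy(n_in - c_in, 1 - p)
            + xlogy(c_tot - c_in, q) + xlogy(n_tot - n_in - (c_tot - c_in), 1 - q)
            - xlogy(c_tot, r) - xlogy(n_tot - c_tot, 1 - r))

def monte_carlo_pvalue(zone, is_case, n_rep=999, seed=0):
    """zone: boolean mask of individuals in the candidate cluster."""
    rng = np.random.default_rng(seed)
    n_tot, c_tot, n_in = len(is_case), int(is_case.sum()), int(zone.sum())
    obs = bernoulli_llr(int(is_case[zone].sum()), n_in, c_tot, n_tot)
    hits = sum(bernoulli_llr(int(rng.permutation(is_case)[zone].sum()),
                             n_in, c_tot, n_tot) >= obs
               for _ in range(n_rep))
    return (hits + 1) / (n_rep + 1)

# Hypothetical example: 500 individuals, 40 cases, 60 individuals in the zone.
rng = np.random.default_rng(3)
is_case = np.zeros(500, dtype=bool); is_case[:40] = True
zone = np.zeros(500, dtype=bool); zone[rng.choice(500, 60, replace=False)] = True
print(monte_carlo_pvalue(zone, is_case))
```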
Lang, Paul Z; Thulasi, Praneetha; Khandelwal, Sumitra S; Hafezi, Farhad; Randleman, J Bradley
2018-05-02
To evaluate the correlation between anterior axial curvature difference maps following corneal cross-linking (CXL) for progressive keratoconus obtained from Scheimpflug-based tomography and Placido-based topography. DESIGN: Between-device reliability analysis of randomized clinical trial data. METHODS: Corneal imaging was collected at a single-center institution pre-operatively and at 3, 6, and 12 months post-operatively using Scheimpflug-based tomography (Pentacam, Oculus Inc., Lynnwood, WA) and scanning-slit, Placido-based topography (Orbscan II, Bausch & Lomb, Rochester, NY) in patients with progressive keratoconus receiving standard protocol CXL (3 mW/cm² for 30 minutes). Regularization index (RI), absolute maximum keratometry (K Max), and change in K Max (ΔK Max) were compared between the two devices at each time point. 51 eyes from 36 patients were evaluated at all time points. K Max values were significantly different at all time points [56.01±5.3D Scheimpflug vs. 55.04±5.1D scanning-slit pre-operatively (p=0.003); 54.58±5.3D Scheimpflug vs. 53.12±4.9D scanning-slit at 12 months (p<0.0001)] but strongly correlated between devices (r=0.90-0.93) at all time points. The devices were not significantly different at any time point for either ΔK Max or RI but were poorly correlated at all time points (r=0.41-0.53 for ΔK Max, r=0.29-0.48 for RI). At 12 months, the 95% LOA was 7.51D for absolute K Max, 8.61D for ΔK Max, and 19.86D for RI. Measurements using Scheimpflug and scanning-slit Placido-based technology are correlated but not interchangeable. Both devices appear reasonable for separately monitoring the cornea's response to CXL; however, caution should be used when comparing results obtained with one measuring technology to the other. Copyright © 2018 Elsevier Inc. All rights reserved.
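The agreement statistics reported above combine a correlation coefficient with Bland-Altman 95% limits of agreement (bias ± 1.96 × SD of the paired differences). A minimal sketch, using hypothetical K Max values:

```python
# Pearson correlation plus Bland-Altman bias and 95% limits of agreement.
import numpy as np

def agreement(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    r = np.corrcoef(a, b)[0, 1]
    d = a - b
    bias, sd = d.mean(), d.std(ddof=1)
    return r, bias, (bias - 1.96 * sd, bias + 1.96 * sd)

scheimpflug = np.array([56.0, 54.2, 58.9, 51.3, 60.1])     # hypothetical K Max (D)
scanning_slit = np.array([55.1, 53.0, 57.2, 50.6, 58.3])
r, bias, loa = agreement(scheimpflug, scanning_slit)
print(f"r = {r:.2f}, bias = {bias:.2f} D, "
      f"95% LOA = ({loa[0]:.2f}, {loa[1]:.2f}) D")
```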
Shafer, Steven L; Lemmer, Bjoern; Boselli, Emmanuel; Boiste, Fabienne; Bouvet, Lionel; Allaouchiche, Bernard; Chassard, Dominique
2010-10-01
The duration of analgesia from epidural administration of local anesthetics to parturients has been shown to follow a rhythmic pattern according to the time of drug administration. We studied whether there was a similar pattern after intrathecal administration of bupivacaine in parturients. In the course of the analysis, we came to believe that some data points coincident with provider shift changes were influenced by nonbiological, health care system factors, thus incorrectly suggesting a periodic signal in duration of labor analgesia. We developed graphical and analytical tools to help assess the influence of individual points on the chronobiological analysis. Women with singleton term pregnancies in vertex presentation, cervical dilation 3 to 5 cm, pain score >50 mm (of 100 mm), and requesting labor analgesia were enrolled in this study. Patients received 2.5 mg of intrathecal bupivacaine in 2 mL using a combined spinal-epidural technique. Analgesia duration was the time from intrathecal injection until the first request for additional analgesia. The duration of analgesia was analyzed by visual inspection of the data, application of smoothing functions (Supersmoother; LOWESS and LOESS [locally weighted scatterplot smoothing functions]), analysis of variance, Cosinor (Chronos-Fit), Excel, and NONMEM (nonlinear mixed effect modeling). Confidence intervals (CIs) were determined by bootstrap analysis (1000 replications with replacement) using PLT Tools. Eighty-two women were included in the study. Examination of the raw data using 3 smoothing functions revealed a bimodal pattern, with a peak at approximately 0630 and a subsequent peak in the afternoon or evening, depending on the smoother. Analysis of variance did not identify any statistically significant difference between the duration of analgesia when intrathecal injection was given from midnight to 0600 compared with the duration of analgesia after intrathecal injection at other times. Chronos-Fit, Excel, and NONMEM produced identical results, with a mean duration of analgesia of 38.4 minutes (95% CI: 35.4-41.6 minutes), an 8-hour periodic waveform with an amplitude of 5.8 minutes (95% CI: 2.1-10.7 minutes), and a phase offset of 6.5 hours (95% CI: 5.4-8.0 hours) relative to midnight. The 8-hour periodic model did not reach statistical significance in 40% of bootstrap analyses, implying that statistical significance of the 8-hour periodic model was dependent on a subset of the data. Two data points before the change of shift at 0700 contributed most strongly to the statistical significance of the periodic waveform. Without these data points, there was no evidence of an 8-hour periodic waveform for intrathecal bupivacaine analgesia. Chronobiology includes the influence of external daily rhythms in the environment (e.g., nursing shifts) as well as human biological rhythms. We were able to distinguish the influence of an external rhythm by combining several novel analyses: (1) graphical presentation superimposing the raw data, external rhythms (e.g., nursing and anesthesia provider shifts), and smoothing functions; (2) graphical display of the contribution of each data point to the statistical significance; and (3) bootstrap analysis to identify whether the statistical significance was highly dependent on a data subset. These approaches suggested that 2 data points were likely artifacts of the change in nursing and anesthesia shifts. 
When these points were removed, there was no suggestion of biological rhythm in the duration of intrathecal bupivacaine analgesia.
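A cosinor-type fit of the kind used above reduces to linear least squares: fit y = M + a·cos(ωt) + b·sin(ωt), then recover amplitude A = √(a² + b²) and phase from (a, b); a bootstrap with replacement gives the CI. The sketch below uses simulated analgesia durations, not the study's data, and plain NumPy rather than Chronos-Fit or NONMEM.

```python
# Cosinor fit by linear least squares + bootstrap CI for the amplitude.
import numpy as np

def cosinor(t, y, period):
    w = 2 * np.pi * t / period
    X = np.column_stack([np.ones_like(t), np.cos(w), np.sin(w)])
    m, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    amp = np.hypot(a, b)                                   # waveform amplitude
    phase = (np.arctan2(b, a) * period / (2 * np.pi)) % period
    return m, amp, phase

rng = np.random.default_rng(4)
t = rng.uniform(0, 24, 82)                                 # injection clock times (h)
y = 38 + 6 * np.cos(2 * np.pi * (t - 6.5) / 8) + rng.normal(0, 10, 82)

amps = []
for _ in range(1000):                                      # bootstrap, with replacement
    i = rng.integers(0, len(t), len(t))
    amps.append(cosinor(t[i], y[i], 8)[1])
lo, hi = np.percentile(amps, [2.5, 97.5])
print(f"amplitude = {cosinor(t, y, 8)[1]:.1f} min, 95% CI ({lo:.1f}, {hi:.1f})")
```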
Smoke-Point Properties of Non-Buoyant Round Laminar Jet Diffusion Flames. Appendix J
NASA Technical Reports Server (NTRS)
Urban, D. L.; Yuan, Z.-G.; Sunderland, P. B.; Lin, K.-C.; Dai, Z.; Faeth, G. M.
2000-01-01
The laminar smoke-point properties of non-buoyant round laminar jet diffusion flames were studied emphasizing results from long-duration (100-230 s) experiments at microgravity carried out in orbit aboard the space shuttle Columbia. Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K, pressures of 35-130 kPa, jet exit diameters of 1.6 and 2.7 mm, jet exit velocities of 170-690 mm/s, jet exit Reynolds numbers of 46-172, characteristic flame residence times of 40-302 ms, and luminous flame lengths of 15-63 mm. Contrary to the normal-gravity laminar smoke point, in microgravity, the onset of laminar smoke-point conditions involved two flame configurations: closed-tip flames with soot emissions along the flame axis and open-tip flames with soot emissions from an annular ring about the flame axis. Open-tip flames were observed at large characteristic flame residence times with the onset of soot emissions associated with radiative quenching near the flame tip; nevertheless, unified correlations of laminar smoke-point properties were obtained that included both flame configurations. Flame lengths at laminar smoke-point conditions were well correlated in terms of a corrected fuel flow rate suggested by a simplified analysis of flame shape. The present steady and non-buoyant flames emitted soot more readily than non-buoyant flames in earlier tests using ground-based microgravity facilities and than buoyant flames at normal gravity, as a result of reduced effects of unsteadiness, flame disturbances, and buoyant motion. For example, present measurements of laminar smoke-point flame lengths at comparable conditions were up to 2.3 times shorter than ground-based microgravity measurements and up to 6.4 times shorter than buoyant flame measurements. Finally, present laminar smoke-point flame lengths were roughly inversely proportional to pressure, to a degree somewhat smaller than observed during earlier tests both at microgravity (using ground-based facilities) and at normal gravity.
Kumar, Rajeev; Pitcher, Tony J.; Varkey, Divya A.
2017-01-01
We present a comprehensive analysis of estimation of fisheries Maximum Sustainable Yield (MSY) reference points using an ecosystem model built for Mille Lacs Lake, the second largest lake within Minnesota, USA. Data from single-species modelling output, extensive annual sampling for species abundances, annual catch-survey, stomach-content analysis for predator-prey interactions, and expert opinions were brought together within the framework of an Ecopath with Ecosim (EwE) ecosystem model. An increase in the lake water temperature was observed in the last few decades; therefore, we also incorporated a temperature forcing function in the EwE model to capture the influences of changing temperature on the species composition and food web. The EwE model was fitted to abundance and catch time-series for the period 1985 to 2006. Using the ecosystem model, we estimated reference points for most of the fished species in the lake at single-species as well as ecosystem levels, with and without considering the influence of temperature change; therefore, our analysis investigated the trophic and temperature effects on the reference points. The paper concludes that reference points such as MSY are not stationary, but change when (1) environmental conditions alter species productivity and (2) fishing on predators alters the compensatory response of their prey. Thus, it is necessary for management to re-estimate or re-evaluate the reference points when changes in environmental conditions and/or major shifts in species abundance or community structure are observed. PMID:28957387
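As a deliberately simplified illustration of why MSY is not stationary (this is a single-species logistic surplus-production toy, not the EwE ecosystem model used in the study): under logistic growth, MSY = rK/4 and B_MSY = K/2, so any environmental effect on the intrinsic rate r shifts MSY directly. All parameter values below are hypothetical.

```python
# Logistic surplus-production reference points; a temperature-driven change
# in productivity (r) moves MSY, illustrating the non-stationarity argument.
def logistic_reference_points(r, K):
    """Single-species MSY reference points under logistic growth."""
    return {"MSY": r * K / 4, "B_MSY": K / 2, "F_MSY": r / 2}

base = logistic_reference_points(r=0.40, K=10_000)          # tonnes
warm = logistic_reference_points(r=0.40 * 0.85, K=10_000)   # 15% productivity loss
print(base)   # {'MSY': 1000.0, 'B_MSY': 5000.0, 'F_MSY': 0.2}
print(warm)   # MSY drops to 850 t: the reference point has moved
```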
Limited sampling strategy for determining metformin area under the plasma concentration–time curve
Santoro, Ana Beatriz; Stage, Tore Bjerregaard; Struchiner, Claudio José; Christensen, Mette Marie Hougaard; Brosen, Kim
2016-01-01
Aim The aim was to develop and validate limited sampling strategy (LSS) models to predict the area under the plasma concentration-time curve (AUC) for metformin. Methods Metformin plasma concentrations (n = 627) at 0-24 h after a single 500 mg dose were used for LSS development, based on all-subsets linear regression analysis. The LSS-derived AUC(0,24 h) was compared with the parameter 'best estimate' obtained by non-compartmental analysis using all plasma concentration data points. Correlation between the LSS-derived and best estimated AUC(0,24 h) (r²), and the bias and precision of the LSS estimates, were quantified. The LSS models were validated in independent cohorts. Results A two-point (3 h and 10 h) regression equation with no intercept accurately estimated the individual AUC(0,24 h) in the development cohort: r² = 0.927, bias (mean, 95% CI) −0.5% (−2.7 to 1.8%) and precision 6.3% (4.9 to 7.7%). The accuracy of the two-point LSS model was verified in study cohorts of individuals receiving single 500 or 1000 mg doses (r² = 0.933-0.934) or seven 1000 mg daily doses (r² = 0.918), as well as using data from 16 published studies covering a wide range of metformin doses, demographics, and clinical and experimental conditions (r² = 0.976). The LSS model reproduced previously reported results for the effects of polymorphisms in the OCT2 and MATE1 genes on AUC(0,24 h) and renal clearance of metformin. Conclusions The two-point LSS algorithm may be used to assess systemic exposure to metformin under diverse conditions, with reduced costs of sampling and analysis, saving time for both subjects and investigators. PMID:27324407
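The development step can be sketched directly: regress the full-profile (trapezoidal, non-compartmental) AUC on the 3 h and 10 h concentrations with no intercept, then apply the fitted coefficients to new subjects. The pharmacokinetic profiles below are simulated from a generic one-compartment model, so the coefficients printed are illustrative, not the published ones.

```python
# Two-point LSS: AUC(0,24 h) ~ b1*C(3 h) + b2*C(10 h), no intercept.
import numpy as np

t_full = np.array([0, 1, 2, 3, 4, 6, 8, 10, 12, 24], dtype=float)

def fit_lss(profiles):
    """profiles: (n subjects, len(t_full)) metformin concentrations."""
    auc_best = np.trapz(profiles, t_full, axis=1)    # non-compartmental AUC
    X = profiles[:, [3, 7]]                          # C at 3 h and 10 h
    coef, *_ = np.linalg.lstsq(X, auc_best, rcond=None)
    return coef, auc_best, X @ coef

rng = np.random.default_rng(5)
ka, ke = 1.2, np.exp(rng.normal(np.log(0.14), 0.25, size=(40, 1)))
profiles = 8.0 * (np.exp(-ke * t_full) - np.exp(-ka * t_full)) / (ka - ke)
coef, auc_best, auc_lss = fit_lss(profiles)
r2 = np.corrcoef(auc_best, auc_lss)[0, 1] ** 2
print(f"AUC ~ {coef[0]:.2f}*C3 + {coef[1]:.2f}*C10, r^2 = {r2:.3f}")
```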
Analysis of Land Subsidence Monitoring in Mining Area with Time-Series InSAR Technology
NASA Astrophysics Data System (ADS)
Sun, N.; Wang, Y. J.
2018-04-01
Time-series InSAR technology has become a popular land subsidence monitoring method in recent years because of its advantages, such as high accuracy, wide coverage, low cost, dense monitoring points, and freedom from accessibility restrictions. In this paper, we applied two kinds of satellite data, ALOS PALSAR and RADARSAT-2, to obtain subsidence monitoring results for the study area over two time periods using time-series InSAR technology. By analyzing the deformation range, rate, and magnitude, a time-series analysis of land subsidence in the mining area was realized. The results show that InSAR technology can monitor land subsidence over large areas and meets the demand for subsidence monitoring in mining areas.
Zhang, Yuji
2015-01-01
Molecular networks act as the backbone of molecular activities within cells, offering a unique opportunity to better understand the mechanism of diseases. While network data usually constitute only static network maps, integrating them with time course gene expression information can provide clues to the dynamic features of these networks and unravel the mechanistic driver genes characterizing cellular responses. Time course gene expression data allow us to broadly "watch" the dynamics of the system. However, one challenge in the analysis of such data is to establish and characterize the interplay among genes that are altered at different time points in the context of a biological process or functional category. Integrative analysis of these data sources will lead to a more complete understanding of how biological entities (e.g., genes and proteins) coordinately perform their biological functions in biological systems. In this paper, we introduce a novel network-based approach to extract functional knowledge from time-dependent biological processes at a system level using time course mRNA sequencing data in zebrafish embryo development. The proposed method was applied to a public zebrafish time course mRNA-Seq dataset, containing two different treatments along four time points, to investigate 1α, 25(OH)2D3-altered mechanisms in zebrafish embryo development. We constructed networks between gene ontology biological process categories, which were enriched in differentially expressed genes between consecutive time points and different conditions. The temporal propagation of 1α, 25-Dihydroxyvitamin D3-altered transcriptional changes started from a few genes that were altered at the earliest stage and spread to large groups of biologically coherent genes at later stages. The most notable biological processes included neuronal and retinal development and generalized stress response. In addition, we also investigated the relationship among biological processes enriched in co-expressed genes under different conditions. The enriched biological processes include translation elongation, nucleosome assembly, and retina development. These network dynamics provide new insights into the impact of 1α, 25-Dihydroxyvitamin D3 treatment on bone and cartilage development. We developed a network-based approach to analyzing the DEGs at different time points by integrating molecular interactions and gene ontology information. These results demonstrate that the proposed approach can provide insight into the molecular mechanisms taking place in vertebrate embryo development upon treatment with 1α, 25(OH)2D3. Our approach enables the monitoring of biological processes that can serve as a basis for generating new testable hypotheses. Such a network-based integration approach can be easily extended to any temporal- or condition-dependent genomic data analyses.
The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes
NASA Astrophysics Data System (ADS)
Bhatnagar, S.; Cornwell, T. J.
2017-11-01
This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions for an antenna Shape SelfCal algorithm for real-time tracking and corrections for pointing offsets and changes in antenna shape.
Zaylaa, Amira; Charara, Jamal; Girault, Jean-Marc
2015-08-01
The analysis of biomedical signals demonstrating complexity through recurrence plots is challenging. Quantification of recurrences is often biased by sojourn points that hide dynamic transitions. To overcome this problem, time series have previously been embedded at high dimensions. However, no one has quantified the elimination of sojourn points and the rate of detection, nor has the enhancement of transition detection been investigated. This paper reports our ongoing efforts to improve the detection of dynamic transitions from logistic maps and fetal hearts by reducing sojourn points. Three signal-based recurrence plots were developed: embedded with specific settings, derivative-based, and m-time pattern. Determinism, cross-determinism and the percentage of reduced sojourn points were computed to detect transitions. For logistic maps, an increase of 50% and 34.3% in sensitivity of detection over alternatives was achieved by the m-time pattern and the embedded recurrence plots with specific settings, respectively, with 100% specificity. For fetal heart rates, embedded recurrence plots with specific settings provided the best performance, followed by the derivative-based recurrence plot, then the unembedded recurrence plot using the determinism parameter. The relative errors between healthy and distressed fetuses were 153%, 95% and 91%. More than 50% of sojourn points were eliminated, allowing better detection of heart transitions triggered by gaseous exchange factors. This could be significant in improving the diagnosis of fetal state. Copyright © 2014 Elsevier Ltd. All rights reserved.
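A minimal sketch of the determinism (DET) measure referenced above: build a recurrence plot from a time-delay embedding and compute the fraction of recurrence points lying on diagonal lines of length at least lmin. The embedding dimension, delay, threshold, and lmin below are illustrative settings, not the paper's; sojourn-point reduction is not implemented here.

```python
# Recurrence plot and determinism (DET) for a scalar time series.
import numpy as np

def embed(x, dim=3, tau=1):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def determinism(x, dim=3, tau=1, eps=0.1, lmin=2):
    v = embed(np.asarray(x, float), dim, tau)
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=2)
    rp = (d < eps) & ~np.eye(len(v), dtype=bool)       # drop the identity line
    total = rp.sum()
    det_points = 0
    for k in range(-(len(v) - 1), len(v)):             # every diagonal
        runs = np.diff(np.flatnonzero(np.diff(np.r_[0, rp.diagonal(k), 0])))
        det_points += sum(r for r in runs[::2] if r >= lmin)
    return det_points / total if total else 0.0

# Logistic map in its chaotic regime: high but not total determinism.
x = np.empty(500); x[0] = 0.3
for i in range(499):
    x[i + 1] = 4.0 * x[i] * (1 - x[i])
print(f"DET = {determinism(x):.2f}")
```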
Probing the Reverse Shock in an Oxygen-Rich Supernova Remnant
NASA Technical Reports Server (NTRS)
Gaetz, Terrance; Sonneborn, George (Technical Monitor)
2004-01-01
The aim of this project is to examine the O VI emission at three positions around the X-ray bright ring of the remnant in order to investigate the relation between the O VI emission, the X-ray O VII and O VIII emission, and the optical [O III] emission, and how these vary around the rim of the remnant. All three pointings and the background pointing have now been observed; the archive notification for the most recent dataset was Oct 30, 2003. After reprocessing and screening, the net exposure time for the SE exposure is only 54 percent of the approved time (15 kilosec). For the SE exposure, the available statistics are not good enough for analysis. A request for reobservation to make up for the lost time in the SE pointing has been approved. Broad O VI 1032 and O VI 1038 emission is detected, with a velocity width of at least 800 km/s and possibly exceeding 1000 km/s. The Flanagan et al. analysis of the Chandra grating data shows bulk velocities in the X-ray gas of order +/- 1000 km/s. In the region of the FUSE E0102-SE pointing, the Chandra data indicate both blue-shifted and red-shifted emission. Analysis of the velocity structure of the O VI emission will provide additional constraints on the kinematics of the gas: is it emission from a tilted expanding barrel, or a more symmetric expansion? The O VI fluxes are also needed to assess whether the O VI is radiation from recombining O VII or instead from cooler gas ionizing toward O VII. The emission is faint, however, which complicates the analysis. Because the lines are so broad, absorption by intervening H2, C II, and foreground O VI must be considered. A number of stars in the SMC have been observed which provide information on foreground O VI absorption. The initial analyses have concentrated on the LiF1 channel since the guidance is based on that channel. The LiF2 channel is being examined exposure by exposure to see if any of the data can be used to improve the signal to noise in the LiF1 data.
Reduction of VSC and salivary bacteria by a multibenefit mouthrinse.
Boyd, T; Vazquez, J; Williams, M
2008-03-01
To evaluate the effectiveness of a multibenefit mouthrinse containing 0.05% cetylpyridinium chloride (CPC) and 0.025% sodium fluoride in reducing volatile sulfur compound (VSC) levels and total cultivable salivary bacteria, at both 4 h and overnight. In vitro analysis of efficacy was performed using saliva-coated hydroxyapatite disc substrates first treated with the mouthrinse, then exposed to whole human saliva, followed by overnight incubation in air-tight vials. Headspace VSC was quantified by gas chromatography (GC). A clinical evaluation was conducted with 14 subjects using a crossover design. After a seven-day washout period, baseline clinical measurement of VSC was performed by GC analysis of mouth air sampled in the morning prior to eating, drinking or performing any oral hygiene. A 10 mL saline rinse was used to sample and enumerate cultivable salivary bacterial levels via serial dilution and plating. Subjects were instructed to use the treatment rinse twice daily in combination with a controlled brushing regimen. After one week the subjects returned in the morning, prior to eating, drinking or performing oral hygiene, to provide samples of overnight mouth air and salivary bacteria. The subjects then rinsed immediately with the test product and provided additional mouth air and saliva rinse samples 4 h later. The multibenefit rinse containing 0.05% CPC and 0.025% sodium fluoride was found to reduce VSC in vitro by 52%. The rinse also demonstrated a significant clinical reduction in breath VSC (p < 0.05) of 55.8% at 4 h and 23.4% overnight relative to baseline VSC levels. At both time points, the multibenefit rinse was more effective than the control; this difference was statistically significant at the overnight time point (p < 0.05). Total cultivable salivary bacteria levels were also reduced significantly (p < 0.05) at 4 h and overnight by this mouthrinse compared to baseline levels and the control. A multibenefit mouthrinse was shown to reduce VSC levels in vitro via headspace analysis and clinically at the 4 h and overnight time points. A significant reduction in total cultivable salivary bacteria was also observed at all time points, supporting the VSC data.
Alar-columellar and lateral nostril changes following tongue-in-groove rhinoplasty.
Shah, Ajul; Pfaff, Miles; Kinsman, Gianna; Steinbacher, Derek M
2015-04-01
Repositioning the medial crura cephalically onto the caudal septum (tongue-in-groove; TIG) allows alteration of the columella, ala, and nasal tip to address alar-columellar disproportion as seen from the lateral view. To date, quantitative analysis of nostril dimension, the alar-columellar relationship, and nasal tip changes following the TIG rhinoplasty technique has not been described. The present study aims to evaluate post-operative lateral morphometric changes following TIG. Pre- and post-operative lateral views of a series of consecutive patients who underwent TIG rhinoplasty were produced from 3D images at multiple time points (≤2 weeks, 4-10 weeks, and >10 weeks post-operatively) for analysis. The 3D images were converted to 2D and set to scale. Exposed lateral nostril area, alar-columellar disproportion (divided into superior and inferior heights), nasolabial angle, nostril height, and nostril length were calculated and statistically analyzed using a pairwise t test. A P ≤ 0.05 was considered statistically significant. Ninety-four lateral views were analyzed from 20 patients (16 females; median age: 31.8). One patient had a history of current tobacco cigarette use. Lateral nostril area decreased significantly at all post-operative time points. Alar-columellar disproportion was reduced following TIG at all time points. The nasolabial angle increased significantly at ≤2 weeks, 4-10 weeks, and >10 weeks post-operatively. Nostril height and nostril length decreased at all post-operative time points. Morphometric analysis reveals a reduction in alar-columellar disproportion and lateral nostril show following TIG rhinoplasty. Tip rotation, as a function of the nasolabial angle, also increased. These results provide quantitative substantiation for qualitative descriptions attributed to the TIG technique. Future studies will focus on area and volumetric measurements, and assessment of long-term stability. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Croxford, Adam E; Rogers, Tom; Caligari, Peter D S; Wilkinson, Michael J
2008-01-01
The provision of sequence-tagged site (STS) anchor points allows meaningful comparisons between mapping studies but can be a time-consuming process for nonmodel species or orphan crops. Here, the first use of high-resolution melt analysis (HRM) to generate STS markers for use in linkage mapping is described. This strategy is rapid and low-cost, and circumvents the need for labelled primers or amplicon fractionation. Using white lupin (Lupinus albus, x = 25) as a case study, HRM analysis was applied to identify 91 polymorphic markers from expressed sequence tag (EST)-derived and genomic libraries. Of these, 77 generated STS anchor points in the first fully resolved linkage map of the species. The map also included 230 amplified fragment length polymorphism (AFLP) loci, spanned 1916 cM (84.2% coverage) and divided into the expected 25 linkage groups. Quantitative trait loci (QTL) analyses performed on the population revealed genomic regions associated with several traits, including the agronomically important time to flowering (tf), alkaloid synthesis and stem height (Ph). Use of HRM-STS markers also allowed us to make direct comparisons between our map and that of the related crop, Lupinus angustifolius, based on the conversion of RFLP, microsatellite and single nucleotide polymorphism (SNP) markers into HRM markers.
Vaughn, Amber E; Dearth-Wesley, Tracy; Tabak, Rachel G; Bryant, Maria; Ward, Dianne S
2017-02-01
Parents' food parenting practices influence children's dietary intake and risk for obesity and chronic disease. Understanding the influence and interactions between parents' practices and children's behavior is limited by a lack of development and psychometric testing and/or limited scope of current measures. The Home Self-Administered Tool for Environmental Assessment of Activity and Diet (HomeSTEAD) was created to address this gap. This article describes development and psychometric testing of the HomeSTEAD family food practices survey. Between August 2010 and May 2011, a convenience sample of 129 parents of children aged 3 to 12 years were recruited from central North Carolina and completed the self-administered HomeSTEAD survey on three occasions during a 12- to 18-day window. Demographic characteristics and child diet were assessed at Time 1. Child height and weight were measured during the in-home observations (following Time 1 survey). Exploratory factor analysis with Time 1 data was used to identify potential scales. Scales with more than three items were examined for scale reduction. Following this, mean scores were calculated at each time point. Construct validity was assessed by examining Spearman rank correlations between mean scores (Time 1) and children's diet (fruits and vegetables, sugar-sweetened beverages, snacks, sweets) and body mass index (BMI) z scores. Repeated measures analysis of variance was used to examine differences in mean scores between time points, and single-measure intraclass correlations were calculated to examine test-retest reliability between time points. Exploratory factor analysis identified 24 factors and retained 124 items; however, scale reduction narrowed items to 86. The final instrument captures five coercive control practices (16 items), seven autonomy support practices (24 items), and 12 structure practices (46 items). All scales demonstrated good internal reliability (α>.62), 18 factors demonstrated construct validity (significant association with child diet, P<0.05), and 22 demonstrated good reliability (intraclass correlation coefficient>0.61). The HomeSTEAD family food practices survey provides a brief, yet comprehensive and psychometrically sound assessment of food parenting practices. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
A Comprehensive Analysis of the Quality of Online Health-Related Information regarding Schizophrenia
ERIC Educational Resources Information Center
Guada, Joseph; Venable, Victoria
2011-01-01
Social workers are major mental health providers and, thus, can be key players in guiding consumers and their families to accurate information regarding schizophrenia. The present study, using the WebMedQual scale, is a comprehensive analysis across a one-year period at two different time points of the top for-profit and nonprofit sites that…
Simpao, Allan F; Tan, Jonathan M; Lingappan, Arul M; Gálvez, Jorge A; Morgan, Sherry E; Krall, Michael A
2017-10-01
Anesthesia information management systems (AIMS) are sophisticated hardware and software technology solutions that can provide electronic feedback to anesthesia providers. This feedback can be tailored to provide clinical decision support (CDS) to aid clinicians with patient care processes, documentation compliance, and resource utilization. We conducted a systematic review of peer-reviewed articles on near real-time and point-of-care CDS within AIMS using the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols. Studies were identified by searches of the electronic databases Medline and EMBASE. Two reviewers screened studies based on title, abstract, and full text. Studies that were similar in intervention and desired outcome were grouped into CDS categories. Three reviewers graded the evidence within each category. The final analysis included 25 articles on CDS as implemented within AIMS. CDS categories included perioperative antibiotic prophylaxis, post-operative nausea and vomiting prophylaxis, vital sign monitors and alarms, glucose management, blood pressure management, ventilator management, clinical documentation, and resource utilization. Of these categories, the reviewers graded perioperative antibiotic prophylaxis and clinical documentation as having strong evidence per the peer reviewed literature. There is strong evidence for the inclusion of near real-time and point-of-care CDS in AIMS to enhance compliance with perioperative antibiotic prophylaxis and clinical documentation. Additional research is needed in many other areas of AIMS-based CDS.
Hepatic gene expression patterns following trauma-hemorrhage: effect of posttreatment with estrogen.
Yu, Huang-Ping; Pang, See-Tong; Chaudry, Irshad H
2013-01-01
The aim of this study was to examine the role of estrogen on hepatic gene expression profiles at an early time point following trauma-hemorrhage in rats. Groups of injured and sham controls receiving estrogen or vehicle were killed 2 h after injury and resuscitation, and liver tissue was harvested. Complementary RNA was synthesized from each RNA sample and hybridized to microarrays. A large number of genes were differentially expressed at the 2-h time point in injured animals with or without estrogen treatment. The upregulation or downregulation of a cohort of 14 of these genes was validated by reverse transcription-polymerase chain reaction. This large-scale microarray analysis shows that at the 2-h time point, there is marked alteration in hepatic gene expression following trauma-hemorrhage. However, estrogen treatment attenuated these changes in injured animals. Pathway analysis demonstrated predominant changes in the expression of genes involved in metabolism, immunity, and apoptosis. Upregulation of low-density lipoprotein receptor, protein phosphatase 1, regulatory subunit 3C, ring-finger protein 11, pyroglutamyl-peptidase I, bactericidal/permeability-increasing protein, integrin, αD, BCL2-like 11, leukemia inhibitory factor receptor, ATPase, Cu transporting, α polypeptide, and Mk1 protein was found in estrogen-treated trauma-hemorrhaged animals. Thus, estrogen produces hepatoprotection following trauma-hemorrhage likely via antiapoptosis and improving/restoring metabolism and immunity pathways.
Middendorf, Jill M; Shortkroff, Sonya; Dugopolski, Caroline; Kennedy, Stephen; Siemiatkoski, Joseph; Bartell, Lena R; Cohen, Itai; Bonassar, Lawrence J
2017-11-07
Many studies have measured the global compressive properties of tissue engineered (TE) cartilage grown on porous scaffolds. Such scaffolds are known to exhibit strain softening due to local buckling under loading. As matrix is deposited onto these scaffolds, the global compressive properties increase. However, the relationship between the amount and distribution of matrix in the scaffold and local buckling is unknown. To address this knowledge gap, we studied how local strain and construct buckling in human TE constructs change over culture time and GAG content. Confocal elastography techniques and digital image correlation (DIC) were used to measure and record buckling modes and local strains. Receiver operating characteristic (ROC) curves were used to quantify construct buckling. The results from the ROC analysis were placed into Kaplan-Meier survival function curves to establish the probability that any point in a construct buckled. These analysis techniques revealed the presence of buckling at early time points, but bending at later time points. An inverse correlation was observed between the probability of buckling and the total GAG content of each construct. These data suggest that increased GAG content prevents the onset of construct buckling and improves the microscale compressive tissue properties. This increase in GAG deposition leads to enhanced global compressive properties by preventing microscale buckling.  Copyright © 2017 Elsevier Ltd. All rights reserved.
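The survival-style summary above can be sketched with a plain Kaplan-Meier product-limit estimator, treating applied strain as the "time" axis: a point's event is its first observed buckling, and points that never buckle are right-censored at the last strain level. The event data below are hypothetical, and this generic estimator is a stand-in for the study's specific ROC-to-Kaplan-Meier pipeline.

```python
# Kaplan-Meier estimate of P(point has not yet buckled) versus applied strain.
import numpy as np

def kaplan_meier(event_strain, buckled):
    """event_strain: strain at buckling (or last strain level if censored)."""
    order = np.argsort(event_strain)
    strain = np.asarray(event_strain)[order]
    buckled = np.asarray(buckled)[order]
    at_risk, surv, curve = len(strain), 1.0, []
    for s in np.unique(strain):
        d = int(((strain == s) & buckled).sum())     # buckling events at s
        surv *= 1.0 - d / at_risk
        curve.append((s, surv))
        at_risk -= int((strain == s).sum())          # events + censored leave
    return curve

strains = [0.02, 0.02, 0.04, 0.04, 0.06, 0.08, 0.10, 0.10, 0.10, 0.10]
buckled = [True, True, True, False, True, True, False, False, False, False]
for s, p in kaplan_meier(strains, buckled):
    print(f"strain {s:.2f}: P(not yet buckled) = {p:.2f}")
```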
NASA Astrophysics Data System (ADS)
Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert
2014-05-01
LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. This allows efficient visual interpretation of the point cloud in planar, profile and 3D views since it reduces crowding of the scene and delivers intuitive contextual information. The resulting visualization has proved useful for vegetation analysis for habitat mapping, and can also be applied as a first step for point cloud level classification. An interactive demonstration of the visualization script is shown during poster attendance, including the opportunity to view your own point cloud sample files.
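The colour-coding scheme lends itself to a short sketch: rescale each attribute to an 8-bit channel with percentile clipping and assign amplitude, echo width, and normalized height to red, green, and blue. The attribute arrays below are simulated stand-ins for real point-cloud data, and writing the triples back into a LAS/LAZ file's RGB fields is left to whatever point-cloud library is in use (the original authors worked in OPALS).

```python
# Map three point attributes to RGB with robust (percentile) scaling.
import numpy as np

def to_channel(values, lo_pct=2, hi_pct=98):
    """Rescale an attribute to 0-255 (uint8), clipping outliers."""
    lo, hi = np.percentile(values, [lo_pct, hi_pct])
    scaled = np.clip((values - lo) / (hi - lo), 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)

rng = np.random.default_rng(6)
amplitude = rng.gamma(2.0, 40.0, 10_000)      # echo amplitude (simulated)
echo_width = rng.normal(4.0, 0.8, 10_000)     # echo width, ns (simulated)
height = rng.exponential(3.0, 10_000)         # height above DTM, m (simulated)

red, green, blue = map(to_channel, (amplitude, echo_width, height))
rgb = np.column_stack([red, green, blue])     # one colour triple per point
print(rgb[:3])
```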
ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density
Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro
2018-01-01
The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution, because it provides a good fit to many empirical datasets. The complexity of this distribution makes the use of computational tools essential. Therefore, there is a strong need for efficient and versatile computational tools for research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools written in Python and developed for the numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages of and differences between the least squares and maximum likelihood methods, and quantitatively evaluate the goodness of the obtained fits (a point usually overlooked in most of the literature in the area). The analysis allows one to identify outliers in empirical datasets and to determine judiciously whether data trimming is needed and at which points it should be done. PMID:29765345
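ExGUtils provides its own fitting routines; as a minimal, package-independent sketch of the ex-Gaussian maximum-likelihood fit discussed above, SciPy's exponnorm parametrization (K = tau/sigma) can be used. The simulated reaction times and the Kolmogorov-Smirnov check below are illustrative assumptions, not the paper's analysis.

```python
import numpy as np
from scipy import stats

# Simulate reaction times (ms): Gaussian component plus an exponential tail.
rng = np.random.default_rng(0)
rt = rng.normal(400, 40, 1000) + rng.exponential(120, 1000)

# SciPy parametrizes the ex-Gaussian as exponnorm with K = tau / sigma.
K, loc, scale = stats.exponnorm.fit(rt)   # maximum-likelihood fit
mu, sigma, tau = loc, scale, K * scale    # recover the usual (mu, sigma, tau)

# Goodness of fit via Kolmogorov-Smirnov against the fitted distribution.
ks_stat, p_value = stats.kstest(rt, stats.exponnorm(K, loc, scale).cdf)
print(mu, sigma, tau, ks_stat, p_value)
```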
A new similarity index for nonlinear signal analysis based on local extrema patterns
NASA Astrophysics Data System (ADS)
Niknazar, Hamid; Motie Nasrabadi, Ali; Shamsollahi, Mohammad Bagher
2018-02-01
Common similarity measures for time domain signals, such as cross-correlation and Symbolic Aggregate approximation (SAX), are not appropriate for nonlinear signal analysis because of the high sensitivity of nonlinear systems to initial points. A similarity measure for nonlinear signal analysis must therefore be invariant to initial points and quantify similarity by considering the main dynamics of the signals. The statistical behavior of local extrema (SBLE) method was previously proposed to address this problem. The SBLE similarity index uses quantized amplitudes of local extrema to quantify the dynamical similarity of signals by considering patterns of sequential local extrema. By adding the time information of local extrema and fuzzifying the quantized values, this work proposes a new similarity index for nonlinear and long-term signal analysis that extends the SBLE method. These new features provide more information about the signals and, through fuzzification, reduce sensitivity to noise. A number of practical tests on synthetic data demonstrate the ability of the method in nonlinear signal clustering and classification. In addition, epileptic seizure detection based on electroencephalography (EEG) signal processing was performed using the proposed similarity index to demonstrate the method's potential as a real-world application tool.
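A minimal sketch of the local-extrema-pattern idea is given below, assuming a simplified variant: extrema amplitudes are quantized and short patterns of successive extrema are counted, with similarity taken as the histogram intersection of the pattern distributions. The fuzzification and extrema-timing features that distinguish the proposed index from SBLE are omitted here.

```python
import numpy as np
from scipy.signal import argrelextrema
from collections import Counter

def extrema_pattern_histogram(x, n_levels=8, word_len=3):
    """Quantize local-extrema amplitudes and count patterns of successive extrema."""
    x = np.asarray(x, dtype=float)
    idx = np.sort(np.concatenate([argrelextrema(x, np.greater)[0],
                                  argrelextrema(x, np.less)[0]]))
    amp = x[idx]
    edges = np.linspace(amp.min(), amp.max(), n_levels - 1)
    levels = np.digitize(amp, edges)          # quantized amplitude level per extremum
    words = [tuple(levels[i:i + word_len]) for i in range(len(levels) - word_len + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def similarity(x, y, **kw):
    """Histogram intersection of the two extrema-pattern distributions."""
    hx, hy = extrema_pattern_histogram(x, **kw), extrema_pattern_histogram(y, **kw)
    return sum(min(hx[w], hy.get(w, 0.0)) for w in hx)
```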
ERIC Educational Resources Information Center
Lucas, John A.
Analysis of the transcripts of 200 full-time and 200 part-time beginning traditional credit students randomly sampled from the population of students entering each fall from 1967 to 1975 at William Rainey Harper College indicated that: (1) overall student grade point average rose in direct relationship to changes in grading policy; (2) the grade…
This poster will present a modeling and mapping assessment of landscape sensitivity to non-point source pollution as applied to a hierarchy of catchment drainages in the Coastal Plain of the state of North Carolina. Analysis of the subsurface residence time of water in shallow a...
The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data
NASA Astrophysics Data System (ADS)
Markiewicz, Jakub Stefan
2016-06-01
The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are treated as panoramic images enriched by a depth map. Computer vision (CV) algorithms are used for the orientation; they are evaluated for the correctness of tie-point detection, the time of computation, and the difficulties of their implementation. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.
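For readers unfamiliar with the CV side, the sketch below shows tie-point detection between two scan panoramas with SIFT and Lowe's ratio test in OpenCV. The file names and matcher settings are assumptions; the paper's full pipeline (depth maps, photogrammetric adjustment, the other detectors) is not reproduced.

```python
import cv2

# Load two panoramic intensity images rendered from TLS scans (hypothetical paths).
img1 = cv2.imread("scan_station1_panorama.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scan_station2_panorama.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching (Lowe's criterion) to obtain candidate tie points.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} candidate tie points")
```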
Rajtmajer, Sarah M; Roy, Arnab; Albert, Reka; Molenaar, Peter C M; Hillary, Frank G
2015-01-01
Despite exciting advances in the functional imaging of the brain, it remains a challenge to define regions of interest (ROIs) that do not require investigator supervision and permit examination of change in networks over time (or plasticity). Plasticity is most readily examined by maintaining ROIs constant via seed-based and anatomical-atlas based techniques, but these approaches are not data-driven, requiring definition based on prior experience (e.g., choice of seed-region, anatomical landmarks). These approaches are limiting especially when functional connectivity may evolve over time in areas that are finer than known anatomical landmarks or in areas outside predetermined seeded regions. An ideal method would permit investigators to study network plasticity due to learning, maturation effects, or clinical recovery via multiple time point data that can be compared to one another in the same ROI while also preserving the voxel-level data in those ROIs at each time point. Data-driven approaches (e.g., whole-brain voxelwise approaches) ameliorate concerns regarding investigator bias, but the fundamental problem of comparing the results between distinct data sets remains. In this paper we propose an approach, aggregate-initialized label propagation (AILP), which allows for data at separate time points to be compared for examining developmental processes resulting in network change (plasticity). To do so, we use a whole-brain modularity approach to parcellate the brain into anatomically constrained functional modules at separate time points and then apply the AILP algorithm to form a consensus set of ROIs for examining change over time. To demonstrate its utility, we make use of a known dataset of individuals with traumatic brain injury sampled at two time points during the first year of recovery and show how the AILP procedure can be applied to select regions of interest to be used in a graph theoretical analysis of plasticity.
The a(3) Scheme--A Fourth-Order Space-Time Flux-Conserving and Neutrally Stable CESE Solver
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2008-01-01
The CESE development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a non-dissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To initiate a systematic CESE development of high order schemes, in this paper we provide a thorough discussion of the structure, consistency, stability, phase error, and accuracy of a new 4th-order space-time flux-conserving and neutrally stable CESE solver of a 1D scalar advection equation. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and three points at the lower time level. Because it is associated with three independent mesh variables (the numerical analogues of the dependent variable and its 1st-order and 2nd-order spatial derivatives, respectively) and three equations per mesh point, the new scheme is referred to as the a(3) scheme. Through the von Neumann analysis, it is shown that the a(3) scheme is stable if and only if the Courant number is less than 0.5. Moreover, it is established numerically that the a(3) scheme is 4th-order accurate.
Spectral reconstruction analysis for enhancing signal-to-noise in time-resolved spectroscopies
NASA Astrophysics Data System (ADS)
Wilhelm, Michael J.; Smith, Jonathan M.; Dai, Hai-Lung
2015-09-01
We demonstrate a new spectral analysis for the enhancement of the signal-to-noise ratio (SNR) in time-resolved spectroscopies. Unlike the simple linear average which produces a single representative spectrum with enhanced SNR, this Spectral Reconstruction analysis (SRa) improves the SNR (by a factor of ca. 0.6√n) for all n experimentally recorded time-resolved spectra. SRa operates by eliminating noise in the temporal domain, thereby attenuating noise in the spectral domain, as follows: Temporal profiles at each measured frequency are fit to a generic mathematical function that best represents the temporal evolution; spectra at each time are then reconstructed with data points from the fitted profiles. The SRa method is validated with simulated control spectral data sets. Finally, we apply SRa to two distinct experimentally measured sets of time-resolved IR emission spectra: (1) UV photolysis of carbonyl cyanide and (2) UV photolysis of vinyl cyanide.
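A minimal sketch of the SRa idea follows, assuming a single-exponential temporal model; the paper fits whatever generic function best represents the evolution. Each frequency's temporal profile is fitted, and the spectra are then rebuilt from the fitted profiles.

```python
import numpy as np
from scipy.optimize import curve_fit

def sra(spectra, times, model=None, p0=None):
    """Spectral Reconstruction analysis sketch: fit the temporal profile at each
    frequency, then rebuild every time-resolved spectrum from the fitted profiles.
    spectra: 2-D array of shape (n_times, n_freqs)."""
    if model is None:
        # Assumed profile: single-exponential decay plus offset.
        model = lambda t, a, k, c: a * np.exp(-k * t) + c
        p0 = (1.0, 1.0, 0.0)
    recon = np.empty_like(spectra, dtype=float)
    for j in range(spectra.shape[1]):
        try:
            popt, _ = curve_fit(model, times, spectra[:, j], p0=p0, maxfev=5000)
            recon[:, j] = model(times, *popt)
        except RuntimeError:            # fit failed: keep the raw profile
            recon[:, j] = spectra[:, j]
    return recon
```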
Zanderigo, Francesca; Sparacino, Giovanni; Kovatchev, Boris; Cobelli, Claudio
2007-09-01
The aim of this article was to use continuous glucose error-grid analysis (CG-EGA) to assess the accuracy of two time-series modeling methodologies recently developed to predict glucose levels ahead of time using continuous glucose monitoring (CGM) data. We considered subcutaneous time series of glucose concentration monitored every 3 minutes for 48 hours by the minimally invasive CGM sensor Glucoday® (Menarini Diagnostics, Florence, Italy) in 28 type 1 diabetic volunteers. Two prediction algorithms, based on first-order polynomial and autoregressive (AR) models, respectively, were considered with prediction horizons of 30 and 45 minutes and forgetting factors (ff) of 0.2, 0.5, and 0.8. CG-EGA was used on the predicted profiles to assess their point and dynamic accuracies using the original CGM profiles as reference. Continuous glucose error-grid analysis showed that the accuracy of both prediction algorithms is overall very good and that their performance is similar from a clinical point of view. However, the AR model seems preferable for hypoglycemia prevention. CG-EGA also suggests that, irrespective of the time-series model, the use of ff = 0.8 yields the most accurate readings in all glucose ranges. For the first time, CG-EGA is proposed as a tool to assess the clinically relevant performance of a prediction method separately at hypoglycemia, euglycemia, and hyperglycemia. In particular, we have shown that CG-EGA can be helpful in comparing different prediction algorithms, as well as in optimizing their parameters.
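A minimal sketch of one of the two predictors, assuming a first-order AR model fitted by weighted least squares with an exponential forgetting factor; with samples every 3 minutes, 10 steps correspond to the 30-minute horizon. This illustrates the principle only and is not the authors' exact algorithm.

```python
import numpy as np

def ar1_forecast(y, ff=0.8, steps=10):
    """First-order AR forecast with exponential forgetting (a sketch).
    y: CGM samples every 3 min; steps=10 -> 30 minutes ahead."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    w = ff ** np.arange(n - 1, -1, -1)      # older samples weigh less
    mu = np.average(y, weights=w)           # weighted mean removed before fitting
    z = y - mu
    num = np.sum(w[1:] * z[:-1] * z[1:])
    den = np.sum(w[1:] * z[:-1] ** 2)
    a = num / den                           # weighted LS estimate of the AR(1) coefficient
    pred = z[-1]
    for _ in range(steps):                  # iterate the one-step model to the horizon
        pred = a * pred
    return mu + pred
```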
Integrating Point-of-Care Testing into a Community Emergency Department: A Mixed-Methods Evaluation.
Pines, Jesse M; Zocchi, Mark S; Carter, Caitlin; Marriott, Charles Z; Bernard, Matthew; Warner, Leah H
2018-05-13
Point-of-care testing (POCT) is a commonly used technology that hastens the time to laboratory results in emergency departments (ED). We evaluated an ED-based POCT program on ED length of stay and time to care, coupled with qualitative interviews of local ED stakeholders. We conducted a mixed-methods study (2012-16) to examine the impact of point-of-care testing in a single, community ED. The quantitative analysis involved an observational before-after study comparing time to laboratory test result (POC troponin or POC chemistry) and ED length of stay after implementation of POCT, using a propensity-weighted interrupted time series analysis (ITSA). A complementary qualitative analysis involved five semi-structured interviews with staff using grounded theory on the benefits and challenges of ED POCT. A total of 47,399 ED visits were included in the study (24,705 in the pre-intervention period and 22,694 post-intervention). After POCT implementation, overall laboratory testing increased marginally from 61 to 62%. Central laboratory troponin and chemistry declined by >50% and were replaced by POCT. Prior to POCT implementation, time to troponin and chemistry had declined steadily due to other improvements in laboratory efficiency. After POCT implementation, there was an immediate 20-minute further decline (p<0.001) in both time to troponin and time to chemistry results using the propensity-weighted comparisons. However, the declining trend observed prior to POCT implementation did not continue at the same rate post implementation. Similarly, prior to POCT implementation, ED length of stay (LOS) declined due to other quality improvements. After POCT implementation, LOS continued to decline at a similar rate. Because of this prior trend, the ITSA did not show a significant decline in LOS attributable to POCT. Common benefits of POCT perceived by staff in qualitative interviews included improved quality of care (64%) and reductions in time to test results (44%). Common challenges included concerns over POCT accuracy (32%) and technical barriers (29%). In the study ED, implementation of POCT was associated with a reduction in time to test result for both troponin and chemistry. Local staff felt that faster time to test result improved quality of care; however, concerns were raised about POCT accuracy. This article is protected by copyright. All rights reserved.
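The segmented-regression form of an interrupted time series analysis can be sketched with statsmodels, using simulated monthly turnaround times. The level-change ("post") and trend-change ("months_post") terms mirror the immediate drop and slope change reported above; the propensity weighting used in the study is omitted, and all numbers are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated monthly mean time-to-result (minutes), intervention at month 24.
rng = np.random.default_rng(1)
df = pd.DataFrame({"month": np.arange(48)})
df["post"] = (df["month"] >= 24).astype(int)
df["months_post"] = np.maximum(df["month"] - 24, 0)
df["tat"] = (90 - 0.8 * df["month"] - 20 * df["post"]
             + 0.5 * df["months_post"] + rng.normal(0, 3, 48))

X = sm.add_constant(df[["month", "post", "months_post"]])
fit = sm.OLS(df["tat"], X).fit()
print(fit.params)  # 'post' = immediate level change; 'months_post' = trend change
```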
NASA Astrophysics Data System (ADS)
Preusker, Frank; Scholten, Frank; Matz, Klaus-Dieter; Roatsch, Thomas; Willner, Konrad; Hviid, Stubbe; Knollenberg, Jörg; Kührt, Ekkehard; Sierks, Holger
2015-04-01
The European Space Agency's Rosetta spacecraft is equipped with the OSIRIS imaging system, which consists of a wide-angle and a narrow-angle camera (WAC and NAC). After the approach phase, Rosetta was inserted into a descent trajectory of comet 67P/Churyumov-Gerasimenko (C-G) in early August 2014. Until early September, OSIRIS acquired several hundred NAC images of C-G's surface at different scales (from ~5 m/pixel during approach to ~0.9 m/pixel during descent). In that one-month observation period, the surface was imaged several times within different mapping sequences. With the comet's rotation period of ~12.4 h and the low spacecraft velocity (< 1 m/s), the entire NAC dataset provides multiple NAC stereo coverage, adequate for stereo-photogrammetric (SPG) analysis towards the derivation of 3D surface models. We constrained the OSIRIS NAC images with our stereo requirements (15° < stereo angles < 45°, incidence angles <85°, emission angles <45°, differences in illumination < 10°, scale better than 5 m/pixel) and extracted about 220 NAC images that provide at least triple stereo image coverage for the entire illuminated surface in about 250 independent multi-stereo image combinations. For each image combination we determined tie points by multi-image matching in order to set up a 3D control network and a dense surface point cloud for the precise reconstruction of C-G's shape. The control point network defines the input for a stereo-photogrammetric least squares adjustment. Based on the statistical analysis of adjustments, we first refined C-G's rotational state (pole orientation and rotational period) and its behavior over time. Based upon this description of the orientation of C-G's body-fixed reference frame, we derived corrections for the nominal navigation data (pointing and position) within a final stereo-photogrammetric block adjustment where the mean 3D point accuracy of more than 100 million surface points has been improved from ~10 m to the sub-meter range. We finally applied point filtering and interpolation techniques to these surface 3D points and show the resulting SPG-based 3D surface model with a lateral sampling rate of about 2 m.
Analysis of Roll Steering for Solar Electric Propulsion Missions
NASA Technical Reports Server (NTRS)
Pederson, Dylan, M.; Hojnicki, Jeffrey, S.
2012-01-01
Nothing is more vital to a spacecraft than power. Solar Electric Propulsion (SEP) uses that power to provide a safe, reliable, and, most importantly, fuel efficient means to propel a spacecraft to its destination. The power performance of an SEP vehicle's solar arrays and electrical power system (EPS) is largely influenced by the environment in which the spacecraft is operating. One of the most important factors that determines solar array power performance is how directly the arrays are pointed to the sun. To get the most power from the solar arrays, the obvious solution is to point them directly at the sun at all times. Doing so is not a problem in deep space, as the environment and pointing conditions that a spacecraft faces are fairly constant and are easy to accommodate, if necessary. However, large and sometimes rapid variations in environmental and pointing conditions are experienced by Earth orbiting spacecraft. SEP spacecraft also have the additional constraint of needing to keep the thrust vector aligned with the velocity vector. Thus, it is important to analyze solar array power performance for any vehicle that spends an extended amount of time orbiting the Earth, and to determine how much off-pointing can be tolerated to produce the required power for a given spacecraft. This paper documents the benefits and drawbacks of perfectly pointing the solar arrays of an SEP spacecraft spiraling from Earth orbit, and how this might be accomplished. Benefits and drawbacks are defined in terms of vehicle mass, power, volume, complexity, and cost. This paper will also look at the application of various solar array pointing methods to future missions. One such pointing method of interest is called roll steering. Roll steering involves rolling the entire vehicle twice each orbit. Roll steering, combined with solar array gimbal tracking, is used to point the solar arrays perfectly towards the sun at all points in the orbit, while keeping the vehicle thrusters aligned in the velocity direction. Roll steering is particularly attractive for a recently proposed mission that involves a spiral trajectory from low Earth orbit (LEO) to the Earth-Moon Lagrange Point 1 (E-M L1). During the spiral, the spacecraft will spend over 300 days experiencing the full spectrum of near-earth environments and solar array pointing conditions. An extensive study of the application of SEP (and roll steering) to this spiral mission is included, highlighting the ultimate goal of reduced vehicle cost and mass. Tools used for this analysis include the Systems Power Analysis for Capability Evaluation (SPACE) electrical power systems code (Refs. 1 and 2), and SEP trajectory simulation tools developed at NASA Glenn Research Center.
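The geometry behind roll steering can be sketched as follows: with the body x-axis held along the velocity (thrust) direction, a roll about x brings the sun vector into the plane swept by the array gimbal, and the gimbal then removes the remaining angle. The axis conventions and signs below are assumptions for illustration, not the mission's actual attitude definitions.

```python
import numpy as np

def roll_and_gimbal_angles(sun_body):
    """Given the sun unit vector in body axes (x = thrust/velocity direction),
    return the roll angle about x that brings the sun into the x-z plane, and the
    array gimbal angle about y that then points the array normal (+z) at the sun."""
    sx, sy, sz = sun_body / np.linalg.norm(sun_body)
    roll = np.arctan2(sy, sz)                  # zero out the sun's body-y component
    gimbal = np.arctan2(sx, np.hypot(sy, sz))  # remaining in-plane rotation
    return np.degrees(roll), np.degrees(gimbal)

# Example: sun well off the velocity direction and out of the orbit plane.
print(roll_and_gimbal_angles(np.array([0.5, 0.6, 0.62])))
```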
Castigliego, Lorenzo; Armani, Andrea; Tinacci, Lara; Gianfaldoni, Daniela; Guidi, Alessandra
2015-01-01
Anglerfish (Lophius spp.) is consumed worldwide and is an important economic resource, though its seven species are often fraudulently interchanged due to their different commercial value, especially when sold in the form of fillets or pieces. Molecular analysis is the only possible means to verify traceability and counteract fraud. We developed two multiplex PCRs, one end-point and one real-time with melting curve post-amplification analysis, which can even be run with the simplest two-channel thermocyclers. The two methods were tested on seventy-five reference samples. Their specificity was checked in twenty more species among those most commonly available on the market and in other species of the Lophiidae family. Both methods, the choice of which depends on the equipment and budget of the lab, provide a rapid and easy-to-read response, improving both the simplicity and cost-effectiveness of existing methods for identifying Lophius species. Copyright © 2014 Elsevier Ltd. All rights reserved.
Chirp-Z analysis for sol-gel transition monitoring.
Martinez, Loïc; Caplain, Emmanuel; Serfaty, Stéphane; Griesmar, Pascal; Gouedard, Gérard; Gindre, Marcel
2004-04-01
Gelation is a complex reaction that transforms a liquid medium into a solid one: the gel. In the gel state, some gel materials (DMAP) have the singular property of ringing in an audible frequency range when a pulse is applied. Before the gelation point, no transmission of slow waves is observed; after the gelation point, the speed of sound in the gel rapidly increases from 0.1 to 10 m/s. The time evolution of the speed of sound can be measured, in the frequency domain, by following the frequency spacing of the resonance peaks obtained with the Synchronous Detection (SD) measurement method. Unfortunately, due to the constant frequency sampling rate, the relative error for low speeds (0.1 m/s) is 100%. In order to maintain a low, constant relative error over the whole range of the speed's time evolution, the Chirp-Z Transform (CZT) is used. This operation transforms a time-variant signal into a time-invariant one using only a time-dependent stretching factor (S). In the frequency domain, the CZT enables us to stretch each spectrum collected from the time signals. Blind identification of the S factor yields the complete time evolution law of the speed of sound. Moreover, this method shows that the frequency bandwidth follows the same time law. These results point out that the minimum wavelength stays constant and depends only on the gel.
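SciPy exposes the chirp-z transform through its ZoomFFT helper, which can be used to sketch the spectrum-zooming step: resolving closely spaced resonance peaks in a narrow band at arbitrary frequency resolution. The sampling rate and peak frequencies below are made-up values, and the authors' time-dependent stretching of successive spectra is not reproduced.

```python
import numpy as np
from scipy.signal import ZoomFFT

fs = 50_000                          # sampling rate [Hz] (assumed)
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 1200 * t) + np.sin(2 * np.pi * 1325 * t)  # two resonances

# Chirp-z zoom into the 1000-1500 Hz band with 1024 output bins.
m, f1, f2 = 1024, 1000.0, 1500.0
X = ZoomFFT(len(x), [f1, f2], m, fs=fs)(x)
freqs = np.linspace(f1, f2, m, endpoint=False)
print("dominant zoomed bin:", freqs[np.argmax(np.abs(X))], "Hz")
```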
Time-dependent clustering analysis of the second BATSE gamma-ray burst catalog
NASA Technical Reports Server (NTRS)
Brainerd, J. J.; Meegan, C. A.; Briggs, Michael S.; Pendleton, G. N.; Brock, M. N.
1995-01-01
A time-dependent two-point correlation-function analysis of the Burst and Transient Source Experiment (BATSE) 2B catalog finds no evidence of burst repetition. As part of this analysis, we discuss the effects of sky exposure on the observability of burst repetition and present the equation describing the signature of burst repetition in the data. For a model of all burst repetition from a source occurring in less than five days we derive upper limits on the number of bursts in the catalog from repeaters and model-dependent upper limits on the fraction of burst sources that produce multiple outbursts.
NASA Technical Reports Server (NTRS)
Ottino, Julio M.
1991-01-01
Computer flow simulation aided by dynamical systems analysis is used to investigate the kinematics of time-periodic vortex shedding past a two-dimensional circular cylinder in the context of the following general questions: (1) Is a dynamical systems viewpoint useful in the understanding of this and similar problems involving time-periodic shedding behind bluff bodies; and (2) Is it indeed possible, by adopting such a point of view, to complement previous analyses or to understand kinematical aspects of the vortex shedding process that somehow remained hidden in previous approaches. We argue that the answers to these questions are positive. Results are described.
SPACEBAR: Kinematic design by computer graphics
NASA Technical Reports Server (NTRS)
Ricci, R. J.
1975-01-01
The interactive graphics computer program SPACEBAR, conceived to reduce the time and complexity associated with the development of kinematic mechanisms on the design board, was described. This program allows the direct design and analysis of mechanisms right at the terminal screen. All input variables, including linkage geometry, stiffness, and applied loading conditions, can be fed into or changed at the terminal and may be displayed in three dimensions. All mechanism configurations can be cycled through their range of travel and viewed in their various geometric positions. Output data includes geometric positioning in orthogonal coordinates of each node point in the mechanism, velocity and acceleration of the node points, and internal loads and displacements of the node points and linkages. All analysis calculations take at most a few seconds to complete. Output data can be viewed at the scope and also printed at the discretion of the user.
Steelmaking process control using remote ultraviolet atomic emission spectroscopy
NASA Astrophysics Data System (ADS)
Arnold, Samuel
Steelmaking in North America is a multi-billion dollar industry that has faced tremendous economic and environmental pressure over the past few decades. Fierce competition has driven steel manufacturers to improve process efficiency through the development of real-time sensors to reduce operating costs. In particular, much attention has been focused on end point detection through furnace off gas analysis. Typically, off-gas analysis is done with extractive sampling and gas analyzers such as Non-dispersive Infrared Sensors (NDIR). Passive emission spectroscopy offers a more attractive approach to end point detection as the equipment can be setup remotely. Using high resolution UV spectroscopy and applying sophisticated emission line detection software, a correlation was observed between metal emissions and the process end point during field trials. This correlation indicates a relationship between the metal emissions and the status of a steelmaking melt which can be used to improve overall process efficiency.
Sahu, P P
2008-02-10
A thermally tunable erbium-doped fiber amplifier (EDFA) gain equalizer filter based on a compact point-symmetric cascaded Mach-Zehnder (CMZ) coupler is presented together with its mathematical model. The mathematical analysis shows the device to be polarization dependent due to stress anisotropy caused by the local heating used for the thermo-optic phase change. A thermo-optic delay line structure with a stress-releasing groove is proposed and designed to reduce the polarization-dependent characteristics of the high-index-contrast point-symmetric delay line structure of the device. Thermal analysis using an implicit finite difference method shows that the temperature gradient of the proposed structure, which mainly causes the release of the stress anisotropy, is approximately nine times larger than that of the conventional structure. It is also seen that the EDFA gain-equalized spectrum obtained with the point-symmetric CMZ device based on the proposed structure is almost polarization independent.
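The implicit finite-difference thermal analysis mentioned above can be sketched for the 1-D case with a backward-Euler scheme; the geometry, diffusivity, and boundary temperatures below are illustrative assumptions, not the paper's device model.

```python
import numpy as np

def implicit_heat_1d(T0, alpha, dx, dt, steps, t_left, t_right):
    """Backward-Euler (implicit) finite-difference solution of 1-D heat conduction.
    Unconditionally stable, so large time steps are permissible."""
    n = len(T0)
    r = alpha * dt / dx**2
    # Tridiagonal system: (1 + 2r) T_i - r T_{i-1} - r T_{i+1} = T_i^old
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1 + 2 * r)
    np.fill_diagonal(A[1:], -r)      # sub-diagonal
    np.fill_diagonal(A[:, 1:], -r)   # super-diagonal
    A[0, :], A[-1, :] = 0, 0
    A[0, 0] = A[-1, -1] = 1          # Dirichlet boundary rows
    T = np.array(T0, dtype=float)
    for _ in range(steps):
        b = T.copy()
        b[0], b[-1] = t_left, t_right
        T = np.linalg.solve(A, b)
    return T

# 100-um slab of silica heated on one side (illustrative numbers only).
print(implicit_heat_1d(np.full(101, 25.0), alpha=8.5e-7, dx=1e-6, dt=1e-6,
                       steps=200, t_left=80.0, t_right=25.0))
```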
Now is the time to demand change to punishing FtP procedures.
Mason, Sharon
2016-05-25
Having undergone Nursing and Midwifery Council fitness to practise (FtP) proceedings after raising and escalating concerns, I read with interest your article about nurses facing FtP hearings being 'pushed to breaking point' (analysis, May 11).
Managing Conflict for Productive Results: A Critical Leadership Skill.
ERIC Educational Resources Information Center
Simerly, Robert G.
1998-01-01
Describes sources of conflict in organizations and five effective management strategies: identify points of view, let parties articulate what they want, buy time, attempt negotiation, and ask parties to agree to arbitration. Provides a conflict management analysis sheet. (SK)
Research on fully distributed optical fiber sensing security system localization algorithm
NASA Astrophysics Data System (ADS)
Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen
2013-12-01
A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. For this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple grouped data segments separately, the algorithm exploits the advantages of frequency-domain analysis to determine the most effective data group accurately while still meeting the requirements of a real-time monitoring system. Supplemented with a short-term energy calculation for each signal group, the most effective data group can be picked out quickly. Finally, the accurate location of the climbing point is obtained through a cross-correlation localization algorithm. The experimental results show that the proposed algorithm can accurately locate the climbing point while effectively filtering out the interference noise of non-climbing behavior.
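A minimal sketch of the two signal-processing ingredients named above, under assumed frame sizes and fiber parameters: a short-time average zero-crossing rate to flag the most informative data group, and a cross-correlation delay estimate for localization.

```python
import numpy as np

def short_time_zcr(x, frame=256, hop=128):
    """Short-time average zero-crossing rate of a vibration signal."""
    rates = []
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame]
        rates.append(np.mean(np.abs(np.diff(np.sign(seg))) > 0))
    return np.array(rates)

def locate_event(sig_a, sig_b, fs, v_fiber=2.0e8):
    """Cross-correlation delay between the two interferometer outputs; the delay
    maps to the disturbance's offset from the fiber midpoint (v_fiber is the
    assumed propagation speed of light in the fiber, ~2e8 m/s)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    delay = lag / fs
    return delay * v_fiber / 2.0     # one-way offset from the midpoint [m]
```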
Measurement of Dam Deformations: Case Study of Obruk Dam (Turkey)
NASA Astrophysics Data System (ADS)
Gulal, V. Engin; Alkan, R. Metin; Alkan, M. Nurullah; İlci, Veli; Ozulu, I. Murat; Tombus, F. Engin; Kose, Zafer; Aladogan, Kayhan; Sahin, Murat; Yavasoglu, Hakan; Oku, Guldane
2016-04-01
The first dam deformation and displacement measurements reported in the literature were conducted in Switzerland in the 1920s. Today, dam deformation measurements serve much broader functions, thanks to improvements both in measurement equipment and in the evaluation of measurements. Deformation measurement and analysis are among the main topics studied by scientists working in engineering surveying. The Working Group on Deformation Measurements and Analysis, established under the International Federation of Surveyors (FIG), carries out its studies and activities on this subject. At the end of the 1970s, the determination of fixed points in deformation monitoring networks was one of the most extensively studied subjects; many theories arose from this inquiry, as different institutes came to differing conclusions. In 1978, a special commission with representatives of universities was established within the FIG 6.1 working group; this commission worked on defining a general approach to geometric deformation analysis, and its results were discussed at symposia organized by the FIG. Following these studies, scientists began to work on models that investigate cause-and-effect relations between deformation-inducing influences and the observed deformation. Because researchers focused on different deformation methods, another special commission was established within the FIG engineering measurements commission to classify deformation models and standardize terminology. After a long period of study, the official commission report was published in 2001 under the title 'Models and Terminology for the Analysis of Geodetic Monitoring Observations'. In October 2015, geodetic deformation measurements were conducted in the Çorum province of Turkey, following the FIG reports on deformation measurements and the German DIN 18710 engineering survey standard. The main purpose of the study is to determine the optimum measurement and evaluation methods for detecting horizontal and vertical movements of the fill dam. For this purpose:
• In a reference network consisting of 8 points, measurements were performed using dual-frequency GNSS receivers over 8-hour sessions.
• GNSS measurements of 30 to 120 minutes were conducted at the 44 object points on the dam body.
• Two repeated real-time kinematic (RTK) GNSS measurements were conducted at the object points on the dam.
• Geometric leveling measurements were performed between reference and object points.
• Trigonometric leveling measurements were performed between reference and object points.
• Polar measurements were performed between reference and object points.
The 8-hour GNSS measurements at the reference points of the monitoring network were evaluated with the GAMIT software, tied to the IGS points in the region, so that regional and local movements in the network could be determined. By evaluating the GNSS measurements performed on the dam body, we aim to determine the measurement period that provides the 1-2 mm accuracy expected in a local GNSS network.
Results will be compared by cross-checking the GNSS and terrestrial measurements. This study will also investigate whether GNSS measurements carried out among reference points that lack intervisibility provide increased accuracy.
Managing Complexity in Evidence Analysis: A Worked Example in Pediatric Weight Management.
Parrott, James Scott; Henry, Beverly; Thompson, Kyle L; Ziegler, Jane; Handu, Deepa
2018-05-02
Nutrition interventions are often complex and multicomponent. Typical approaches to meta-analyses that focus on individual causal relationships to provide guideline recommendations are not sufficient to capture this complexity. The objective of this study is to describe the method of meta-analysis used for the Pediatric Weight Management (PWM) Guidelines update and provide a worked example that can be applied in other areas of dietetics practice. The effects of PWM interventions were examined for body mass index (BMI), body mass index z-score (BMIZ), and waist circumference at four different time periods. For intervention-level effects, intervention types were identified empirically using multiple correspondence analysis paired with cluster analysis. Pooled effects of identified types were examined using random effects meta-analysis models. Differences in effects among types were examined using meta-regression. Context-level effects were examined using qualitative comparative analysis. Three distinct types (or families) of PWM interventions were identified: medical nutrition, behavioral, and missing components. Medical nutrition and behavioral types showed statistically significant improvements in BMIZ across all time points. Results were less consistent for BMI and waist circumference, although four distinct patterns of weight status change were identified. These varied by intervention type as well as outcome measure. Meta-regression indicated statistically significant differences between the medical nutrition and behavioral types vs the missing component type for both BMIZ and BMI, although the pattern varied by time period and intervention type. Qualitative comparative analysis identified distinct configurations of context characteristics at each time point that were consistent with positive outcomes among the intervention types. Although analysis of individual causal relationships is invaluable, this approach is inadequate to capture the complexity of dietetics practice. An alternative approach that integrates intervention-level with context-level meta-analyses may provide deeper understanding in the development of practice guidelines. Copyright © 2018 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
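The pooling step ("random effects meta-analysis models") can be sketched with the standard DerSimonian-Laird estimator; the effect sizes below are hypothetical BMIZ mean differences, not the guideline data.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)             # fixed-effect estimate
    q = np.sum(w * (y - fixed) ** 2)              # Cochran's Q heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Example: BMIZ mean differences from several hypothetical trials.
print(dersimonian_laird([-0.12, -0.30, -0.05, -0.22], [0.01, 0.02, 0.015, 0.03]))
```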
Statistical image quantification toward optimal scan fusion and change quantification
NASA Astrophysics Data System (ADS)
Potesil, Vaclav; Zhou, Xiang Sean
2007-03-01
Recent advance of imaging technology has brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, studying their interactions, and introducing a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error; and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we will achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
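The optimal-linear-fusion point can be made concrete with inverse-variance weighting: the scan with the smaller measurement variance (e.g., the one whose voxel anisotropy aligns with the lesion elongation) receives the larger weight, and the fused variance is never larger than that of naive averaging. A sketch with made-up numbers:

```python
import numpy as np

def fuse(measurements, variances):
    """Optimal linear fusion: weight each scan's measurement by inverse variance."""
    m, v = np.asarray(measurements, float), np.asarray(variances, float)
    w = (1.0 / v) / np.sum(1.0 / v)           # normalized inverse-variance weights
    fused = np.sum(w * m)
    fused_var = 1.0 / np.sum(1.0 / v)         # variance of the optimal combination
    naive_var = np.sum(v) / len(v) ** 2       # variance of the plain average
    return fused, fused_var, naive_var

# Scan 1: anisotropy aligned with the lesion -> smaller variance, larger weight.
print(fuse([10.2, 9.6], [0.4, 1.5]))
```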
Phase space analysis for a scalar-tensor model with kinetic and Gauss-Bonnet couplings
NASA Astrophysics Data System (ADS)
Granda, L. N.; Loaiza, E.
2016-09-01
We study the phase space for a scalar-tensor string inspired model of dark energy with nonminimal kinetic and Gauss-Bonnet couplings. The form of the scalar potential and of the coupling terms is of the exponential type, which gives rise to appealing cosmological solutions. The critical points describe a variety of cosmological scenarios that go from a matter or radiation dominated universe to a dark energy dominated universe. Trajectories were found in the phase space departing from unstable or saddle fixed points and arriving at the stable scalar field dominated point corresponding to late-time accelerated expansion.
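The generic workflow of such a phase-space analysis (find the critical points, linearize, classify by the Jacobian eigenvalues) can be sketched symbolically; the toy 2-D system below is an assumption for illustration and is not the paper's cosmological system.

```python
import sympy as sp

x, y = sp.symbols("x y")
# Illustrative autonomous system: x' = x(1 - x - y), y' = y(x - 1/2).
f = sp.Matrix([x * (1 - x - y), y * (x - sp.Rational(1, 2))])

points = sp.solve(list(f), [x, y], dict=True)   # critical points: f = 0
J = f.jacobian([x, y])
for p in points:
    eigs = list(J.subs(p).eigenvals())          # eigenvalues of the linearization
    kind = "stable" if all(sp.re(e) < 0 for e in eigs) else "unstable/saddle"
    print(p, eigs, kind)
```

For this toy system the fixed point (1/2, 1/2) comes out stable, playing the role of the late-time attractor, while the others are saddles or repellers, mirroring the structure described in the abstract.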
Error analysis of high-rate GNSS precise point positioning for seismic wave measurement
NASA Astrophysics Data System (ADS)
Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan
2017-06-01
High-rate GNSS precise point positioning (PPP) has been playing an increasingly important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has recently been reported, on the basis of experiments, to reach a few millimeters in the horizontal components and sub-centimeters in the vertical component when measuring seismic motion, which is several times better than conventional kinematic PPP practice. To fully understand the mechanism behind this seemingly puzzling performance of high-rate PPP over short periods, we carried out a theoretical error analysis of PPP and conducted the corresponding simulations within a short period of time. The theoretical analysis clearly indicates that high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and the simulated results are fully consistent with, and thus unambiguously confirm, the reported high precision of high-rate PPP; this is further affirmed here by real data experiments, indicating that high-rate PPP can indeed achieve millimeter-level precision in the horizontal components and sub-centimeter-level precision in the vertical component when measuring motion within a short period of time. The simulation results clearly show that the random noise of carrier phases and higher-order ionospheric errors are the two major factors affecting the precision of high-rate PPP within a short period of time. The experiments with real data also indicate that the precision of PPP solutions can degrade to the cm level in both the horizontal and vertical components if the satellite geometry is rather poor, with a large DOP value.
Modelling short time series in metabolomics: a functional data analysis approach.
Montana, Giovanni; Berk, Maurice; Ebbels, Tim
2011-01-01
Metabolomics is the study of the complement of small molecule metabolites in cells, biofluids and tissues. Many metabolomic experiments are designed to compare changes observed over time under two or more experimental conditions (e.g. a control and drug-treated group), thus producing time course data. Models from traditional time series analysis are often unsuitable because, by design, only very few time points are available and there are a high number of missing values. We propose a functional data analysis approach for modelling short time series arising in metabolomic studies which overcomes these obstacles. Our model assumes that each observed time series is a smooth random curve, and we propose a statistical approach for inferring this curve from repeated measurements taken on the experimental units. A test statistic for detecting differences between temporal profiles associated with two experimental conditions is then presented. The methodology has been applied to NMR spectroscopy data collected in a pre-clinical toxicology study.
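A minimal sketch of the functional-data idea, assuming a cubic smoothing spline as the smooth random curve and pooling replicates while skipping missing values. The time grid and values are made up, and the paper's inferential test statistic is not implemented.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# One metabolite: few time points, replicate curves per group, missing values.
t = np.array([0, 1, 2, 4, 8, 24], dtype=float)            # hours
group_a = np.array([[0.1, 0.4, 0.9, 1.2, 0.8, 0.3],
                    [0.2, 0.5, 1.0, 1.1, 0.7, np.nan]])   # replicate measurements

def smooth_mean_curve(t, reps, s=0.05):
    """Fit one smooth curve from repeated measurements, pooling replicates
    and ignoring missing values (smoothing factor s is an assumption)."""
    tt = np.tile(t, reps.shape[0])
    yy = reps.ravel()
    ok = ~np.isnan(yy)
    order = np.argsort(tt[ok])
    return UnivariateSpline(tt[ok][order], yy[ok][order], s=s * ok.sum(), k=3)

curve_a = smooth_mean_curve(t, group_a)
print(curve_a(np.linspace(0, 24, 5)))    # evaluate the fitted temporal profile
```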
PLATSIM: An efficient linear simulation and analysis package for large-order flexible systems
NASA Technical Reports Server (NTRS)
Maghami, Periman; Kenny, Sean P.; Giesy, Daniel P.
1995-01-01
PLATSIM is a software package designed to provide efficient time and frequency domain analysis of large-order generic space platforms implemented with any linear time-invariant control system. Time domain analysis provides simulations of the overall spacecraft response levels due to either onboard or external disturbances. The time domain results can then be processed by the jitter analysis module to assess the spacecraft's pointing performance in a computationally efficient manner. The resulting jitter analysis algorithms have produced an increase in speed of several orders of magnitude over the brute force approach of sweeping minima and maxima. Frequency domain analysis produces frequency response functions for uncontrolled and controlled platform configurations. The latter represents an enabling technology for large-order flexible systems. PLATSIM uses a sparse matrix formulation for the spacecraft dynamics model which makes both the time and frequency domain operations quite efficient, particularly when a large number of modes are required to capture the true dynamics of the spacecraft. The package is written in MATLAB script language. A graphical user interface (GUI) is included in the PLATSIM software package. This GUI uses MATLAB's Handle graphics to provide a convenient way for setting simulation and analysis parameters.
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and conventional frequentist analyses of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for cases in which the source is present only very briefly (less than the count time), time-interval information is more sensitive for detecting a change than count information, since the source counts are averaged with the background over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
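The core of the time-interval approach can be sketched with the conjugate gamma-exponential model: each new pulse updates the posterior for the count rate immediately, rather than waiting out a fixed count time. This illustrates the principle only; the authors' decision rules and background handling are more elaborate.

```python
import numpy as np

def posterior_rate(intervals, a0=1.0, b0=1.0):
    """Conjugate gamma-exponential update: inter-arrival times t_i ~ Exp(lambda)
    with prior lambda ~ Gamma(a0, b0) give posterior Gamma(a0 + n, b0 + sum t_i)."""
    t = np.asarray(intervals, float)
    a, b = a0 + len(t), b0 + t.sum()
    return a, b, a / b            # posterior shape, rate, and mean count rate

# Example: 20 pulses arriving faster than a ~1 cps background would suggest.
rng = np.random.default_rng(2)
print(posterior_rate(rng.exponential(1 / 5.0, 20)))   # true rate 5 cps
```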
Ta, Van M; Juon, Hee-Soon; Gielen, Andrea C; Steinwachs, Donald; McFarlane, Elizabeth; Duggan, Anne
2009-02-01
This longitudinal study examined racial differences in depressive symptoms at three time points among Asian, Native Hawaiian/Other Pacific Islander (NHOPI) and white mothers at-risk for child maltreatment (n = 616). The proportion of mothers with depressive symptoms ranged from 28 to 35% at all time points. Adjusted analyses revealed that Asian and NHOPI mothers were significantly more likely than white mothers to have depressive symptoms but this disparity was present only among families at mild/moderate risk for child maltreatment. Future research should identify ways to reduce this disparity and involve the Asian and NHOPI communities in prevention and treatment program design and implementation.
Okomo-Adhiambo, Margaret; Beattie, Craig; Rink, Anette
2006-01-01
Toxoplasma gondii induces the expression of proinflammatory cytokines, reorganizes organelles, scavenges nutrients, and inhibits apoptosis in infected host cells. We used a cDNA microarray of 420 annotated porcine expressed sequence tags to analyze the molecular basis of these changes at eight time points over a 72-hour period in porcine kidney epithelial (PK13) cells infected with T. gondii. A total of 401 genes with Cy3 and Cy5 spot intensities of ≥500 were selected for analysis, of which 263 (65.6%) were induced ≥2-fold (expression ratio, ≥2.0; P ≤ 0.05 [t test]) over at least one time point and 48 (12%) were significantly down-regulated. At least 12 functional categories of genes were modulated (up- or down-regulated) by T. gondii. The majority of induced genes were clustered as transcription, signal transduction, host immune response, nutrient metabolism, and apoptosis related. The expression of selected genes altered by T. gondii was validated by quantitative real-time reverse transcription-PCR. These results suggest that significant changes in gene expression occur in response to T. gondii infection in PK13 cells, facilitating further analysis of host-pathogen interactions in toxoplasmosis in a secondary host. PMID:16790800
Jacups, Susan P; Whelan, Peter I; Harley, David
2011-03-01
Ross River virus (RRV) causes the most common human arbovirus disease in Australia. Although the disease is nonfatal, the associated arthritis and postinfection fatigue can be debilitating for many months, impacting workforce participation. We sought to create an early-warning system to give notice of approaching RRV disease outbreak conditions for major townships in the Northern Territory. By applying a logistic regression model to meteorologic factors, including rainfall, and then performing a post-estimation analysis of sensitivity and specificity, rainfall cut-points can be created. These rainfall cut-points indicate the rainfall level above which epidemic conditions have previously occurred. Furthermore, the rainfall cut-points indirectly adjust for vertebrate host data from the agile wallaby (Macropus agilis), because the life cycle of the agile wallaby is intricately meshed with the wet season. Once generated, cut-points can be used prospectively to allow timely implementation of larval survey and control measures and public health warnings, to preemptively reduce RRV disease incidence. Cut-points are location specific and have the capacity to replace previously used models, which require data management and input and rarely provide timely notification for vector control requirements and public health warnings. These methods can be adapted for use elsewhere.
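The cut-point construction can be sketched as a logistic fit followed by an ROC post-estimation step; here the Youden index picks the operating point, and the probability threshold is mapped back to a rainfall value. All data and the single-predictor model are assumptions; the actual cut-points are location specific and also reflect host ecology.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

# Hypothetical data: monthly rainfall (mm) vs. whether an RRV outbreak followed.
rng = np.random.default_rng(3)
rain = rng.gamma(2.0, 60.0, 200)
outbreak = (rng.uniform(size=200) < 1 / (1 + np.exp(-(rain - 150) / 30))).astype(int)

model = LogisticRegression().fit(rain.reshape(-1, 1), outbreak)
prob = model.predict_proba(rain.reshape(-1, 1))[:, 1]
fpr, tpr, thresh = roc_curve(outbreak, prob)
best = np.argmax(tpr - fpr)                  # Youden index: max(sensitivity + specificity - 1)

# Map the chosen probability threshold back to a rainfall cut-point via the logit.
p = thresh[best]
cut_mm = (np.log(p / (1 - p)) - model.intercept_[0]) / model.coef_[0, 0]
print(f"rainfall cut-point ~ {cut_mm:.0f} mm")
```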
DORIS-based point mascons for the long term stability of precise orbit solutions
NASA Astrophysics Data System (ADS)
Cerri, L.; Lemoine, J. M.; Mercier, F.; Zelensky, N. P.; Lemoine, F. G.
2013-08-01
In recent years non-tidal Time Varying Gravity (TVG) has emerged as the most important contributor in the error budget of Precision Orbit Determination (POD) solutions for altimeter satellites' orbits. The Gravity Recovery And Climate Experiment (GRACE) mission has provided POD analysts with static and time-varying gravity models that are very accurate over the 2002-2012 time interval, but whose linear rates cannot be safely extrapolated before and after the GRACE lifespan. One such model based on a combination of data from GRACE and Lageos from 2002-2010, is used in the dynamic POD solutions developed for the Geophysical Data Records (GDRs) of the Jason series of altimeter missions and the equivalent products from lower altitude missions such as Envisat, Cryosat-2, and HY-2A. In order to accommodate long-term time-variable gravity variations not included in the background geopotential model, we assess the feasibility of using DORIS data to observe local mass variations using point mascons. In particular, we show that the point-mascon approach can stabilize the geographically correlated orbit errors which are of fundamental interest for the analysis of regional Mean Sea Level trends based on altimeter data, and can therefore provide an interim solution in the event of GRACE data loss. The time series of point-mass solutions for Greenland and Antarctica show good agreement with independent series derived from GRACE data, indicating a mass loss at rate of 210 Gt/year and 110 Gt/year respectively.
Lavender, Jason M.; Utzinger, Linsey M.; Cao, Li; Wonderlich, Stephen A.; Engel, Scott G.; Mitchell, James E.; Crosby, Ross D.
2016-01-01
Although negative affect (NA) has been identified as a common trigger for bulimic behaviors, findings regarding NA following such behaviors have been mixed. This study examined reciprocal associations between NA and bulimic behaviors using real-time, naturalistic data. Participants were 133 women with DSM-IV bulimia nervosa (BN) who completed a two-week ecological momentary assessment (EMA) protocol in which they recorded bulimic behaviors and provided multiple daily ratings of NA. A multilevel autoregressive cross-lagged analysis was conducted to examine concurrent, first-order autoregressive, and prospective associations between NA, binge eating, and purging across the day. Results revealed positive concurrent associations between all variables across all time points, as well as numerous autoregressive associations. For prospective associations, higher NA predicted subsequent bulimic symptoms at multiple time points; conversely, binge eating predicted lower NA at multiple time points, and purging predicted higher NA at one time point. Several autoregressive and prospective associations were also found between binge eating and purging. This study used a novel approach to examine NA in relation to bulimic symptoms, contributing to the existing literature by directly examining the magnitude of the associations, examining differences in the associations across the day, and controlling for other associations in testing each effect in the model. These findings may have relevance for understanding the etiology and/or maintenance of bulimic symptoms, as well as potentially informing psychological interventions for BN. PMID:26692122
Dynamical systems analysis of phantom dark energy models
NASA Astrophysics Data System (ADS)
Roy, Nandan; Bhadra, Nivedita
2018-06-01
In this work, we perform a dynamical systems analysis of phantom dark energy models, considering five different potentials. From the analysis of these five potentials we have found a general parametrization of the scalar field potentials that is obeyed by many other potentials. Our investigation shows that there is only one fixed point that could represent the beginning of the universe; the future, however, admits many possible outcomes. A detailed numerical analysis of the system is presented. The late-time behaviour observed in this analysis shows very good agreement with recent observations.
Assessing criticality in seismicity by entropy
NASA Astrophysics Data System (ADS)
Goltz, C.
2003-04-01
There is an ongoing discussion as to whether the Earth's crust is in a critical state and whether this state is permanent or intermittent. Intermittent criticality would, in principle, allow the specification of time-dependent hazard. Analysis of a spatio-temporally evolving synthetic critical point phenomenon and of real seismicity using configurational entropy shows that the method is a suitable approach for characterising critical point dynamics. The results obtained support the notion of intermittent criticality in earthquakes. The statistical significance of the findings is assessed by the method of surrogate data.
Atmospheric Characterization During Super-Resolution Vision System Developmental Testing
2013-05-01
local time each day of the test. RM Young 81000 sonic anemometers were located at the 0-, 800-, and 1800-m target points at 1.5-m elevation to provide point estimates of Cn^2. Sonic anemometer data were also collected at a 0-km tower at several levels, providing a vertical turbulence profile. Report sections: Atmospheric Instrumentation and Analysis; Estimation of Cn^2 from Sonic Anemometer Data; Data Plots; Derived Results; Conclusions.
Song, Ji Soo; Kwak, Hyo Sung; Byon, Jung Hee; Jin, Gong Yong
2017-05-01
To compare the apparent diffusion coefficient (ADC) of upper abdominal organs acquired at different time points, and to investigate the usefulness of normalization. We retrospectively evaluated 58 patients who underwent three rounds of magnetic resonance (MR) imaging including diffusion-weighted imaging of the upper abdomen. MR examinations were performed using three different 3.0 Tesla (T) systems and one 1.5T system, with variable b value combinations and respiratory motion compensation techniques. The ADC values of the upper abdominal organs from three different time points were analyzed, using the ADC values of the paraspinal muscle (ADC_psm) and spleen (ADC_spleen) for normalization. Intraclass correlation coefficients (ICC) and comparison of dependent ICCs were used for statistical analysis. The ICCs of the original ADC and ADC_psm showed fair to substantial agreement, while ADC_spleen showed substantial to almost perfect agreement. The ICC of ADC_spleen for all anatomical regions showed less variability compared with that of the original ADC (P < 0.005). Normalized ADC using the spleen as a reference organ significantly decreased variability in measurements of the upper abdominal organs on different MR systems at different time points and could be regarded as an imaging biomarker for future multicenter, longitudinal studies. Level of Evidence: 5. J. Magn. Reson. Imaging 2017;45:1494-1501. © 2016 International Society for Magnetic Resonance in Medicine.
Generation of High Frequency Response in a Dynamically Loaded, Nonlinear Soil Column
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, Robert Edward; Coleman, Justin Leigh
2015-08-01
Detailed guidance on linear seismic analysis of soil columns is provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998),” which is currently under revision. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain analysis, which includes evaluation of soil columns. When performing linear analysis, a given soil column is typically evaluated with a linear, viscously damped constitutive model. When subjected to a sine wave motion, this constitutive model produces a smooth hysteresis loop. For nonlinear analysis, the soil column can be modelled with an appropriate nonlinear hysteretic soil model. For the model in this paper, the stiffness and energy absorption result from a defined post-yielding shear stress versus shear strain curve. This curve is input as tabular data points. When subjected to a sine wave motion, this constitutive model produces a hysteresis loop that looks similar in shape to the input tabular data points on the sides, with discontinuous, pointed ends. This paper compares linear and nonlinear soil column results. The results show that the nonlinear analysis produces additional high frequency response. The paper provides additional study to establish what portion of the high frequency response is due to numerical noise associated with the tabular input curve and what portion is actually caused by the pointed ends of the hysteresis loop. Finally, the paper shows how the results change when a significant structural mass is added to the top of the soil column.
The role of P2X3 receptors in bilateral masseter muscle allodynia in rats
Tariba Knežević, Petra; Vukman, Robert; Antonić, Robert; Kovač, Zoran; Uhač, Ivone; Simonić-Kocijan, Sunčana
2016-01-01
Aim To determine the relationship between bilateral allodynia induced by masseter muscle inflammation and P2X3 receptor expression changes in trigeminal ganglia (TRG) and the influence of intramasseteric P2X3 antagonist administration on bilateral masseter allodynia. Methods To induce bilateral allodynia, rats received a unilateral injection of complete Freund's adjuvant (CFA) into the masseter muscle. Bilateral head withdrawal threshold (HWT) was measured 4 days later. Behavioral measurements were followed by bilateral masseter muscle and TRG dissection. Masseter tissue was evaluated histopathologically and TRG tissue was analyzed for P2X3 receptor mRNA expression by using quantitative real-time polymerase chain reaction (PCR) analysis. To assess the P2X3 receptor involvement in nocifensive behavior, two doses (6 and 60 μg/50 μL) of the selective P2X3 antagonist A-317491 were administered into the inflamed masseter muscle 4 days after the CFA injection. Bilateral HWT was measured at 15-, 30-, 60-, and 120-minute time points after A-317491 administration. Results HWT was bilaterally reduced after the CFA injection (P < 0.001). Intramasseteric inflammation was confirmed ipsilaterally to the CFA injection. Quantitative real-time PCR analysis demonstrated enhanced P2X3 expression in TRG ipsilaterally to CFA administration (P < 0.01). In comparison with controls, the dose of 6 μg of A-317491 significantly increased bilateral HWT at 15-, 30-, and 60-minute time points after the A-317491 administration (P < 0.001), whereas the dose of 60 μg of A-317491 was efficient at all time points ipsilaterally (P = 0.004) and at 15-, 30-, and 60-minute time points contralaterally (P < 0.001). Conclusion Unilateral masseter inflammation can induce bilateral allodynia in rats. The study provided evidence that P2X3 receptors can functionally influence masseter muscle allodynia and suggested that P2X3 receptors expressed in TRG neurons are involved in masseter inflammatory pain conditions. PMID:28051277
Qu, Xiancheng; Hu, Menghong; Shang, Yueyong; Pan, Lisha; Jia, Peixuan; Fu, Chunxue; Liu, Qigen; Wang, Youji
2018-01-01
Next-generation sequencing was used to analyze the effects of toxic microcystin-LR (MC-LR) on silver carp (Hypophthalmichthys molitrix). Silver carp were intraperitoneally injected with MC-LR, and RNA-seq and miRNA-seq in the liver were analyzed at 0.25, 0.5, and 1 h. The expression of glutathione S-transferase (GST), which acts as a marker gene for MC-LR, was tested to determine the earliest time point at which GST transcription was initiated in the liver tissues of the MC-LR-treated silver carp. Hepatic RNA-seq/miRNA-seq analysis and data integration analysis were conducted with reference to the identified time point. Quantitative PCR (qPCR) was performed to detect the expression of the following genes at the three time points: heme oxygenase 1 (HO-1), interleukin-10 receptor 1 (IL-10R1), apolipoprotein A-I (apoA-I), and heme binding protein 2 (HBP2). Results showed that the liver GST expression was remarkably decreased at 0.25 h (P < 0.05). RNA-seq at this time point revealed that the liver tissue contained 97,505 unigenes, including 184 significantly different unigenes and 75 unknown genes. Gene Ontology (GO) term enrichment analysis suggested that 35 of the 145 enriched GO terms were significantly enriched and mainly related to the immune system regulation network. KEGG pathway enrichment analysis showed that 18 of the 189 pathways were significantly enriched, and the most significant was a ribosome pathway containing 77 differentially expressed genes. miRNA-seq analysis indicated that the most common miRNA length was 22 nucleotides (nt), followed by 21 and 23 nt. A total of 286 known miRNAs, 332 known miRNA precursor sequences, and 438 new miRNAs were predicted. A total of 1,048,575 mRNA–miRNA interaction sites were obtained, and 21,252 and 21,241 target genes were respectively predicted for known and new miRNAs. qPCR revealed that HO-1, IL-10R1, apoA-I, and HBP2 were significantly differentially expressed and might play important roles in the toxicity and liver detoxification of MC-LR in fish. These results were consistent with those of high-throughput sequencing, thereby verifying the accuracy of our sequencing data. RNA-seq and miRNA-seq analyses of silver carp liver injected with MC-LR provided valuable new insights into the toxic effects of MC-LR and the antitoxic mechanisms against MC-LR in fish. The RNA/miRNA data are available from the NCBI database (Registration No. SRP075165). PMID:29692738
Ang, Joo Ern; Revell, Victoria; Mann, Anuska; Mäntele, Simone; Otway, Daniella T; Johnston, Jonathan D; Thumser, Alfred E; Skene, Debra J; Raynaud, Florence
2012-08-01
Although daily rhythms regulate multiple aspects of human physiology, rhythmic control of the metabolome remains poorly understood. The primary objective of this proof-of-concept study was identification of metabolites in human plasma that exhibit significant 24-h variation. This was assessed via an untargeted metabolomic approach using liquid chromatography-mass spectrometry (LC-MS). Eight lean, healthy, and unmedicated men, mean age 53.6 (SD ± 6.0) yrs, maintained a fixed sleep/wake schedule and dietary regime for 1 wk at home prior to an adaptation night and followed by a 25-h experimental session in the laboratory where the light/dark cycle, sleep/wake, posture, and calorific intake were strictly controlled. Plasma samples from each individual at selected time points were prepared using liquid-phase extraction followed by reverse-phase LC coupled to quadrupole time-of-flight MS analysis in positive ionization mode. Time-of-day variation in the metabolites was screened for using orthogonal partial least square discrimination between selected time points of 10:00 vs. 22:00 h, 16:00 vs. 04:00 h, and 07:00 (d 1) vs. 16:00 h, as well as repeated-measures analysis of variance with time as an independent variable. Subsequently, cosinor analysis was performed on all the sampled time points across the 24-h day to assess for significant daily variation. In this study, analytical variability, assessed using known internal standards, was low with coefficients of variation <10%. A total of 1069 metabolite features were detected and 203 (19%) showed significant time-of-day variation. Of these, 34 metabolites were identified using a combination of accurate mass, tandem MS, and online database searches. These metabolites include corticosteroids, bilirubin, amino acids, acylcarnitines, and phospholipids; of note, the magnitude of the 24-h variation of these identified metabolites was large, with the mean ratio of oscillation range over MESOR (24-h time series mean) of 65% (95% confidence interval [CI]: 49-81%). Importantly, several of these human plasma metabolites, including specific acylcarnitines and phospholipids, were hitherto not known to be 24-h variant. These findings represent an important baseline and will be useful in guiding the design and interpretation of future metabolite-based studies.
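Cosinor analysis, as used above, fits a 24-h cosine to each metabolite time series; the MESOR is the fitted mean and the reported oscillation range is twice the fitted amplitude. A least-squares sketch with synthetic data (an assumed implementation, not the authors' pipeline):

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    """Fit y(t) = M + A*cos(2*pi*t/period + phi) by linear least squares."""
    w = 2 * np.pi * t_hours / period
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    m, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(b1, b2)      # A, from A*cos(phi) and -A*sin(phi)
    phase = np.arctan2(-b2, b1)       # acrophase in radians
    return m, amplitude, phase        # MESOR, amplitude, acrophase

t = np.arange(0, 25, 2.0)                        # sampling times in hours
y = 5 + 2 * np.cos(2 * np.pi * (t - 8) / 24)     # synthetic rhythmic metabolite
y += np.random.default_rng(0).normal(0, 0.1, t.size)
mesor, amp, phi = cosinor_fit(t, y)
print(f"range/MESOR = {2 * amp / mesor:.0%}")    # cf. the 65% mean reported above
```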
Point Analysis in Java applied to histological images of the perforant pathway: a user's account.
Scorcioni, Ruggero; Wright, Susan N; Patrick Card, J; Ascoli, Giorgio A; Barrionuevo, Germán
2008-01-01
The freeware Java tool Point Analysis in Java (PAJ), created to perform 3D point analysis, was tested in an independent laboratory setting. The input data consisted of images of the hippocampal perforant pathway from serial immunocytochemical localizations of the rat brain in multiple views at different resolutions. The low magnification set (x2 objective) comprised the entire perforant pathway, while the high magnification set (x100 objective) allowed the identification of individual fibers. A preliminary stereological study revealed a striking linear relationship between the fiber count at high magnification and the optical density at low magnification. PAJ enabled fast analysis for down-sampled data sets and a friendly interface with automated plot drawings. Noted strengths included the multi-platform support as well as the free availability of the source code, conducive to a broad user base and maximum flexibility for ad hoc requirements. PAJ has great potential to extend its usability by (a) improving its graphical user interface, (b) increasing its input size limit, (c) improving response time for large data sets, and (d) potentially being integrated with other Java graphical tools such as ImageJ.
Generation Mechanisms UV and X-ray Emissions During SL9 Impact
NASA Technical Reports Server (NTRS)
Waite, J. Hunter, Jr.
1997-01-01
The purpose of this grant was to study the ultraviolet and X-ray emissions associated with the impact of comet Shoemaker-Levy 9 with Jupiter. The University of Michigan task was primarily focused on theoretical calculations. The NAGW-4788 subtask was to be largely devoted to determining the constraints placed by the X-ray observations on the physical mechanisms responsible for the generation of the X-rays. The author summarizes below the ROSAT observations and suggests a physical mechanism that can plausibly account for the observed emissions. It is hoped that the full set of activities can be completed at a later date. Further analysis of the ROSAT data acquired at the time of the impact was necessary to define the observational constraints on the magnetospheric-ionospheric processes involved in the excitation of the X-ray emissions associated with the fragment impacts. This analysis centered around improvements in the pointing accuracy and improvements in the timing information. Additional pointing information was made possible by the identification of the optical counterparts to the X-ray sources in the ROSAT field-of-view. Due to the large number of worldwide observers of the impacts, a serendipitous visible plate image from an observer in Venezuela provided a very accurate location of the position of the X-ray source, virtually eliminating pointing errors in the data. Once refined, the pointing indicated that the two observed X-ray brightenings that were highly correlated in time with the K and P2 events were brightenings of the X-ray aurora (as identified in images prior to the impact). Appendix A, "ROSAT observations of X-ray emissions from Jupiter during the impact of comet Shoemaker-Levy 9," is also included.
Miniature near-infrared spectrometer for point-of-use chemical analysis
NASA Astrophysics Data System (ADS)
Friedrich, Donald M.; Hulse, Charles A.; von Gunten, Marc; Williamson, Eric P.; Pederson, Christopher G.; O'Brien, Nada A.
2014-03-01
Point-of-use chemical analysis holds tremendous promise for a number of industries, including agriculture, recycling, pharmaceuticals and homeland security. Near infrared (NIR) spectroscopy is an excellent candidate for these applications, with minimal sample preparation for real-time decision-making. We will detail the development of a golf ball-sized NIR spectrometer developed specifically for this purpose. The instrument is based upon a thin-film dispersive element that is very stable over time and temperature, with less than 2 nm change expected over the operating temperature range and lifetime of the instrument. This filter is coupled with an uncooled InGaAs detector array in a small, rugged, environmentally stable optical bench ideally suited to unpredictable environments. The resulting instrument weighs less than 60 grams, includes onboard illumination and collection optics for diffuse reflectance applications in the 900-1700 nm wavelength range, and is USB-powered. It can be driven in the field by a laptop, tablet or even a smartphone. The software design includes the potential for both on-board and cloud-based storage, analysis and decision-making. The key attributes of the instrument and the underlying design tradeoffs will be discussed, focusing on miniaturization, ruggedization, power consumption and cost. The optical performance of the instrument, as well as its fitness for purpose, will be detailed. Finally, we will show that our manufacturing process has enabled us to build instruments with excellent unit-to-unit reproducibility. We will show that this is a key enabler for instrument-independent chemical analysis models, a requirement for mass point-of-use deployment.
Rorie, David A; Rogers, Amy; Mackenzie, Isla S; Ford, Ian; Webb, David J; Willams, Bryan; Brown, Morris; Poulter, Neil; Findlay, Evelyn; Saywood, Wendy; MacDonald, Thomas M
2016-02-09
Nocturnal blood pressure (BP) appears to be a better predictor of cardiovascular outcome than daytime BP. The BP lowering effects of most antihypertensive therapies are often greater in the first 12 h compared to the next 12 h. The Treatment In Morning versus Evening (TIME) study aims to establish whether evening dosing is more cardioprotective than morning dosing. The TIME study uses the prospective, randomised, open-label, blinded end-point (PROBE) design. TIME recruits participants by advertising in the community, from primary and secondary care, and from databases of consented patients in the UK. Participants must be aged over 18 years, prescribed at least one antihypertensive drug taken once a day, and have a valid email address. After the participants have self-enrolled and consented on the secure TIME website (http://www.timestudy.co.uk) they are randomised to take their antihypertensive medication in the morning or the evening. Participant follow-ups are conducted after 1 month and then every 3 months by automated email. The trial is expected to run for 5 years, randomising 10,269 participants, with average participant follow-up being 4 years. The primary end point is hospitalisation for the composite end point of non-fatal myocardial infarction (MI), non-fatal stroke (cerebrovascular accident; CVA) or any vascular death determined by record-linkage. Secondary end points are: each component of the primary end point, hospitalisation for non-fatal stroke, hospitalisation for non-fatal MI, cardiovascular death, all-cause mortality, hospitalisation or death from congestive heart failure. The primary outcome will be a comparison of time to first event comparing morning versus evening dosing using an intention-to-treat analysis. The sample size is calculated for a two-sided test to detect 20% superiority at 80% power. TIME has ethical approval in the UK, and results will be published in a peer-reviewed journal. UKCRN17071; Pre-results. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Park, Yong-Beom; Ha, Chul-Won; Cho, Sung-Do; Lee, Myung-Chul; Lee, Ju-Hong; Seo, Seung-Suk; Kang, Seung-Baik; Kyung, Hee-Soo; Choi, Choong-Hyeok; Chang, NaYoon; Rhim, Hyou Young Helen; Bin, Seong-Il
2015-01-01
To evaluate the relative efficacy and safety of extended-release tramadol HCl 75 mg/acetaminophen 650 mg (TA-ER) and immediate-release tramadol HCl 37.5 mg/acetaminophen 325 mg (TA-IR) for the treatment of moderate to severe acute pain following total knee replacement. This phase III, double-blind, active-controlled, parallel-group study randomized 320 patients with moderate to severe pain (intensity ≥4 on an 11-point numeric rating scale) following total knee replacement arthroplasty to receive oral TA-ER (every 12 hours) or TA-IR (every 6 hours) over a period of 48 hours. In the primary analysis, TA-ER was evaluated for non-inferiority of efficacy to TA-IR based on the sum of pain intensity difference (SPID) at 48 hours after the first dose of study drug (SPID48). Secondary endpoints included SPID at additional time points, total pain relief at all on-therapy time points (TOTPAR), the sum of SPID and TOTPAR at all on-therapy time points (SPID + TOTPAR), use of rescue medication, subjective pain assessment (PGIC, Patient Global Impression of Change), and adverse events (AEs). Analysis of the primary efficacy endpoint (SPID48) could not establish the non-inferiority of TA-ER to TA-IR. However, a post hoc analysis with a re-defined non-inferiority margin did demonstrate the non-inferiority of TA-ER to TA-IR. No statistically significant difference in SPID at 6, 12, or 24 hours was observed between the TA-ER and TA-IR groups. Similarly, analysis of TOTPAR showed that there were no significant differences between groups at any on-therapy time point, and SPID + TOTPAR at 6 and 48 hours were similar among groups. There was no difference in the mean frequency or dosage of rescue medication required by the two groups, and the majority of patients in both the TA-ER and TA-IR groups rated their pain improvement as 'much' or 'somewhat better'. The overall incidence of ≥1 AEs was similar among the TA-ER (88.8%) and TA-IR (89.5%) groups. The most commonly reported AEs by patients treated with TA-ER and TA-IR included nausea (49.7% vs 44.4%), vomiting (28.0% vs 24.2%), and decreased hemoglobin (23.6% vs 26.1%). This study is limited by the lack of placebo control and the invalidity of the initial non-inferiority margin. This study demonstrated that the analgesic effect of TA-ER is non-inferior to that of TA-IR, and supports TA-ER as an effective and safe treatment for moderate to severe acute pain after total knee replacement. Clinicaltrials.gov, NCT01814878.
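SPID and TOTPAR are conventionally computed as time-weighted sums of pain-intensity differences from baseline and of pain-relief scores, respectively. A sketch under that conventional definition; the interval weighting and all numbers are illustrative assumptions, not trial data:

```python
def spid(times_h, pain_scores, baseline):
    """Time-weighted sum of pain intensity differences from baseline.

    Each difference is weighted by the time elapsed since the previous
    assessment (a common convention; the trial's exact weighting may differ).
    """
    total, prev_t = 0.0, 0.0
    for t, score in zip(times_h, pain_scores):
        total += (baseline - score) * (t - prev_t)
        prev_t = t
    return total

# Illustrative 11-point NRS scores over the first 12 h after dosing.
times = [1, 2, 4, 6, 8, 12]
scores = [6, 5, 4, 4, 3, 3]
print(spid(times, scores, baseline=7))  # larger SPID = more cumulative relief
```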
ERIC Educational Resources Information Center
King, Thomas; McKean, Cristina; Rush, Robert; Westrupp, Elizabeth M.; Mensah, Fiona K.; Reilly, Sheena; Law, James
2017-01-01
Maternal education captured at a single time point is commonly employed as a predictor of a child's cognitive development. In this article, we ask what bearing the acquisition of additional qualifications has upon reading performance in middle childhood. This was a secondary analysis of the United Kingdom's Millennium Cohort Study, a cohort of…
ERIC Educational Resources Information Center
Lam, Shui-fong; Law, Wilbert; Chan, Chi-Keung; Wong, Bernard P. H.; Zhang, Xiao
2015-01-01
The contribution of social context to school bullying was examined from the self-determination theory perspective in this longitudinal study of 536 adolescents from 3 secondary schools in Hong Kong. Latent class growth analysis of the student-reported data at 5 time points from grade 7 to grade 9 identified 4 groups of students: bullies (9.8%),…
The Importance and Role of Intracluster Correlations in Planning Cluster Trials
Preisser, John S.; Reboussin, Beth A.; Song, Eun-Young; Wolfson, Mark
2008-01-01
There is increasing recognition of the critical role of intracluster correlations of health behavior outcomes in cluster intervention trials. This study examines the estimation, reporting, and use of intracluster correlations in planning cluster trials. We use an estimating equations approach to estimate the intracluster correlations corresponding to the multiple-time-point nested cross-sectional design. Sample size formulae incorporating 2 types of intracluster correlations are examined for the purpose of planning future trials. The traditional intracluster correlation is the correlation among individuals within the same community at a specific time point. A second type is the correlation among individuals within the same community at different time points. For a “time × condition” analysis of a pretest–posttest nested cross-sectional trial design, we show that statistical power considerations based upon a posttest-only design generally are not an adequate substitute for sample size calculations that incorporate both types of intracluster correlations. Estimation, reporting, and use of intracluster correlations are illustrated for several dichotomous measures related to underage drinking collected as part of a large nonrandomized trial to enforce underage drinking laws in the United States from 1998 to 2004. PMID:17879427
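One way both correlation types enter sample-size planning is through a design effect that inflates the unclustered sample size. The specific form below, for a time × condition contrast in a pretest-posttest nested cross-sectional design, is a commonly used approximation assumed here, not a formula quoted from the article:

```python
import math

def design_effect(m, icc_within, icc_between):
    """Approximate variance inflation for a time x condition contrast.

    m           : individuals sampled per community per time point
    icc_within  : correlation within a community at the same time point
    icc_between : correlation within a community across time points
    (Assumed common approximation; not quoted from the article.)
    """
    return 1 + (m - 1) * icc_within - m * icc_between

def inflated_n(n_srs, m, icc_within, icc_between):
    """Total n after applying the design effect to a simple random-sample size."""
    return math.ceil(n_srs * design_effect(m, icc_within, icc_between))

# Illustrative values: ignoring the cross-time correlation (icc_between = 0)
# overstates the required inflation, consistent with the article's point that
# posttest-only calculations are not an adequate substitute.
print(inflated_n(400, m=50, icc_within=0.02, icc_between=0.01))  # 592
print(inflated_n(400, m=50, icc_within=0.02, icc_between=0.0))   # 792
```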
Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models
Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin
2017-01-01
In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid in developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
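The step that distinguishes this setting from plain classification is constructing a binary label at each time point while respecting censoring: patients censored alive before the horizon carry no label for that model. A minimal sketch of that construction (assumed from the abstract's description; names and values are illustrative):

```python
def label_at_horizon(time_months, event, horizon_months):
    """Binary outcome at a fixed horizon, respecting right-censoring.

    Returns 1 if death was observed by the horizon, 0 if the patient is
    known to survive past it, and None if censored alive before the
    horizon (excluded from that time-point model).
    """
    if event and time_months <= horizon_months:
        return 1           # death observed within the horizon
    if time_months >= horizon_months:
        return 0           # event-free past the horizon
    return None            # censored too early: unusable at this horizon

# (follow-up months, death observed?) for four illustrative patients
patients = [(8, True), (30, False), (14, False), (60, True)]
for horizon in (12, 24, 36, 48, 60):
    labels = [label_at_horizon(t, e, horizon) for t, e in patients]
    print(horizon, labels)
```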
NASA Astrophysics Data System (ADS)
Brokešová, Johana; Málek, Jiří
2018-07-01
A new method for representing seismograms by using zero-crossing points is described. This method is based on decomposing a seismogram into a set of quasi-harmonic components and, subsequently, on determining the precise zero-crossing times of these components. An analogous approach can be applied to determine extreme points that represent the zero-crossings of the first time derivative of the quasi-harmonics. Such zero-crossing and/or extreme point seismogram representation can be used successfully to reconstruct single-station seismograms, but the main application is to small-aperture array data analysis to which standard methods cannot be applied. The precise times of the zero-crossing and/or extreme points make it possible to determine precise time differences across the array used to retrieve the parameters of a plane wave propagating across the array, namely, its backazimuth and apparent phase velocity along the Earth's surface. The applicability of this method is demonstrated using two synthetic examples. In the real-data example from the Příbram-Háje array in central Bohemia (Czech Republic) for the Mw 6.4 Crete earthquake of October 12, 2013, this method is used to determine the phase velocity dispersion of both Rayleigh and Love waves. The resulting phase velocities are compared with those obtained by employing the seismic plane-wave rotation-to-translation relations. In this approach, the phase velocity is calculated by obtaining the amplitude ratios between the rotation and translation components. Seismic rotations are derived from the array data, for which the small aperture is not only an advantage but also an applicability condition.
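The core numerical step is locating the zero-crossing times of each quasi-harmonic component to sub-sample precision. A sign-change scan with linear interpolation is one straightforward way to do this (a sketch, not the authors' implementation; band-pass decomposition into components is assumed to happen upstream):

```python
import numpy as np

def zero_crossing_times(t, x):
    """Times where x crosses zero, refined by linear interpolation
    between the two samples that bracket each sign change."""
    x = np.asarray(x, dtype=float)
    idx = np.where(np.sign(x[:-1]) * np.sign(x[1:]) < 0)[0]
    # zero at t0 + (t1 - t0) * x0 / (x0 - x1)
    return t[idx] + (t[idx + 1] - t[idx]) * x[idx] / (x[idx] - x[idx + 1])

t = np.linspace(0, 2, 2001)
x = np.sin(2 * np.pi * 3 * t + 0.4)       # one quasi-harmonic component
print(zero_crossing_times(t, x)[:3])
# Extreme points follow by applying the same scan to the first derivative,
# e.g. zero_crossing_times(t[:-1], np.diff(x)).
```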
A similarity hypothesis for the two-point correlation tensor in a temporally evolving plane wake
NASA Technical Reports Server (NTRS)
Ewing, D. W.; George, W. K.; Moser, R. D.; Rogers, M. M.
1995-01-01
The analysis demonstrated that the governing equations for the two-point velocity correlation tensor in the temporally evolving wake admit similarity solutions, which include the similarity solutions for the single-point moment as a special case. The resulting equations for the similarity solutions include two constants, beta and Re_sigma, that are ratios of three characteristic time scales of processes in the flow: a viscous time scale, a time scale characteristic of the spread rate of the flow, and a characteristic time scale of the mean strain rate. The values of these ratios depend on the initial conditions of the flow and are most likely measures of the coherent structures in the initial conditions. The occurrence of these constants in the governing equations for the similarity solutions indicates that these solutions, in general, will only be the same for two flows if the two constants are equal (and hence the coherent structures in the flows are related). The comparisons between the predictions of the similarity hypothesis and the data presented here and elsewhere indicate that the similarity solutions for the two-point correlation tensors provide a good approximation of the measures of those motions that are not significantly affected by the boundary conditions caused by the finite extent of real flows. Thus, the two-point similarity hypothesis provides a useful tool for both numerical and physical experimentalists that can be used to examine how the finite extent of real flows affects the evolution of the different scales of motion in the flow.
Zhao, Dong; Wang, Tian-long; Pan, Fang; Zhao, Lei; Zhang, Lian-feng; Yang, Ba-xian
2006-08-18
To investigate the changes in hemodynamics and oxygen metabolism of patients of different Child grades during orthotopic liver transplantation (OLT) without veno-venous bypass. Forty patients with end-stage liver disease undergoing non-veno-venous OLT under general anesthesia were enrolled in this research. A Swan-Ganz catheter was placed in the pulmonary artery via the right internal jugular vein, and the right radial artery was cannulated to monitor mean pulmonary artery pressure (mPAP) and arterial blood pressure (ABP) continuously. Pulmonary capillary wedge pressure (PCWP) and central venous pressure (CVP) were also recorded. Cardiac output (CO) was recorded at several time points: 30 min after induction (T1), when the inferior vena cava and portal vein were clamped (T2), 30 min after the portal vein was clamped (T3), 10 min after unclamping of the portal vein (T4), 60 min after graft reperfusion (T5), and at the end of the operation (T6). Blood samples were taken from the radial and pulmonary arteries for blood gas analysis, and hemodynamic parameters, such as cardiac index (CI), stroke volume index (SVI), pulmonary vascular resistance index (PVRI), and systemic vascular resistance index (SVRI), as well as oxygen delivery (DO2) and oxygen consumption (VO2), were calculated at these time points. (1) The mPAP values were much higher in group C than in group A or B at all time points. CVP was significantly increased at T1 and T2 in group C as compared with those points in Child's A or B. PCWP was increased significantly after unclamping of the portal vein in all three groups and was much higher at several points in Child's C than in Child's A or B. The SVRI value at T1 and the PVRI value at T3 were much lower in group C than at those points in group A, and the SVRI/PVRI value was less than normal except at the T3 point. Blood gas analysis showed that PaO2 was higher than 400 mm Hg at all points. (2) Oxygen consumption was significantly decreased during the operation due to reduced blood supply and returned to normal by the end of the operation in all patients. Oxygen delivery was at least 1,000 mL/min throughout OLT, and there was no significant difference between groups or time points. The hemodynamic state of high cardiac output with low peripheral resistance deteriorated as patients' Child grade shifted from A to C. VO2 was lower than the normal value during OLT until the end point.
Understanding cracking failures of coatings: A fracture mechanics approach
NASA Astrophysics Data System (ADS)
Kim, Sung-Ryong
A fracture mechanics analysis of coating (paint) cracking was developed. A strain energy release rate (G_c) expression due to the formation of a new crack in a coating was derived for bending and tension loadings in terms of the moduli, thicknesses, Poisson's ratios, load, residual strain, etc. Four-point bending and instrumented impact tests were used to determine the in-situ fracture toughness of coatings as functions of increasing baking (drying) time. The system used was a thin coating layer on a thick substrate layer. The substrates included steel, aluminum, polycarbonate, acrylonitrile-butadiene-styrene (ABS), and Noryl. The coatings included newly developed automotive paints. The four-point bending configuration promoted well-defined transverse multiple coating cracks on both steel and polymeric substrates. The crosslinked-type automotive coatings on steel substrates showed large cracks without microcracks. When theoretical predictions for energy release rate were compared to experimental data for coating/steel substrate samples with multiple cracking, the agreement was good. Crosslinked-type coatings on polymeric substrates showed more cracks than theory predicted, and the G_c values were high. Solvent evaporation type coatings on polymeric substrates showed clean multiple cracking, and the G_c values were higher than those obtained by analysis of tension experiments with the same substrates. All the polymeric samples showed surface embrittlement after long baking times in four-point bending tests. The most apparent surface embrittlement was observed in the acrylonitrile-butadiene-styrene (ABS) substrate system. The impact properties of coatings as a function of baking time were also investigated. These experiments were performed using an instrumented impact tester. There was a rapid decrease in G_c at short baking times and convergence to a constant value at long baking times. The surface embrittlement conditions and an embrittlement toughness were found upon impact loading. This analysis provides a basis for a quantitative approach to measuring coating toughness.
A novel mesh processing based technique for 3D plant analysis
2012-01-01
Background In recent years, imaging-based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to comparative simplicity. However, 3D mesh analysis provides tremendous potential to accurately estimate specific morphological features cross-sectionally and monitor them over time. Results In this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and tracking of plant organs over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error on the morphological parameters estimated. Conclusion By directly comparing our automated mesh based quantitative data with manual measurements of individual stem height, leaf width, and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of the cases, and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features. PMID:22553969
Lunar Surface Architecture Utilization and Logistics Support Assessment
NASA Astrophysics Data System (ADS)
Bienhoff, Dallas; Findiesen, William; Bayer, Martin; Born, Andrew; McCormick, David
2008-01-01
Crew and equipment utilization and logistics support needs for the point of departure lunar outpost as presented by the NASA Lunar Architecture Team (LAT) and alternative surface architectures were assessed for the first ten years of operation. The lunar surface architectures were evaluated and manifests created for each mission. Distances between Lunar Surface Access Module (LSAM) landing sites and emplacement locations were estimated. Physical characteristics were assigned to each surface element and operational characteristics were assigned to each surface mobility element. Stochastic analysis was conducted to assess probable times to deploy surface elements, conduct exploration excursions, and perform defined crew activities. Crew time is divided into Outpost-related, exploration and science, overhead, and personal activities. Outpost-related time includes element deployment, EVA maintenance, IVA maintenance, and logistics resupply. Exploration and science activities include mapping, geological surveys, science experiment deployment, sample analysis and categorizing, and physiological and biological tests in the lunar environment. Personal activities include sleeping, eating, hygiene, exercising, and time off. Overhead activities include precursor or close-out tasks that must be accomplished but don't fit into the other three categories such as: suit donning and doffing, airlock cycle time, suit cleaning, suit maintenance, post-landing safing actions, and pre-departure preparations. Equipment usage time, spares, maintenance actions, and Outpost consumables are also estimated to provide input into logistics support planning. Results are normalized relative to the NASA LAT point of departure lunar surface architecture.
"Carbon Credits" for Resource-Bounded Computations Using Amortised Analysis
NASA Astrophysics Data System (ADS)
Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin
Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.
Reverse engineering gene regulatory networks from measurement with missing values.
Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong
2016-12-01
Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may sometimes contain missing values: either the expression values of some genes at some time points, or the entire expression values of a single time point or of some sets of consecutive time points. This significantly affects the performance of many algorithms for gene expression analysis that take as input the complete matrix of gene expression measurements. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule, and the central difference rule, for computing the related posteriors. The proposed algorithm is evaluated with satisfying results for synthetic networks, in silico networks released as part of the DREAM project, and the real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae. PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we propose a measurement model that incorporates the effect of the missing data points into the sequential algorithm. This approach produces a better inference of the model parameters and hence a more accurate prediction of the underlying GRN compared with using the conventional Gaussian approximation (GA) filters while ignoring the missing data points.
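Of the quadrature rules named above, the unscented transform is the most compact to illustrate: it propagates a small deterministic set of sigma points through the nonlinearity and recombines them into an approximate posterior mean and covariance. A generic sketch with standard UT weights (not the paper's PBGA code; the example nonlinearity is an illustrative assumption):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate N(mean, cov) through nonlinearity f via 2n+1 sigma points."""
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])   # (2n+1, n) points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))     # standard UT weights
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Example: a Hill-type activation, a common nonlinearity in GRN models.
f = lambda x: x**2 / (1.0 + x**2)
m, c = unscented_transform(np.array([0.5, 1.0]), 0.04 * np.eye(2), f)
print(m, c)
```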
NASA Astrophysics Data System (ADS)
Tiwari, Pallavi; Danish, Shabbar; Wong, Stephen; Madabhushi, Anant
2013-03-01
Laser-induced interstitial thermal therapy (LITT) has recently emerged as a new, less invasive alternative to craniotomy for treating epilepsy, which allows for focused delivery of laser energy, monitored in real time by MRI, for precise removal of the epileptogenic foci. Despite being minimally invasive, the effects of laser ablation on the epileptogenic foci (reflected by changes in MR imaging markers post-LITT) are currently unknown. In this work, we present a quantitative framework for evaluating LITT-related changes by quantifying per-voxel changes in MR imaging markers, which may be more reflective of local treatment related changes (TRC) that occur post-LITT, as compared to the standard volumetric analysis, which involves monitoring a more global volume change across pre- and post-LITT MRI. Our framework focuses on three objectives: (a) development of temporal MRI signatures that characterize TRC corresponding to patients with seizure freedom, by comparing differences in MR imaging markers and monitoring them over time, (b) identification of the optimal time point when early LITT-induced effects (such as edema and mass effect) subside, by monitoring TRC at subsequent time-points post-LITT, and (c) identification of the contributions of individual MRI protocols towards characterizing LITT TRC for epilepsy, by identifying MR markers that change most dramatically over time and employing individual contributions to create a weighted MP-MRI temporal profile that can better characterize TRC than any individual imaging marker. A cohort of patients was monitored at different time points post-LITT via MP-MRI involving T1-w, T2-w, T2-GRE, T2-FLAIR, and apparent diffusion coefficient (ADC) protocols. After affine registration of individual MRI protocols to a reference MRI protocol pre-LITT, differences in individual MR markers are computed on a per-voxel basis at different time-points, with respect to baseline (pre-LITT) MRI as well as across subsequent time-points. A time-dependent MRI profile corresponding to successful treatment (seizure freedom) is then created that captures changes in individual MR imaging markers over time. Our preliminary analysis on two patient studies suggests that (a) LITT-related changes (attributed to swelling and edema) appear to subside within 4 weeks post-LITT, and (b) ADC may be more sensitive for evaluating early TRC (up to 3 months), and T1-w may be more sensitive in evaluating early delayed TRC (1 month, 3 months), while T2-w and T2-FLAIR appeared to be more sensitive in identifying late TRC (around 6 months post-LITT) compared to the other MRI protocols under evaluation. T2-GRE was found to be only nominally sensitive in identifying TRC at any follow-up time-point post-LITT. The framework presented in this work thus serves as an important precursor to a comprehensive treatment evaluation framework that can be used to identify sensitive MR markers corresponding to patient response (seizure freedom or seizure recurrence), with the ultimate objective of making prognostic predictions about patient outcome post-LITT.
Access to Mars from Earth-Moon Libration Point Orbits: Manifold and Direct Options
NASA Technical Reports Server (NTRS)
Kakoi, Masaki; Howell, Kathleen C.; Folta, David
2014-01-01
This investigation is focused specifically on transfers from Earth-Moon L1/L2 libration point orbits to Mars. Initially, the analysis is based in the circular restricted three-body problem to utilize the framework of the invariant manifolds. Various departure scenarios are compared, including arcs that leverage manifolds associated with the Sun-Earth L2 orbits as well as non-manifold trajectories. For the manifold options, ballistic transfers from Earth-Moon L2 libration point orbits to Sun-Earth L1/L2 halo orbits are first computed. This autonomous procedure applies to both departure and arrival between the Earth-Moon and Sun-Earth systems. Departure times in the lunar cycle, amplitudes and types of libration point orbits, manifold selection, and the orientation/location of the surface of section all contribute to produce a variety of options. As the destination planet, the ephemeris position for Mars is employed throughout the analysis. The complete transfer is transitioned to the ephemeris model after the initial design phase. Results for multiple departure/arrival scenarios are compared.
Analysis of the statistical properties of pulses in atmospheric corona discharge
NASA Astrophysics Data System (ADS)
Aubrecht, L.; Koller, J.; Plocek, J.; Stanék, Z.
2000-03-01
The properties of the negative corona current pulses in a single point-to-plane configuration have been extensively studied by many investigators. The amplitude and the interval of these pulses are not generally constant and depend on many variables. The repetition rate and the amplitude of the pulses fluctuate in time. Since these fluctuations are subject to a certain probability distribution, statistical processing was used for the analysis of the pulse fluctuations. The behavior of the pulses has also been investigated in a multipoint geometry configuration. The dependence of the behavior of the corona pulses on the gap length, the material, the shape of the point electrode, and the number and separation of electrodes (in the multiple-point mode) has also been investigated. No detailed study had been carried out for this case until now. Attention has also been devoted to the study of pulses on the points of living material (needles of coniferous trees). This contribution describes recent studies of the statistical properties of the pulses under various conditions.
A noise thermometry investigation of the melting point of gallium at the NIM
NASA Astrophysics Data System (ADS)
Zhang, J. T.; Xue, S.
2006-06-01
This paper describes a study of the melting point of gallium with the new NIM Johnson noise thermometer (JNT). The new thermometer adopts the structure of a switching correlator and commutator, with the reference resistor maintained at the triple point of water. The electronic system of the new thermometer is basically the same as that of the current JNT, but the preamplifiers have been improved slightly. This study demonstrates that examining the characteristics of the noise signals in the frequency domain is of critical importance in constructing an improved thermometer; in particular, power spectral analysis proved essential in establishing appropriate grounding for the new instrument. The new JNT is tested on measurements of the thermodynamic temperature of the melting point of gallium, which give a thermodynamic temperature of 302.9160 K, with an overall integration time of 190 h and a combined standard uncertainty of 9.4 mK. The uncertainty analysis indicates that a combined standard uncertainty of 3 mK could be achieved with the new thermometer over an integration period of 1750 h.
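In a switching-correlator JNT, the unknown temperature follows from the Johnson-Nyquist relation <V^2> = 4*k_B*T*R*df by ratioing the measured noise power against that of the reference resistor held at the triple point of water. A sketch of that ratio arithmetic (illustrative numbers, not NIM's data):

```python
K_TPW = 273.16  # triple point of water, K

def noise_temperature(p_ratio, r_sense, r_ref, t_ref=K_TPW):
    """Temperature from the ratio of measured noise powers.

    Johnson-Nyquist: <V^2> = 4*k_B*T*R*df, so over the same bandwidth
    T_sense = T_ref * (P_sense / P_ref) * (R_ref / R_sense).
    """
    return t_ref * p_ratio * (r_ref / r_sense)

# Illustrative: equal resistances and a power ratio near the Ga melting point.
print(noise_temperature(p_ratio=302.9160 / 273.16, r_sense=100.0, r_ref=100.0))
```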
Multiscale analysis of heart rate dynamics: entropy and time irreversibility measures.
Costa, Madalena D; Peng, Chung-Kang; Goldberger, Ary L
2008-06-01
Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and non-equilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques, is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools--multiscale entropy and multiscale time irreversibility--are able to extract information from cardiac interbeat interval time series not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs.
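Multiscale entropy, one of the two tools above, coarse-grains the interbeat-interval series at each scale (averaging non-overlapping windows) and computes sample entropy of the result. A compact sketch using the standard definitions (not the authors' code; m = 2 and r = 0.15 SD are conventional parameter choices):

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return np.asarray(x[: n * scale]).reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r_frac=0.15):
    """SampEn(m, r): -log of the conditional probability that sequences
    matching for m points (within r, Chebyshev) also match for m + 1."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return ((d <= r).sum() - len(templ)) / 2   # unordered pairs, no self-matches
    return -np.log(count(m + 1) / count(m))

rr = np.random.default_rng(1).normal(0.8, 0.05, 1000)   # synthetic RR series (s)
print([round(sample_entropy(coarse_grain(rr, s)), 2) for s in (1, 2, 4)])
```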
Thombare, Ram
2013-01-01
PURPOSE The purpose of this study was to determine the most appropriate point on the tragus to use as a reference when marking the ala-tragus line while establishing the occlusal plane. MATERIALS AND METHODS Data were collected in two groups of subjects, dentulous and edentulous, with a sample size of 30 per group and equal gender distribution (15 males, 15 females each). Downs analysis was used for the base value. Lateral cephalographs were taken for all selected subjects. Three points were marked on the tragus, superior (S), middle (M), and inferior (I), and each was joined with the ala (A) of the nose to form an ala-tragus line. The angle formed by each line (SA plane, MA plane, IA plane) with the Frankfort Horizontal (FH) plane was measured using a custom-made device and a modified protractor in all dentulous and edentulous subjects. In dentulous subjects, the angle between the FH plane and the natural occlusal plane was also measured. The measurements obtained were subjected to the following statistical tests: descriptive analysis, Student's unpaired t-test, and Pearson's correlation coefficient. RESULTS The mean angle COO (cant of occlusal plane) was 9.76°. The inferior point on the tragus gave mean angular values of IFH (the angle between the IA plane, formed by joining the inferior point I on the tragus and the ala A of the nose, and the FH plane) of 10.40° and 10.56° in dentulous and edentulous subjects, respectively, which were the closest values to the angle COO and comparable with the COO value in Downs analysis. The angulation of the ala-tragus line marked from the inferior point relative to the occlusal plane in dentulous subjects gave the smallest value, 2.46°, showing that this ala-tragus line was nearly parallel to the occlusal plane. CONCLUSION The inferior point marked on the tragus is the most appropriate point for marking the ala-tragus line. PMID:23508068
Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.
2014-01-01
Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point-within-transect and park-level effect. Our results suggest that this model can provide insight into the detection process during avian surveys and reduce bias in estimates of relative abundance but is best applied to surveys of species with greater availability (e.g., breeding songbirds).
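The two detection components combine multiplicatively. A sketch of that decomposition under standard modeling choices, a constant per-interval cue probability for availability (the time-removal idea) and a half-normal distance function for perceptibility; both forms and all numbers are assumptions for illustration, not the authors' exact parameterization:

```python
import math

def availability(p_interval, n_intervals):
    """P(animal gives a detectable cue at least once in J equal intervals),
    assuming a constant per-interval cue probability (time-removal idea)."""
    return 1.0 - (1.0 - p_interval) ** n_intervals

def perceptibility(sigma, radius):
    """Average half-normal detection probability within a point-count circle,
    assuming uniform animal density: integral of exp(-d^2/(2 sigma^2)) * 2d/R^2."""
    return (2 * sigma**2 / radius**2) * (1 - math.exp(-radius**2 / (2 * sigma**2)))

# Overall detection probability; abundance estimates scale counts by 1/p_det.
p_det = availability(p_interval=0.4, n_intervals=3) * perceptibility(sigma=60.0, radius=100.0)
print(p_det)
```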
Smoke-Point Properties of Nonbuoyant Round Laminar Jet Diffusion Flames
NASA Technical Reports Server (NTRS)
Urban, D. L.; Yuan, Z.-G.; Sunderland, R. B.; Lin, K.-C.; Dai, Z.; Faeth, G. M.
2000-01-01
The laminar smoke-point properties of nonbuoyant round laminar jet diffusion flames were studied, emphasizing results from long-duration (100-230 s) experiments at microgravity carried out on orbit in the Space Shuttle Columbia. Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K, initial jet exit diameters of 1.6 and 2.7 mm, jet exit velocities of 170-1630 mm/s, jet exit Reynolds numbers of 46-172, characteristic flame residence times of 40-302 ms, and luminous flame lengths of 15-63 mm. The onset of laminar smoke-point conditions involved two flame configurations: closed-tip flames with first soot emissions along the flame axis and open-tip flames with first soot emissions from an annular ring about the flame axis. Open-tip flames were observed at large characteristic flame residence times with the onset of soot emissions associated with radiative quenching near the flame tip; nevertheless, unified correlations of laminar smoke-point properties were obtained that included both flame configurations. Flame lengths at laminar smoke-point conditions were well-correlated in terms of a corrected fuel flow rate suggested by a simplified analysis of flame shape. The present steady and nonbuoyant flames emitted soot more readily than earlier tests of nonbuoyant flames at microgravity using ground-based facilities and of buoyant flames at normal gravity due to reduced effects of unsteadiness, flame disturbances and buoyant motion. For example, laminar smoke-point flame lengths from ground-based microgravity measurements were up to 2.3 times longer and from buoyant flame measurements were up to 6.4 times longer than the present measurements at comparable conditions. Finally, present laminar smoke-point flame lengths were roughly inversely proportional to pressure, which is a somewhat slower variation than observed during earlier tests both at microgravity using ground-based facilities and at normal gravity.
Smoke-Point Properties of Nonbuoyant Round Laminar Jet Diffusion Flames. Appendix B
NASA Technical Reports Server (NTRS)
Urban, D. L.; Yuan, Z.-G.; Sunderland, P. B.; Lin, K.-C.; Dai, Z.; Faeth, G. M.; Ross, H. D. (Technical Monitor)
2000-01-01
The laminar smoke-point properties of non-buoyant round laminar jet diffusion flames were studied emphasizing results from long-duration (100-230 s) experiments at microgravity carried out in orbit aboard the space shuttle Columbia. Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K, pressures of 35-130 kPa, jet exit diameters of 1.6 and 2.7 mm, jet exit velocities of 170-690 mm/s, jet exit Reynolds numbers of 46-172, characteristic flame residence times of 40-302 ms, and luminous flame lengths of 15-63 mm. Contrary to the normal-gravity laminar smoke point, in microgravity the onset of laminar smoke-point conditions involved two flame configurations: closed-tip flames with soot emissions along the flame axis and open-tip flames with soot emissions from an annular ring about the flame axis. Open-tip flames were observed at large characteristic flame residence times with the onset of soot emissions associated with radiative quenching near the flame tip; nevertheless, unified correlations of laminar smoke-point properties were obtained that included both flame configurations. Flame lengths at laminar smoke-point conditions were well correlated in terms of a corrected fuel flow rate suggested by a simplified analysis of flame shape. The present steady and nonbuoyant flames emitted soot more readily than non-buoyant flames in earlier tests using ground-based microgravity facilities and than buoyant flames at normal gravity, as a result of reduced effects of unsteadiness, flame disturbances, and buoyant motion. For example, present measurements of laminar smoke-point flame lengths at comparable conditions were up to 2.3 times shorter than ground-based microgravity measurements and up to 6.4 times shorter than buoyant flame measurements. Finally, present laminar smoke-point flame lengths were roughly inversely proportional to pressure, to a degree somewhat smaller than observed during earlier tests both at microgravity (using ground-based facilities) and at normal gravity.
Lightning Simulation and Design Program (LSDP)
NASA Astrophysics Data System (ADS)
Smith, D. A.
This computer program simulates a user-defined lighting configuration. It has been developed as a tool to aid in the design of exterior lighting systems. Although this program is used primarily for perimeter security lighting design, it has potential use for any application where the light source can be approximated by a point source. A database of luminaire photometric information is maintained for use with this program. The user defines the surface area to be illuminated with a rectangular grid and specifies luminaire positions. Illumination values are calculated for regularly spaced points in that area and isolux contour plots are generated. The numerical and graphical output for a particular site model are then available for analysis. The amount of time spent on point-to-point illumination computation with this program is much less than that required for tedious hand calculations. The ease with which various parameters can be interactively modified with the program also reduces the time and labor expended. Consequently, the feasibility of design ideas can be examined, modified, and retested more thoroughly, and overall design costs can be substantially lessened by using this program as an adjunct to the design process.
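Approximating each luminaire as a point source makes the illuminance at a ground point a sum of inverse-square, cosine-weighted contributions, E = I*cos(theta)/d^2, over all luminaires. A sketch of the grid computation (hypothetical luminaire data; the program's photometric database would supply real intensities):

```python
import numpy as np

def illuminance_grid(xs, ys, luminaires):
    """Horizontal illuminance (lux) on a ground grid from point sources.

    luminaires: list of (x, y, mount_height_m, intensity_cd); for a
    horizontal surface the incidence cosine is h/d, so E = I*h/d^3.
    """
    gx, gy = np.meshgrid(xs, ys)
    e = np.zeros_like(gx, dtype=float)
    for lx, ly, h, intensity in luminaires:
        d2 = (gx - lx) ** 2 + (gy - ly) ** 2 + h**2
        e += intensity * h / d2**1.5          # I * cos(theta) / d^2
    return e

# Two hypothetical 8-m poles along a 50 m x 20 m perimeter strip.
grid = illuminance_grid(np.linspace(0, 50, 51), np.linspace(0, 20, 21),
                        [(10, 10, 8, 20000), (40, 10, 8, 20000)])
print(grid.max(), grid.min())  # isolux contours would be drawn from `grid`
```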
Computer-assisted 3D kinematic analysis of all leg joints in walking insects.
Bender, John A; Simpson, Elaine M; Ritzmann, Roy E
2010-10-26
High-speed video can provide fine-scaled analysis of animal behavior. However, extracting behavioral data from video sequences is a time-consuming, tedious, subjective task. These issues are exacerbated where accurate behavioral descriptions require analysis of multiple points in three dimensions. We describe a new computer program written to assist a user in simultaneously extracting three-dimensional kinematics of multiple points on each of an insect's six legs. Digital video of a walking cockroach was collected in grayscale at 500 fps from two synchronized, calibrated cameras. We improved the legs' visibility by painting white dots on the joints, similar to techniques used for digitizing human motion. Compared to manual digitization of 26 points on the legs over a single, 8-second bout of walking (or 106,496 individual 3D points), our software achieved approximately 90% of the accuracy with 10% of the labor. Our experimental design reduced the complexity of the tracking problem by tethering the insect and allowing it to walk in place on a lightly oiled glass surface, but in principle, the algorithms implemented are extensible to free walking. Our software is free and open-source, written in the free language Python and including a graphical user interface for configuration and control. We encourage collaborative enhancements to make this tool both better and widely utilized.
Palenzuela, D O; Benítez, J; Rivero, J; Serrano, R; Ganzó, O
1997-10-13
In the present work a concept proposed in 1992 by Dopotka and Giesendorf was applied to the quantitative analysis of antibodies to the p24 protein of HIV-1 in infected asymptomatic individuals and AIDS patients. Two approaches were analyzed: a linear model, OD = b0 + b1·log(titer), and a nonlinear model, log(titer) = α·OD^β, similar to the Dopotka-Giesendorf model. Both proposed models adequately fit the dependence between the optical density values at a single-point dilution and the titers obtained by the end-point dilution method (EPDM). Nevertheless, the nonlinear model better fits the experimental data, according to residuals analysis. Classical EPDM was compared with the new single-point dilution method (SPDM) using both models. The best correlation between calculated titers and titers obtained by EPDM was achieved with the nonlinear model. The correlation coefficients for the nonlinear and linear models were r = 0.85 and r = 0.77, respectively. A new correction factor was introduced into the nonlinear model, which reduced the day-to-day variation of titer values. In general, SPDM saves time and reagents, is more precise and sensitive to changes in antibody levels, and therefore has a higher resolution than EPDM.
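As an illustration of how the two model forms can be fitted, the sketch below uses SciPy on invented OD/titer pairs; the data, initial guesses, and the exact power-law form are assumptions for demonstration, not the study's data:

```python
# Sketch: fit the linear and nonlinear single-point-dilution models and
# compare how well each predicts end-point-dilution titers. Toy data only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

od = np.array([0.35, 0.61, 0.92, 1.30, 1.75])                   # OD at one dilution
log_titer = np.log10(np.array([200, 800, 3200, 12800, 51200]))  # EPDM titers

# linear model: OD = b0 + b1 * log(titer)
b1, b0 = np.polyfit(log_titer, od, 1)
pred_lin = (od - b0) / b1

# nonlinear model: log(titer) = alpha * OD**beta
(alpha, beta), _ = curve_fit(lambda x, a, b: a * x ** b, od, log_titer, p0=(3.0, 0.5))
pred_nl = alpha * od ** beta

print("linear r:   ", pearsonr(pred_lin, log_titer)[0])
print("nonlinear r:", pearsonr(pred_nl, log_titer)[0])
```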
NASA Astrophysics Data System (ADS)
Javadi, Maryam; Shahrabi, Jamal
2014-03-01
The problems of facility location and the allocation of demand points to facilities are crucial research issues in spatial data analysis and urban planning. It is very important for an organization or government to locate its resources and facilities optimally and to manage them efficiently, so that all demand points are covered and all needs are met. Most recent studies, which focused on solving facility location problems by performing spatial clustering, have used the Euclidean distance between two points as the dissimilarity function. Natural obstacles, such as mountains and rivers, can have drastic impacts on the distance that needs to be traveled between two geographical locations. While calculating the distance between various supply chain entities (including facilities and demand points), it is necessary to take such obstacles into account to obtain better and more realistic results regarding location-allocation. In this article, new models are presented for locating urban facilities while taking geographical obstacles into account. In these models, three new distance functions are proposed. The first function is based on shortest-path analysis in a linear network and is called the SPD function. The other two functions, namely PD and P2D, are based on algorithms dealing with robot geometry and route-based robot navigation in the presence of obstacles. The models were implemented in ArcGIS Desktop 9.2 software using the Visual Basic programming language and were evaluated using synthetic and real data sets. The overall performance was evaluated based on the sum of distances from demand points to their corresponding facilities. Because the distances between demand points and facilities become more realistic under the proposed functions, the results indicate the desired quality of the proposed models in terms of allocating points to centers and of logistic cost. The obtained results show promising improvements in allocation, logistics costs and response time. It can also be inferred from this study that the P2D-based model and the SPD-based model yield similar results in terms of facility location and demand allocation, while the P2D-based model showed better execution time than the SPD-based model. Considering logistic costs, facility location and response time, the P2D-based model is an appropriate choice for the urban facility location problem with geographical obstacles.
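A hedged sketch of the SPD idea follows: allocation by network shortest-path distance instead of Euclidean distance. The graph, node names and edge weights below are invented for illustration and do not come from the article:

```python
# Toy allocation of demand nodes to facilities by shortest-path distance
# over a road network (the SPD concept), using networkx.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("d1", "a", 2.0), ("a", "f1", 1.5),
                           ("d1", "b", 5.0), ("b", "f2", 1.0),
                           ("d2", "b", 2.5), ("a", "b", 4.0)])

facilities, demands = ["f1", "f2"], ["d1", "d2"]

total_cost = 0.0
for d in demands:
    dist, best = min((nx.shortest_path_length(G, d, f, weight="weight"), f)
                     for f in facilities)
    total_cost += dist
    print(d, "->", best, f"(SPD = {dist})")
print("total logistic cost:", total_cost)
```

With Euclidean distance, `d1` might be assigned across a river to a nearer-looking facility; the network distance avoids that.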
The ALICE analysis train system
NASA Astrophysics Data System (ADS)
Zimmermann, Markus; ALICE Collaboration
2015-05-01
In the ALICE experiment hundreds of users are analyzing big datasets on a Grid system. High throughput and short turn-around times are achieved by a centralized system called the LEGO trains. This system combines analysis from different users in so-called analysis trains which are then executed within the same Grid jobs thereby reducing the number of times the data needs to be read from the storage systems. The centralized trains improve the performance, the usability for users and the bookkeeping in comparison to single user analysis. The train system builds upon the already existing ALICE tools, i.e. the analysis framework as well as the Grid submission and monitoring infrastructure. The entry point to the train system is a web interface which is used to configure the analysis and the desired datasets as well as to test and submit the train. Several measures have been implemented to reduce the time a train needs to finish and to increase the CPU efficiency.
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient-based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
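The following is a minimal sketch of the asynchronous idea, not the paper's implementation: each particle is updated and resubmitted as soon as its own evaluation returns, with no per-iteration barrier. The objective function and all constants are illustrative assumptions:

```python
# Asynchronous parallel PSO sketch: no barrier between "iterations".
import numpy as np
from concurrent.futures import ProcessPoolExecutor, wait, FIRST_COMPLETED

def objective(x):                          # stand-in for an expensive analysis
    return float(np.sum(x ** 2))

def run(n_particles=8, dim=4, n_evals=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.full(n_particles, np.inf)
    gbest, gbest_f = x[0].copy(), np.inf
    with ProcessPoolExecutor() as pool:
        pending = {pool.submit(objective, x[i]): i for i in range(n_particles)}
        evals = n_particles
        while pending:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                i = pending.pop(fut)
                f = fut.result()
                if f < pbest_f[i]: pbest_f[i], pbest[i] = f, x[i].copy()
                if f < gbest_f:    gbest_f, gbest = f, x[i].copy()
                if evals < n_evals:  # update with the freshest gbest, resubmit
                    r1, r2 = rng.random(dim), rng.random(dim)
                    v[i] = (w * v[i] + c1 * r1 * (pbest[i] - x[i])
                            + c2 * r2 * (gbest - x[i]))
                    x[i] = x[i] + v[i]
                    pending[pool.submit(objective, x[i])] = i
                    evals += 1
    return gbest, gbest_f

if __name__ == "__main__":                 # guard needed for process pools
    print(run()[1])
```

Slow evaluations thus never stall fast ones, which is exactly where the synchronous variant loses speedup on heterogeneous clusters.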
Cabrieto, Jedelyn; Tuerlinckx, Francis; Kuppens, Peter; Hunyadi, Borbála; Ceulemans, Eva
2018-01-15
Detecting abrupt correlation changes in multivariate time series is crucial in many application fields such as signal processing, functional neuroimaging, climate studies, and financial analysis. To detect such changes, several promising correlation change tests exist, but they may suffer from severe loss of power when there is actually more than one change point underlying the data. To deal with this drawback, we propose a permutation-based significance test for Kernel Change Point (KCP) detection on the running correlations. Given a requested number of change points K, KCP divides the time series into K + 1 phases by minimizing the within-phase variance. The new permutation test looks at how the average within-phase variance decreases when K increases and compares this to the results for permuted data. The results of an extensive simulation study and applications to several real data sets show that, depending on the setting, the new test performs at par with or better than the state-of-the-art significance tests for detecting the presence of correlation changes, implying that its use can be generally recommended.
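To make the variance-drop logic concrete, here is a deliberately simplified sketch for K = 1 on a single running-correlation series; the full KCP method uses kernels, handles arbitrary K, and operates on multivariate running correlations. Everything below is a toy illustration:

```python
# Simplified variance-drop permutation test for one change point (K = 1).
import numpy as np

def within_phase_variance(r, split):
    return sum(seg.var() * len(seg) for seg in (r[:split], r[split:])) / len(r)

def variance_drop(r):
    v0 = r.var()                                        # K = 0: a single phase
    v1 = min(within_phase_variance(r, s) for s in range(5, len(r) - 5))
    return v0 - v1

rng = np.random.default_rng(1)
r = np.concatenate([rng.normal(0.1, 0.05, 60),          # running correlations
                    rng.normal(0.6, 0.05, 60)])         # with a shift mid-way

observed = variance_drop(r)
perm = [variance_drop(rng.permutation(r)) for _ in range(999)]
p_value = (1 + sum(d >= observed for d in perm)) / 1000
print(f"variance drop = {observed:.4f}, p = {p_value:.3f}")
```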
Dynamics of f(R) gravity models and asymmetry of time
NASA Astrophysics Data System (ADS)
Verma, Murli Manohar; Yadav, Bal Krishna
We solve the field equations of modified gravity for an f(R) model in the metric formalism. Further, we obtain the fixed points of the dynamical system in a phase-space analysis of f(R) models, both with and without the effects of radiation. The stability of these points is studied against perturbations in a smooth spatial background by applying conditions on the eigenvalues of the matrix obtained in the linearized first-order differential equations. Following this, these fixed points are used for analyzing the dynamics of the system during the radiation-, matter- and acceleration-dominated phases of the universe. Certain linear and quadratic forms of f(R) are determined from geometrical and physical considerations, and the behavior of the scale factor is found for those forms. Further, we also determine the Hubble parameter H(t), the Ricci scalar R and the scale factor a(t) for these cosmic phases. We show the emergence of an asymmetry of time from the dynamics of the scalar field, owing exclusively to the f(R) gravity in the Einstein frame, that may lead to an arrow of time at a classical level.
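For orientation, the metric-formalism field equations referred to above take the standard textbook form (stated here for context; the paper's specific linear and quadratic choices of f(R) are not reproduced):

```latex
f'(R)\,R_{\mu\nu} - \tfrac{1}{2} f(R)\, g_{\mu\nu}
  + \left( g_{\mu\nu}\,\Box - \nabla_\mu \nabla_\nu \right) f'(R)
  = 8\pi G\, T_{\mu\nu}
```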
Constructing Hopf bifurcation lines for the stability of nonlinear systems with two time delays
NASA Astrophysics Data System (ADS)
Nguimdo, Romain Modeste
2018-03-01
Although a plethora of real-life systems are modeled by nonlinear systems with two independent time delays, the algebraic expressions for determining the stability of their fixed points remain the Achilles' heel. Typically, the approach for studying the stability of delay systems consists in finding the bifurcation lines separating the stable and unstable parameter regions. This work deals with the parametric construction of algebraic expressions and their use for the determination of the stability boundaries of fixed points in nonlinear systems with two independent time delays. In particular, we concentrate on cases for which the stability of the fixed points can be ascertained from a characteristic equation corresponding to that of scalar two-delay differential equations, one-component dual-delay feedback, or nonscalar differential equations with two delays for which the characteristic equation for the stability analysis can be reduced to that of a scalar case. Then, we apply the obtained algebraic expressions to identify either the parameter regions of stable microwaves generated by dual-delay optoelectronic oscillators or the regions of amplitude death in identical coupled oscillators.
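As a concrete instance of the class of characteristic equations discussed (a textbook form with placeholder coefficients, not the paper's specific system): for a scalar equation with two delays, Hopf bifurcation lines follow from substituting λ = iω and separating real and imaginary parts,

```latex
\lambda + a + b\,e^{-\lambda\tau_1} + c\,e^{-\lambda\tau_2} = 0,
\qquad \lambda = i\omega \;\Rightarrow\;
\begin{cases}
a + b\cos(\omega\tau_1) + c\cos(\omega\tau_2) = 0,\\[2pt]
\omega - b\sin(\omega\tau_1) - c\sin(\omega\tau_2) = 0,
\end{cases}
```

which can then be solved parametrically in ω to trace the stability boundaries in the (τ₁, τ₂) plane.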
Sastre I Riba, Sylvia; Fonseca-Pedrero, Eduardo; Santarén-Rosell, Marta; Urraca-Martínez, María Luz
2015-01-01
The objective of this study was to evaluate satisfaction with an extracurricular enrichment program for the cognitive and personal management of participants with high intellectual ability. At the first time point, the sample consisted of n = 38 participants and n = 20 parents; n = 48 participants at the second time point; and n = 60 participants at the third time point. The Satisfaction Questionnaire (CSA in Spanish), both for students (CSA-S) and for parents (CSA-P), was constructed. The CSA-S scores showed adequate psychometric properties. Exploratory factor analysis yielded a unidimensional structure. Cronbach's alpha ranged between .85 and .86. Test-retest reliability was 0.45 (p < .05). The generalizability coefficient was .98. A high percentage of the sample was satisfied with the program, perceived improvements in cognitive and emotional management, motivation and interest in learning, and in the frequency and quality of their interpersonal relationships. The evaluation of educational programs is necessary in order to determine the efficacy and the effects of their implementation on the participants' personal and intellectual management.
Dziarmaga, Jacek; Zurek, Wojciech H.
2014-01-01
The Kibble-Zurek mechanism (KZM) uses critical scaling to predict the density of topological defects and other excitations created in second-order phase transitions. We point out that simply inserting asymptotic critical exponents deduced from the immediate vicinity of the critical point to obtain predictions can lead to results that are inconsistent with a more careful KZM analysis based on causality – on the comparison of the relaxation time of the order parameter with the "time distance" from the critical point. As a result, the scaling of quench-generated excitations with quench rate can exhibit behavior that is locally (i.e., in the neighborhood of any given quench rate) well approximated by a power law, but with exponents that depend on that rate and that are quite different from the naive prediction based on the critical exponents relevant for asymptotically long quench times. Kosterlitz-Thouless scaling (which governs, e.g., the Mott insulator to superfluid transition in the Bose-Hubbard model in one dimension) is investigated as an example of this phenomenon. PMID:25091996
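The asymptotic prediction the authors caution against applying naively is the standard KZM scaling (textbook form, stated here for orientation): with relaxation time τ(ε) = τ₀|ε|^(−zν) and a linear quench ε(t) = t/τ_Q, the freeze-out condition τ(ε(t̂)) = t̂ gives

```latex
\hat t \sim \left(\tau_0\,\tau_Q^{z\nu}\right)^{\frac{1}{1+z\nu}},\qquad
\hat \xi \sim \xi_0 \left(\frac{\tau_Q}{\tau_0}\right)^{\frac{\nu}{1+z\nu}},\qquad
n \sim \hat\xi^{-d},
```

so the defect density n follows a power law in the quench time τ_Q only when the asymptotic exponents actually apply at the relevant distance from the critical point.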
Physical activity, sedentary behavior, and academic performance in Finnish children.
Syväoja, Heidi J; Kantomaa, Marko T; Ahonen, Timo; Hakonen, Harto; Kankaanpää, Anna; Tammelin, Tuija H
2013-11-01
This study aimed to determine the relationships between objectively measured and self-reported physical activity, sedentary behavior, and academic performance in Finnish children. Two hundred and seventy-seven children from five schools in the Jyväskylä school district in Finland (58% of the 475 eligible students, mean age = 12.2 yr, 56% girls) participated in the study in the spring of 2011. Self-reported physical activity and screen time were evaluated with questions used in the WHO Health Behavior in School-Aged Children study. Children's physical activity and sedentary time were measured objectively by using an ActiGraph GT1M/GT3X accelerometer for seven consecutive days. A cutoff value of 2296 counts per minute was used for moderate-to-vigorous physical activity (MVPA) and 100 counts per minute for sedentary time. Grade point averages were provided by the education services of the city of Jyväskylä. ANOVA and linear regression analysis were used to analyze the relationships among physical activity, sedentary behavior, and academic performance. Objectively measured MVPA (P = 0.955) and sedentary time (P = 0.285) were not associated with grade point average. However, self-reported MVPA had an inverse U-shaped curvilinear association with grade point average (P = 0.001), and screen time had a linear negative association with grade point average (P = 0.002), after adjusting for sex, children's learning difficulties, highest level of parental education, and amount of sleep. In this study, self-reported physical activity was directly, and screen time inversely, associated with academic achievement. Objectively measured physical activity and sedentary time were not associated with academic achievement. Objective and subjective measures may reflect different constructs and contexts of physical activity and sedentary behavior in association with academic outcomes.
Automatic short axis orientation of the left ventricle in 3D ultrasound recordings
NASA Astrophysics Data System (ADS)
Pedrosa, João.; Heyde, Brecht; Heeren, Laurens; Engvall, Jan; Zamorano, Jose; Papachristidis, Alexandros; Edvardsen, Thor; Claus, Piet; D'hooge, Jan
2016-04-01
The recent advent of three-dimensional echocardiography has led to an increased interest from the scientific community in left ventricle segmentation frameworks for cardiac volume and function assessment. An automatic orientation of the segmented left ventricular mesh is an important step to obtain a point-to-point correspondence between the mesh and the cardiac anatomy. Furthermore, this would allow for an automatic division of the left ventricle into the standard 17 segments and, thus, fully automatic per-segment analysis, e.g. regional strain assessment. In this work, a method for fully automatic short axis orientation of the segmented left ventricle is presented. The proposed framework aims at detecting the inferior right ventricular insertion point. 211 three-dimensional echocardiographic images were used to validate this framework by comparison to manual annotation of the inferior right ventricular insertion point. A mean unsigned error of 8.05° +/- 18.50° was found, whereas the mean signed error was 1.09°. Large deviations between the manual and automatic annotations (> 30°) occurred in only 3.79% of cases. The average computation time was 666 ms in a non-optimized MATLAB environment, which makes real-time application feasible. In conclusion, a successful automatic real-time method for orientation of the segmented left ventricle is proposed.
Ordóñez, Celestino; Cabo, Carlos; Sanz-Ablanedo, Enoc
2017-01-01
Mobile laser scanning (MLS) is a modern and powerful technology capable of obtaining massive point clouds of objects in a short period of time. Although this technology is nowadays being widely applied in urban cartography and 3D city modelling, it has some drawbacks that need to be avoided in order to strengthen it. One of the most important shortcomings of MLS data is concerned with the fact that it provides an unstructured dataset whose processing is very time-consuming. Consequently, there is a growing interest in developing algorithms for the automatic extraction of useful information from MLS point clouds. This work is focused on establishing a methodology and developing an algorithm to detect pole-like objects and classify them into several categories using MLS datasets. The developed procedure starts with the discretization of the point cloud by means of a voxelization, in order to simplify and reduce the processing time in the segmentation process. In turn, a heuristic segmentation algorithm was developed to detect pole-like objects in the MLS point cloud. Finally, two supervised classification algorithms, linear discriminant analysis and support vector machines, were used to distinguish between the different types of poles in the point cloud. The predictors are the principal component eigenvalues obtained from the Cartesian coordinates of the laser points, the range of the Z coordinate, and some shape-related indexes. The performance of the method was tested in an urban area with 123 poles of different categories. Very encouraging results were obtained, since the accuracy rate was over 90%. PMID:28640189
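A hedged sketch of the feature-and-classifier pipeline described above follows: covariance eigenvalues of each segmented object's 3D points plus the Z range, fed to LDA and an SVM. The synthetic point clouds, labels and classifier settings are assumptions; the voxelization, heuristic segmentation and shape indexes of the paper are omitted:

```python
# Eigenvalue features per segmented object, then LDA / SVM classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pole_features(points):
    """points: (N, 3) array of laser returns for one segmented object."""
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(points.T)))[::-1]
    z_range = points[:, 2].max() - points[:, 2].min()
    return np.r_[eigvals / eigvals.sum(), z_range]  # normalized eigenvalues

rng = np.random.default_rng(0)
objects = [rng.normal(size=(200, 3)) * [0.1, 0.1, 2.0] for _ in range(30)] \
        + [rng.normal(size=(200, 3)) * [0.8, 0.8, 0.6] for _ in range(30)]
X = np.array([pole_features(o) for o in objects])
y = np.r_[np.ones(30), np.zeros(30)]                # 1 = pole-like, 0 = other

for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf", C=1.0)):
    acc = clf.fit(X[::2], y[::2]).score(X[1::2], y[1::2])
    print(type(clf).__name__, acc)
```

Elongated objects concentrate variance in one eigenvalue, which is what makes this simple feature set discriminative for poles.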
Numerical analysis of transient fields near thin-wire antennas and scatterers
NASA Astrophysics Data System (ADS)
Landt, J. A.
1981-11-01
Under the premise that "accelerated charge radiates," one would expect radiation on wire structures to occur from driving points, ends of wires, bends in wires, or locations of lumped loading. Here, this premise is investigated in a series of numerical experiments. The numerical procedure is based on a moment-method solution of a thin-wire time-domain electric-field integral equation. The fields in the vicinity of wire structures are calculated for short impulsive-type excitations, and are viewed in a series of time sequences or snapshots. For these excitations, the fields are spatially limited in the radial dimension, and expand in spheres centered about points of radiation. These centers of radiation coincide with the above list of possible source regions. Time retardation permits these observations to be made clearly in the time domain, similar to time-range gating. In addition to providing insight into transient radiation processes, these studies show that the direction of energy flow is not always defined by Poynting's vector near wire structures.
Performance bounds on parallel self-initiating discrete-event simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
The use is considered of massively parallel architectures to execute discrete-event simulations of what is termed self-initiating models. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance is considered of various synchronization protocols by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.
NASA Astrophysics Data System (ADS)
Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.
2012-04-01
We present a novel method for the parameter-oriented analysis of mutual correlation between independent time series or between equivalent structures such as ordered data sets. The proposed method is based on the sliding window technique, defines a new type of correlation measure, and can be applied to time series from all domains of science and technology, experimental or simulated. A specific parameter that can characterize the time series is computed for each window, and a cross-correlation analysis is carried out on the sets of values obtained for the time series under investigation. We apply this method to the study of daily exchange rates of several currencies from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed and a tentative crisis prediction is presented.
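A minimal sketch of the window-parameter correlation idea follows, using a crude single-scale rescaled-range Hurst estimate as the window parameter; the window size, step, and toy random-walk data are assumptions, and the paper's intermittency parameter is not reproduced:

```python
# Estimate a parameter per sliding window of each series, then
# cross-correlate the two parameter sequences.
import numpy as np

def hurst_rs(x):
    """Crude one-scale rescaled-range (R/S) Hurst estimate."""
    y = np.cumsum(x - x.mean())
    return np.log((y.max() - y.min()) / x.std()) / np.log(len(x))

def window_parameter(increments, win=250, step=25, param=hurst_rs):
    return np.array([param(increments[i:i + win])
                     for i in range(0, len(increments) - win + 1, step)])

rng = np.random.default_rng(2)
rate_a = rng.normal(size=3000).cumsum()   # toy "exchange rate" series
rate_b = rng.normal(size=3000).cumsum()
pa = window_parameter(np.diff(rate_a))
pb = window_parameter(np.diff(rate_b))
print("parameter cross-correlation:", np.corrcoef(pa, pb)[0, 1])
```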
NASA Technical Reports Server (NTRS)
Conroy, Michael P.
2015-01-01
This lecture is an overview of simulation technologies, methods, and practices as applied to current and past NASA programs. The focus is on sharing experience and the overall benefits to programs and projects of having appropriate simulation and analysis capabilities available at the correct point in a system lifecycle.
Transcriptome profiling reveals regulatory mechanisms underlying Corolla Senescence in Petunia
USDA-ARS?s Scientific Manuscript database
Genetic regulatory mechanisms that govern natural petal senescence in petunia are complicated and unclear. To identify key genes and pathways that regulate the process, we initiated a transcriptome analysis in petunia petals at four developmental time points, including petal opening without anthesis ...
Ice Wedge Polygon Bromide Tracer Experiment in Subsurface Flow, Barrow, Alaska, 2015-2016
Nathan Wales
2018-02-15
Time series of bromide tracer concentrations at several points within a low-centered polygon and a high-centered polygon. Concentration values were obtained from the analysis of water samples via ion chromatography with an accuracy of 0.01 mg/l.
NASA Astrophysics Data System (ADS)
Li, Xingxing
2014-05-01
Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events, it is difficult for traditional seismic instruments to produce accurate and reliable displacements because of the saturation of broadband seismometers and the problematic integration of strong-motion data. Compared with traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable in case of large earthquakes and tsunamis. A GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real time by processing double-differenced carrier-phase observables. However, the relative positioning method requires a local reference station, which might itself be displaced during a large seismic event, resulting in misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming, and particularly difficult for the simultaneous and real-time analysis of GPS data from hundreds or thousands of ground stations. In recent years, several single-receiver approaches for real-time GPS seismology, which can overcome the reference station problem of the relative positioning approach, have been successfully developed and applied to GPS seismology. One available method is real-time precise point positioning (PPP), which relies on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period, of about thirty minutes, to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach to determine the change of position between two adjacent epochs; displacements are then obtained by a single integration of the delta positions. This approach does not suffer from a convergence process, but the single integration from delta positions to displacements is accompanied by a drift due to potential uncompensated errors. Li et al. (2013) presented a temporal point positioning (TPP) method to quickly capture coseismic displacements with a single GPS receiver in real time. The TPP approach can overcome the convergence problem of PPP, and also avoids the integration and de-trending process of the variometric approach. The performance of TPP is demonstrated to be at the few-centimeter level of displacement accuracy for even a twenty-minute interval with real-time precise orbit and clock products. In this study, we first present and compare the observation models and processing strategies of the existing single-receiver methods for real-time GPS seismology. Furthermore, we propose several refinements to the variometric approach in order to eliminate the drift trend in the integrated coseismic displacements. The mathematical relationship between these methods is discussed in detail and their equivalence is also proved. The impact of error components such as satellite ephemeris, ionospheric delay, tropospheric delay, and geometry change on the retrieved displacements is carefully analyzed and investigated. Finally, the performance of these single-receiver approaches for real-time GPS seismology is validated using 1 Hz GPS data collected during the Tohoku-Oki earthquake (Mw 9.0, March 11, 2011) in Japan. It is shown that an accuracy of a few centimeters in coseismic displacements is achievable.
Keywords: High-rate GPS; real-time GPS seismology; single receiver; PPP; variometric approach; temporal point positioning; error analysis; coseismic displacement; fault slip inversion.
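A toy illustration of the variometric drift problem discussed above: per-epoch position changes (delta positions) are integrated to displacement, and a small uncompensated bias produces a drift that can be estimated from the pre-event window and removed. Real processing works on epoch-differenced carrier-phase observables; all numbers here are synthetic:

```python
# Single integration of delta positions, then linear de-trending of the drift.
import numpy as np

t = np.arange(0.0, 600.0)                          # 1 Hz epochs
true_disp = np.where(t > 300, 0.05 * (1 - np.exp(-(t - 300) / 20.0)), 0.0)

rng = np.random.default_rng(3)
deltas = np.diff(true_disp, prepend=0.0) + rng.normal(0, 1e-4, t.size) + 2e-5
disp = np.cumsum(deltas)                           # integrated series drifts

slope = np.polyfit(t[:300], disp[:300], 1)[0]      # drift rate, pre-event fit
disp_detrended = disp - slope * t
print("final offset, true vs detrended:", true_disp[-1], disp_detrended[-1])
```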
A very deep IRAS survey at the north ecliptic pole
NASA Technical Reports Server (NTRS)
Houck, J. R.; Hacking, P. B.; Condon, J. J.
1987-01-01
The data from approximately 20 hours of observation of the 4- to 6-square-degree field surrounding the north ecliptic pole have been combined to produce a very deep IR survey in the four IRAS bands. Scans from both pointed and survey observations were included in the data analysis. At 12 and 25 microns the deep survey is limited by detector noise and is approximately 50 times deeper than the IRAS Point Source Catalog (PSC). At 60 microns the problems of source confusion and Galactic cirrus combine to limit the deep survey to approximately 12 times deeper than the PSC. These problems are so severe at 100 microns that flux values are given only for locations corresponding to sources selected at 60 microns. In all, 47 sources were detected at 12 microns, 37 at 25 microns, and 99 at 60 microns. The data-analysis procedures and the significance of the 12- and 60-micron source-count results are discussed.
Hostettler, Isabel Charlotte; Muroi, Carl; Richter, Johannes Konstantin; Schmid, Josef; Neidert, Marian Christoph; Seule, Martin; Boss, Oliver; Pangalu, Athina; Germans, Menno Robbert; Keller, Emanuela
2018-01-19
OBJECTIVE The aim of this study was to create prediction models for outcome parameters by decision tree analysis based on clinical and laboratory data in patients with aneurysmal subarachnoid hemorrhage (aSAH). METHODS The database consisted of clinical and laboratory parameters of 548 patients with aSAH who were admitted to the Neurocritical Care Unit, University Hospital Zurich. To examine the model performance, the cohort was randomly divided into a derivation cohort (60% [n = 329]; training data set) and a validation cohort (40% [n = 219]; test data set). The classification and regression tree prediction algorithm was applied to predict death, functional outcome, and ventriculoperitoneal (VP) shunt dependency. Chi-square automatic interaction detection was applied to predict delayed cerebral infarction on days 1, 3, and 7. RESULTS The overall mortality was 18.4%. The accuracy of the decision tree models was good for survival on day 1 and favorable functional outcome at all time points, with a difference between the training and test data sets of < 5%. Prediction accuracy for survival on day 1 was 75.2%. The most important differentiating factor was the interleukin-6 (IL-6) level on day 1. Favorable functional outcome, defined as Glasgow Outcome Scale scores of 4 and 5, was observed in 68.6% of patients. Favorable functional outcome at all time points had a prediction accuracy of 71.1% in the training data set, with procalcitonin on day 1 being the most important differentiating factor at all time points. A total of 148 patients (27%) developed VP shunt dependency. The most important differentiating factor was hyperglycemia on admission. CONCLUSIONS The multiple-variable analysis capability of decision trees enables exploration of dependent variables in the context of multiple changing influences over the course of an illness. The decision trees generated here increase awareness of the early systemic stress response, which seems pertinent for prognostication.
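For readers unfamiliar with the setup, here is a hedged sketch of a 60/40 derivation/validation split with a CART classifier; the feature names mirror the abstract (IL-6, procalcitonin, admission glucose), but the data are synthetic placeholders, not the study cohort:

```python
# CART on a 60/40 train/test split, as in the study design.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(548, 3))            # columns: [IL-6 d1, PCT d1, glucose]
y = (X[:, 0] + 0.5 * rng.normal(size=548) > 0.8).astype(int)  # toy outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
print("training accuracy:", tree.score(X_tr, y_tr))
print("test accuracy:    ", tree.score(X_te, y_te))
```

Comparing the two accuracies is the same check the authors apply when they require a training/test difference below 5%.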
The RTOG Outcomes Model: economic end points and measures.
Konski, Andre; Watkins-Bruner, Deborah
2004-03-01
Recognising the value added by economic evaluations of clinical trials and the interaction of clinical, humanistic and economic end points, the Radiation Therapy Oncology Group (RTOG) has developed an Outcomes Model that guides the comprehensive assessment of this triad of end points. This paper will focus on the economic component of the model. The Economic Impact Committee was founded in 1994 to study the economic impact of clinical trials of cancer care. A steep learning curve ensued with considerable time initially spent understanding the methodology of economic analysis. Since then, economic analyses have been performed on RTOG clinical trials involving treatments for patients with non-small cell lung cancer, locally-advanced head and neck cancer and prostate cancer. As the care of cancer patients evolves with time, so has the economic analyses performed by the Economic Impact Committee. This paper documents the evolution of the cost-effectiveness analyses of RTOG from performing average cost-utility analysis to more technically sophisticated Monte Carlo simulation of Markov models, to incorporating prospective economic analyses as an initial end point. Briefly, results indicated that, accounting for quality-adjusted survival, concurrent chemotherapy and radiation for the treatment of non-small cell lung cancer, more aggressive radiation fractionation schedules for head and neck cancer and the addition of hormone therapy to radiation for prostate cancer are within the range of economically acceptable recommendations. The RTOG economic analyses have provided information that can further inform clinicians and policy makers of the value added of new or improved treatments.
Using natural archives to detect climate and environmental tipping points in the Earth System
NASA Astrophysics Data System (ADS)
Thomas, Zoë A.
2016-11-01
'Tipping points' in the Earth system are characterised by a nonlinear response to gradual forcing, and may have severe and wide-ranging impacts. Many abrupt events result from simple underlying system dynamics termed 'critical transitions' or 'bifurcations'. One of the best ways to identify and potentially predict threshold behaviour in the climate system is through analysis of natural ('palaeo') archives. Specifically, on the approach to a tipping point, early warning signals can be detected as characteristic fluctuations in a time series as a system loses stability. Testing whether these early warning signals can be detected in highly complex real systems is a key challenge, since much work is either theoretical or only tested with simple models. This is particularly problematic in palaeoclimate and palaeoenvironmental records with low resolution, non-equidistant data, which can limit accurate analysis. Here, a range of different datasets are examined to explore generic rules that can be used to detect such dramatic events. A number of key criteria are identified to be necessary for the reliable identification of early warning signals in natural archives, most crucially, the need for a low-noise record of sufficient data length, resolution and accuracy. A deeper understanding of the underlying system dynamics is required to inform the development of more robust system-specific indicators, or to indicate the temporal resolution required, given a known forcing. This review demonstrates that time series precursors from natural archives provide a powerful means of forewarning tipping points within the Earth System.
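The early warning signals mentioned above are classically rising lag-1 autocorrelation and variance in a rolling window as the system loses stability ("critical slowing down"). A minimal sketch on a toy series follows; the window length and the AR(1) model with slowly growing memory are assumptions for illustration:

```python
# Rolling-window early warning signals: lag-1 autocorrelation and variance.
import numpy as np

def rolling_ews(x, win=100):
    ac1 = np.array([np.corrcoef(x[i:i + win - 1], x[i + 1:i + win])[0, 1]
                    for i in range(len(x) - win)])
    var = np.array([x[i:i + win].var() for i in range(len(x) - win)])
    return ac1, var

rng = np.random.default_rng(5)
x = np.zeros(2000)
for i in range(1, 2000):                 # AR(1) whose memory grows over time,
    phi = 0.2 + 0.7 * i / 2000           # mimicking an approach to a tipping point
    x[i] = phi * x[i - 1] + rng.normal()

ac1, var = rolling_ews(x)
print("autocorrelation rise:", ac1[-1] - ac1[0])
print("variance rise:       ", var[-1] - var[0])
```

The review's point about data requirements is visible even here: with short, noisy, or irregularly sampled records, these rolling estimates become too unstable to trust.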
Fanning, J; Porter, G; Awick, E A; Wójcicki, T R; Gothe, N P; Roberts, S A; Ehlers, D K; Motl, R W; McAuley, E
2016-06-01
In the present study, we examined the influence of a home-based, DVD-delivered exercise intervention on daily sedentary time and breaks in sedentary time in older adults. Between 2010 and 2012, older adults (i.e., aged 65 or older) residing in Illinois (N = 307) were randomized into a 6-month home-based, DVD-delivered exercise program (i.e., FlexToBa; FTB) or a waitlist control. Participants completed measurements prior to the first week (baseline), following the intervention period (month 6), and after a 6 month no-contact follow-up (month 12). Sedentary behavior was measured objectively using accelerometers for 7 consecutive days at each time point. Differences in daily sedentary time and breaks between groups and across the three time points were examined using mixed-factor analysis of variance (mixed ANOVA) and analysis of covariance (ANCOVA). Mixed ANOVA models revealed that daily minutes of sedentary time did not differ by group or time. The FTB condition, however, demonstrated a greater number of daily breaks in sedentary time relative to the control condition (p = .02). ANCOVA models revealed a non-significant effect favoring FTB at month 6, and a significant difference between groups at month 12 (p = .02). While overall sedentary time did not differ between groups, the DVD-delivered exercise intervention was effective for maintaining a greater number of breaks when compared with the control condition. Given the accumulating evidence emphasizing the importance of breaking up sedentary time, these findings have important implications for the design of future health behavior interventions.
Guyennon, Nicolas; Cerretto, Giancarlo; Tavella, Patrizia; Lahaye, François
2009-08-01
In recent years, many national timing laboratories have installed geodetic Global Positioning System receivers together with their traditional GPS/GLONASS Common View receivers and Two Way Satellite Time and Frequency Transfer equipment. Many of these geodetic receivers operate continuously within the International GNSS Service (IGS), and their data are regularly processed by IGS Analysis Centers. From its global network of over 350 stations and its Analysis Centers, the IGS generates precise combined GPS ephemerides and station and satellite clock time series referred to the IGS Time Scale. A processing method called Precise Point Positioning (PPP) is in use in the geodetic community, allowing precise recovery of GPS antenna position, clock phase, and atmospheric delays by taking advantage of these IGS precise products. Previous assessments, carried out at Istituto Nazionale di Ricerca Metrologica (INRiM; formerly IEN) with a PPP implementation developed at Natural Resources Canada (NRCan), showed that PPP clock solutions have better stability over the short/medium term than the GPS CV and GPS P3 methods and significantly reduce the day-boundary discontinuities when used in multi-day continuous processing, allowing time-limited, campaign-style time-transfer experiments. This paper reports on follow-on work performed at INRiM and NRCan to further characterize and develop the PPP method for time transfer applications, using data from some of the National Metrology Institutes. We develop a processing procedure that takes advantage of the improved stability of the phase-connected multi-day PPP solutions while allowing the generation of continuous clock time series, more applicable to continuous operation/monitoring of timing equipment.
NASA Astrophysics Data System (ADS)
Lenoir, Guillaume; Crucifix, Michel
2018-03-01
We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion article (Lenoir and Crucifix, 2018). All the methods presented in this paper are available to the reader in the Python package WAVEPAL.
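A stripped-down sketch of the WOSA idea on irregular sampling follows: average Lomb-Scargle periodograms over overlapping segments. The CARMA noise model, trend handling and significance testing of the paper (available in WAVEPAL) are not reproduced; the segment length, overlap and toy data are assumptions:

```python
# Welch-style averaging of Lomb-Scargle periodograms on irregular samples.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 100, 500))              # irregular sampling times
y = np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 0.5, t.size)

freqs = np.linspace(0.05, 2.0, 400) * 2 * np.pi    # angular frequencies
seg_len, overlap = 40.0, 0.75                      # seconds, 75% overlap
starts = np.arange(0, 100 - seg_len + 1e-9, seg_len * (1 - overlap))

pgrams = []
for s in starts:
    m = (t >= s) & (t < s + seg_len)
    if m.sum() > 20:                               # skip data-poor segments
        pgrams.append(lombscargle(t[m], y[m] - y[m].mean(), freqs))
wosa = np.mean(pgrams, axis=0)
print("peak at f =", freqs[np.argmax(wosa)] / (2 * np.pi), "Hz")
```

Averaging trades frequency resolution for reduced periodogram variance, which is exactly the motivation the abstract gives for casting the periodogram into WOSA.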
Johnson, T S; Andriacchi, T P; Erdman, A G
2004-01-01
Various uses of the screw or helical axis have previously been reported in the literature in an attempt to quantify the complex displacements and coupled rotations of in vivo human knee kinematics. Multiple methods have been used by previous authors to calculate the axis parameters, and it has been theorized that the mathematical stability and accuracy of the finite helical axis (FHA) is highly dependent on experimental variability and rotation increment spacing between axis calculations. Previous research has not addressed the sensitivity of the FHA for true in vivo data collection, as required for gait laboratory analysis. This research presents a controlled series of experiments simulating continuous data collection as utilized in gait analysis to investigate the sensitivity of the three-dimensional finite screw axis parameters of rotation, displacement, orientation and location with regard to time step increment spacing, utilizing two different methods for spatial location. Six-degree-of-freedom motion parameters are measured for an idealized rigid body knee model that is constrained to a planar motion profile for the purposes of error analysis. The kinematic data are collected using a multicamera optoelectronic system combined with an error minimization algorithm known as the point cluster method. Rotation about the screw axis is seen to be repeatable, accurate and time step increment insensitive. Displacement along the axis is highly dependent on time step increment sizing, with smaller rotation angles between calculations producing more accuracy. Orientation of the axis in space is accurate with only a slight filtering effect noticed during motion reversal. Locating the screw axis by a projected point onto the screw axis from the mid-point of the finite displacement is found to be less sensitive to motion reversal than finding the intersection of the axis with a reference plane. A filtering effect of the spatial location parameters was noted for larger time step increments during periods of little or no rotation.
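For orientation, the following sketch extracts finite helical axis parameters from a rigid-body rotation matrix R and translation d between two time steps, using the standard screw decomposition (a textbook construction, not the point cluster method itself); the example pose is invented:

```python
# Finite helical axis (FHA) from one finite displacement (R, d).
import numpy as np

def finite_helical_axis(R, d):
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))  # rotation angle
    n = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))   # axis direction
    t_par = float(n @ d)                    # displacement along the axis
    # a point p on the axis satisfies (I - R) p = d - t_par * n (rank-2 system)
    p = np.linalg.pinv(np.eye(3) - R) @ (d - t_par * n)
    return theta, n, t_par, p

a = np.deg2rad(20)                          # 20 deg rotation about z + translation
R = np.array([[np.cos(a), -np.sin(a), 0],
              [np.sin(a),  np.cos(a), 0],
              [0,          0,         1]])
print(finite_helical_axis(R, np.array([1.0, 0.5, 2.0])))
```

The 1/sin(θ) factor in the axis direction makes the sensitivity the abstract describes explicit: small rotation increments between calculations amplify measurement noise in the axis location and orientation.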
NASA Astrophysics Data System (ADS)
Donges, J. F.; Schleussner, C.-F.; Siegmund, J. F.; Donner, R. V.
2016-05-01
Studying event time series is a powerful approach for analyzing the dynamics of complex dynamical systems in many fields of science. In this paper, we describe the method of event coincidence analysis to provide a framework for quantifying the strength, directionality and time lag of statistical interrelationships between event series. Event coincidence analysis allows one to formulate and test null hypotheses on the origin of the observed interrelationships, including tests based on Poisson processes or, more generally, stochastic point processes with a prescribed inter-event time distribution and other higher-order properties. Applying the framework to country-level observational data yields evidence that flood events have acted as triggers of epidemic outbreaks globally since the 1950s. Facing projected future changes in the statistics of climatic extreme events, statistical techniques such as event coincidence analysis will be relevant for investigating the impacts of anthropogenic climate change on human societies and ecosystems worldwide.
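A minimal sketch of the coincidence-counting core follows: the fraction of events in one series preceded by an event in the other within a window ΔT, compared against a Poisson-type null. The window, toy event times and the simple null model are assumptions; the full framework adds lags and analytical null distributions:

```python
# Event coincidence rate with a permutation/Poisson-style null model.
import numpy as np

def precursor_rate(a_times, b_times, delta_t):
    """Fraction of events in b preceded by >= 1 event in a within delta_t."""
    a = np.asarray(a_times)
    hits = sum(np.any((a <= tb) & (a > tb - delta_t)) for tb in b_times)
    return hits / len(b_times)

rng = np.random.default_rng(7)
floods = np.sort(rng.uniform(0, 3650, 40))          # toy event days
mask = rng.random(40) < 0.4                         # some floods trigger...
triggered = floods[mask] + rng.uniform(1, 30, mask.sum())
background = rng.uniform(0, 3650, 15)               # ...plus unrelated outbreaks
outbreaks = np.sort(np.concatenate([triggered, background]))

obs = precursor_rate(floods, outbreaks, delta_t=30.0)
null = [precursor_rate(np.sort(rng.uniform(0, 3650, 40)), outbreaks, 30.0)
        for _ in range(999)]
p = (1 + sum(n >= obs for n in null)) / 1000
print(f"trigger coincidence rate = {obs:.2f}, p = {p:.3f}")
```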
Predicting Bradycardia in Preterm Infants Using Point Process Analysis of Heart Rate.
Gee, Alan H; Barbieri, Riccardo; Paydarfar, David; Indic, Premananda
2017-09-01
Episodes of bradycardia are common and recur sporadically in preterm infants, posing a threat to the developing brain and other vital organs. We hypothesize that bradycardias are a result of transient temporal destabilization of the cardiac autonomic control system and that fluctuations in the heart rate signal might contain information that precedes bradycardia. We investigate infant heart rate fluctuations with a novel application of point process theory. In ten preterm infants, we estimate instantaneous linear measures of the heart rate signal, use these measures to extract statistical features of bradycardia, and propose a simplistic framework for prediction of bradycardia. We present the performance of a prediction algorithm using instantaneous linear measures (mean area under the curve = 0.79 ± 0.018) for over 440 bradycardia events. The algorithm achieves an average forecast time of 116 s prior to bradycardia onset (FPR = 0.15). Our analysis reveals that increased variance in the heart rate signal is a precursor of severe bradycardia. This increase in variance is associated with an increase in power from low-frequency dynamics in the LF band (0.04-0.2 Hz) and lower multiscale entropy values prior to bradycardia. Point process analysis of the heartbeat time series reveals instantaneous measures that can be used to predict infant bradycardia prior to onset. Our findings are relevant to risk stratification, predictive monitoring, and implementation of preventative strategies for reducing morbidity and mortality associated with bradycardia in neonatal intensive care units.
Minkwitz, Susann; Schmock, Aysha; Kurtoglu, Alper; Tsitsilonis, Serafeim; Manegold, Sebastian; Klatte-Schulz, Franka
2017-01-01
A balance between matrix metalloproteinases (MMPs) and their inhibitors (TIMPs) is required to maintain tendon homeostasis. Variation in this balance over time might impact on the success of tendon healing. This study aimed to analyze structural changes and the expression profile of MMPs and TIMPs in human Achilles tendons at different time-points after rupture. Biopsies from 37 patients with acute Achilles tendon rupture were taken at surgery and grouped according to time after rupture: early (2–4 days), middle (5–6 days), and late (≥7 days), and intact Achilles tendons served as control. The histological score increased from the early to the late time-point after rupture, indicating the progression towards a more degenerative status. In comparison to intact tendons, qRT-PCR analysis revealed a significantly increased expression of MMP-1, -2, -13, TIMP-1, COL1A1, and COL3A1 in ruptured tendons, whereas TIMP-3 decreased. Comparing the changes over time post rupture, the expression of MMP-9, -13, and COL1A1 significantly increased, whereas MMP-3 and -10 expression decreased. TIMP expression was not significantly altered over time. MMP staining by immunohistochemistry was positive in the ruptured tendons exemplarily analyzed from early and late time-points. The study demonstrates a pivotal contribution of all investigated MMPs and TIMP-1, but a minor role of TIMP-2, -3, and -4, in the early human tendon healing process. PMID:29053586
A novel Bayesian approach to acoustic emission data analysis.
Agletdinov, E; Pomponi, E; Merson, D; Vinogradov, A
2016-12-01
The acoustic emission (AE) technique is a popular tool for materials characterization and non-destructive testing. Originating from the stochastic motion of defects in solids, AE is a random process by nature. A challenging problem arises whenever an attempt is made to identify specific points corresponding to changes in the trends of the fluctuating AE time series. A general Bayesian framework is proposed for the analysis of AE time series, aimed at automatically finding the breakpoints that signal a crossover in the dynamics of the underlying AE sources.
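To illustrate the flavor of such an approach, here is a toy Bayesian posterior over a single breakpoint in the mean of a Gaussian series, using a flat prior and a profile-likelihood approximation (plugging in per-segment mean estimates). This is only a didactic stand-in, not the paper's framework, which handles general trend crossovers in AE data:

```python
# Posterior over a single mean-shift breakpoint, flat prior, known noise scale.
import numpy as np

def breakpoint_posterior(x, sigma=1.0):
    n = len(x)
    log_post = np.full(n, -np.inf)
    for k in range(5, n - 5):                     # avoid tiny segments
        left, right = x[:k], x[k:]
        resid = np.concatenate([left - left.mean(), right - right.mean()])
        log_post[k] = -0.5 * np.sum(resid ** 2) / sigma ** 2
    log_post -= log_post.max()                    # stabilize the exponential
    post = np.exp(log_post)
    return post / post.sum()

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(0, 1, 120), rng.normal(1.5, 1, 80)])
post = breakpoint_posterior(x)
print("MAP breakpoint index:", int(np.argmax(post)))
```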
Point pattern analysis of FIA data
Chris Woodall
2002-01-01
Point pattern analysis is a branch of spatial statistics that quantifies the spatial distribution of points in two-dimensional space. Point pattern analysis was conducted on stand stem-maps from FIA fixed-radius plots to explore point pattern analysis techniques and to determine the ability of pattern descriptions to describe stand attributes. Results indicate that the...
NASA Astrophysics Data System (ADS)
Eduardo Virgilio Silva, Luiz; Otavio Murta, Luiz
2012-12-01
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by Tsallis generalized entropy, as a function of the q parameter (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) dynamics, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series of stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. Values of q where the maximum point occurs and where qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10×10-3, 1.11×10-7, and 5.50×10-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistently with the concept of physiologic complexity, which suggests a potential use for chaotic system analysis.
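For context, here is the classical SampEn the method generalizes, in a plain sketch for m = 2; the Tsallis q-generalization and the surrogate-difference (qSDiff) machinery of the paper are not reproduced, and the toy input is an assumption:

```python
# Classical sample entropy (SampEn), template length m = 2.
import numpy as np

def sampen(x, m=2, r_frac=0.2):
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()                         # tolerance
    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - m)])
        # Chebyshev distance between all template pairs (i < j)
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        iu = np.triu_indices(len(templates), k=1)
        return np.sum(d[iu] <= r)
    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(9)
print("white-noise SampEn:", sampen(rng.normal(size=500)))
```

As the abstract notes, such uncorrelated noise yields a high SampEn, which is precisely the misclassification the surrogate-based qSDiff analysis is designed to expose.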
Kim, H-I; Park, M S; Song, K J; Woo, Y; Hyung, W J
2014-10-01
The learning curve of robotic gastrectomy has not yet been evaluated in comparison with the laparoscopic approach. We compared the learning curves of robotic gastrectomy and laparoscopic gastrectomy based on operation time and surgical success. We analyzed 172 robotic and 481 laparoscopic distal gastrectomies performed by a single surgeon from May 2003 to April 2009. The operation time was analyzed using a moving average and non-linear regression analysis. Surgical success was evaluated by a cumulative sum plot with a target failure rate of 10%. Surgical failure was defined as laparoscopic or open conversion, insufficient lymph node harvest for staging, resection margin involvement, postoperative morbidity, and mortality. Moving average and non-linear regression analyses indicated a stable state for operation time at 95 and 121 cases in robotic gastrectomy, and 270 and 262 cases in laparoscopic gastrectomy, respectively. The cumulative sum plot identified no cut-off point for surgical success in robotic gastrectomy and 80 cases in laparoscopic gastrectomy. Excluding the initial 148 laparoscopic gastrectomies that were performed before the first robotic gastrectomy, the two groups showed a similar number of cases to reach steady state in operation time, and showed no cut-off point in the analysis of surgical success. The experience of laparoscopic surgery could affect the learning process of robotic gastrectomy. An experienced laparoscopic surgeon requires fewer cases of robotic gastrectomy to reach steady state. Moreover, the surgical outcomes of robotic gastrectomy were satisfactory.
Experimental and numerical analysis of metal leaching from fly ash-amended highway bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetin, Bora; Aydilek, Ahmet H., E-mail: aydilek@umd.edu; Li, Lin
2012-05-15
Highlights:
- This study evaluates the leaching potential of fly ash-lime mixed soils.
- This objective is met with experimental and numerical analysis.
- Zn leaching decreases with increasing fly ash content, while Ba, B, and Cu leaching increases.
- A decrease in lime content promoted leaching of Ba, B, and Cu, while Zn leaching increased.
- Numerical analysis predicted lower field metal concentrations.
Abstract: A study was conducted to evaluate the leaching potential of unpaved road materials (URM) mixed with lime-activated high-carbon fly ashes and to evaluate groundwater impacts of barium, boron, copper, and zinc leaching. This objective was met by a combination of batch water leach tests, column leach tests, and computer modeling. The laboratory tests were conducted on soil alone, fly ash alone, and URM-fly ash-lime kiln dust mixtures. The results indicated that an increase in fly ash and lime content has significant effects on the leaching behavior of heavy metals from URM-fly ash mixtures. An increase in fly ash content and a decrease in lime content promoted leaching of Ba, B and Cu, whereas Zn leaching was primarily affected by the fly ash content. Numerically predicted field metal concentrations were significantly lower than the peak metal concentrations obtained in laboratory column leach tests, and field concentrations decreased with time and distance due to dispersion in the soil vadose zone.
Real-Time Tropospheric Delay Estimation using IGS Products
NASA Astrophysics Data System (ADS)
Stürze, Andrea; Liu, Sha; Söhne, Wolfgang
2014-05-01
The Federal Agency for Cartography and Geodesy (BKG) has routinely provided zenith tropospheric delay (ZTD) parameters for assimilation into numerical weather models for more than 10 years. Up to now, the results flowing into the EUREF Permanent Network (EPN) or E-GVAP (EUMETNET EIG GNSS water vapour programme) analysis have been based on batch processing of GPS+GLONASS observations in differential network mode. For the recently started COST Action ES1206 on "Advanced Global Navigation Satellite Systems tropospheric products for monitoring severe weather events and climate" (GNSS4SWEC), however, rapid updates in the analysis of the atmospheric state for nowcasting applications require changing the processing strategy towards real-time. In the RTCM SC104 (Radio Technical Commission for Maritime Services, Special Committee 104), a format combining the advantages of Precise Point Positioning (PPP) and Real-Time Kinematic (RTK) is under development. The so-called State Space Representation approach defines corrections which are transferred in real-time to the user, e.g. via NTRIP (Network Transport of RTCM via Internet Protocol). Meanwhile, messages for precise orbits, satellite clocks and code biases compatible with the basic PPP mode using IGS products are defined. Consequently, the IGS Real-Time Service (RTS) was launched in 2013 in order to extend the well-known precise orbit and clock products by a real-time component. Further messages, e.g. with respect to ionosphere or phase biases, are foreseen. Depending on the level of refinement, different accuracies up to the RTK level should thus be reachable. In co-operation between BKG and the Technical University of Darmstadt, the real-time software GEMon (GREF EUREF Monitoring) is under development. GEMon is able to process GPS and GLONASS observation and RTS product data streams in PPP mode. Furthermore, several state-of-the-art troposphere models, for example based on numerical weather prediction data, are implemented. Hence, it opens the possibility to evaluate the potential of troposphere parameter determination in real-time and its effect on Precise Point Positioning. Starting with an offline investigation of the influence of different RTS products and a priori troposphere models, the configuration delivering the best results is used for real-time processing of the GREF (German Geodetic Reference) network over a suitable period of time. The evaluation of the derived ZTD parameters and station heights is done with respect to well-proven GREF, EUREF, IGS, and E-GVAP analysis results. Keywords: GNSS, Zenith Tropospheric Delay, Real-time Precise Point Positioning
Universal statistics of terminal dynamics before collapse
NASA Astrophysics Data System (ADS)
Lenner, Nicolas; Eule, Stephan; Wolf, Fred
Recent biological developments have drastically increased both the precision and the amount of generated data, allowing a switch from pure mean-value characterization of the process under consideration to an analysis of the whole ensemble, exploiting the stochastic nature of biology. We focus on the general class of non-equilibrium processes with distinguished terminal points, as found in cell fate decisions, checkpoints or cognitive neuroscience. Aligning the data to a terminal point (e.g. represented as an absorbing boundary) allows us to devise a general methodology to characterize and reverse engineer the terminating history. Using a small-noise approximation we derive the mean, variance and covariance of the aligned data for general finite-time singularities.
Estimating average annual per cent change in trend analysis
Clegg, Limin X; Hankey, Benjamin F; Tiwari, Ram; Feuer, Eric J; Edwards, Brenda K
2009-01-01
Trends in incidence or mortality rates over a specified time interval are usually described by the conventional annual per cent change (cAPC), under the assumption of a constant rate of change. When this assumption does not hold over the entire time interval, the trend may be characterized using the annual per cent changes from segmented analysis (sAPCs). This approach assumes that the change in rates is constant over each time partition defined by the transition points, but varies among different time partitions. Different groups (e.g. racial subgroups), however, may have different transition points and thus different time partitions over which they have constant rates of change, making comparison of sAPCs problematic across groups over a common time interval of interest (e.g. the past 10 years). We propose a new measure, the average annual per cent change (AAPC), which uses sAPCs to summarize and compare trends for a specific time period. The advantage of the proposed AAPC is that it takes into account the trend transitions, whereas cAPC does not and can lead to erroneous conclusions. In addition, when the trend is constant over the entire time interval of interest, the AAPC has the advantage of reducing to both cAPC and sAPC. Moreover, because the estimated AAPC is based on the segmented analysis over the entire data series, any selected subinterval within a single time partition will yield the same AAPC estimate—that is, it will be equal to the estimated sAPC for that time partition. The cAPC, however, is re-estimated using data only from that selected subinterval; thus, its estimate may be sensitive to the subinterval selected. The AAPC estimation has been incorporated into the free segmented regression software Joinpoint, which is used by many registries throughout the world for characterizing trends in cancer rates. Copyright © 2009 John Wiley & Sons, Ltd. PMID:19856324
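The AAPC itself is a length-weighted average of the segment slopes on the log scale; the sketch below reproduces that arithmetic for hypothetical joinpoint output (the segment slopes and lengths are made-up numbers, not from the paper).

```python
import numpy as np

# Hypothetical segmented-regression output: slope b_i of log(rate) vs year
# in each segment, and the number of years each segment spans.
slopes = np.array([0.020, -0.005, 0.010])   # per-segment log-linear slopes
lengths = np.array([4, 3, 3])               # years in each segment

weights = lengths / lengths.sum()
aapc = 100.0 * (np.exp(np.sum(weights * slopes)) - 1.0)

# Per-segment annual per cent changes (the sAPCs) for comparison.
sapcs = 100.0 * (np.exp(slopes) - 1.0)
print(f"AAPC = {aapc:.2f}%  sAPCs = {np.round(sapcs, 2)}")
```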
Toledo, Eran; Collins, Keith A; Williams, Ursula; Lammertin, Georgeanne; Bolotin, Gil; Raman, Jai; Lang, Roberto M; Mor-Avi, Victor
2005-12-01
Echocardiographic quantification of myocardial perfusion is based on analysis of contrast replenishment after destructive high-energy ultrasound impulses (flash-echo). This technique is limited by nonuniform microbubble destruction and the dependency on exponential fitting of a small number of noisy time points. We hypothesized that brief interruptions of contrast infusion (ICI) would result in uniform contrast clearance followed by slow replenishment and, thus, would allow analysis from multiple data points without exponential fitting. Electrocardiographic-triggered images were acquired in 14 isolated rabbit hearts (Langendorff) at 3 levels of coronary flow (baseline, 50%, and 15%) during contrast infusion (Definity) with flash-echo and with a 20-second infusion interruption. Myocardial videointensity was measured over time from flash-echo sequences, from which the characteristic constant beta was calculated using an exponential fit. Peak contrast inflow rate was calculated from ICI data using analysis of local time derivatives. Computer simulations were used to investigate the effects of noise on the accuracy of peak contrast inflow rate and beta calculations. ICI resulted in uniform contrast clearance and baseline replenishment times of 15 to 25 cardiac cycles. Calculated peak contrast inflow rate followed the changes in coronary flow in all hearts at both levels of reduced flow (P < .05) and had a low intermeasurement variability of 7 +/- 6%. With flash-echo, contrast clearance was less uniform and baseline replenishment times were only 4 to 6 cardiac cycles. Beta decreased significantly only at 15% flow, and had an intermeasurement variability of 42 +/- 33%. Computer simulations showed that measurement errors in both perfusion indices increased with noise, but beta had larger errors at higher rates of contrast inflow. ICI provides the basis for accurate and reproducible quantification of myocardial perfusion using fast and robust numeric analysis, and may constitute an alternative to the currently used techniques.
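Both quantities the abstract contrasts can be sketched numerically: a mono-exponential fit for the flash-echo constant beta, and a local-derivative maximum for the ICI peak inflow rate. The synthetic data and noise level below are assumptions, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t = np.arange(0, 25.0, 1.0)                      # time in cardiac cycles

# Flash-echo style replenishment: y = A * (1 - exp(-beta * t)) + noise.
model = lambda t, A, beta: A * (1.0 - np.exp(-beta * t))
y = model(t, 100.0, 0.4) + rng.normal(0, 5.0, t.size)
(A_hat, beta_hat), _ = curve_fit(model, t, y, p0=(80.0, 0.2))

# ICI style index: peak contrast inflow rate from local time derivatives,
# no exponential fit required.
peak_inflow = np.max(np.gradient(y, t))
print(f"beta = {beta_hat:.2f}, peak inflow rate = {peak_inflow:.1f}")
```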
Krüger, Melanie; Straube, Andreas; Eggert, Thomas
2017-01-01
In recent years, theory-building in motor neuroscience and our understanding of the synergistic control of the redundant human motor system have significantly profited from the emergence of a range of different mathematical approaches to analyze the structure of movement variability. Approaches such as the Uncontrolled Manifold method or the Noise-Tolerance-Covariance decomposition method make it possible to detect and interpret changes in movement coordination due to, e.g., learning, external task constraints or disease, by analyzing the structure of within-subject, inter-trial movement variability. Whereas for cyclical movements (e.g., locomotion) mathematical approaches exist to investigate the propagation of movement variability in time (e.g., time series analysis), similar approaches are missing for discrete, goal-directed movements, such as reaching. Here, we propose canonical correlation analysis as a suitable method to analyze the propagation of within-subject variability across different time points during the execution of discrete movements. While similar analyses have already been applied for discrete movements with only one degree of freedom (DoF; e.g., Pearson's product-moment correlation), canonical correlation analysis makes it possible to evaluate the coupling of inter-trial variability across different time points along the movement trajectory for multiple-DoF effector systems, such as the arm. The theoretical analysis is illustrated by empirical data from a study on reaching movements under normal and disturbed proprioception. The results show increased movement duration, decreased movement amplitude, as well as altered movement coordination under ischemia, which results in a reduced complexity of movement control. Movement endpoint variability is not increased under ischemia. This suggests that healthy adults are able to immediately and efficiently adjust the control of complex reaching movements to compensate for the loss of proprioceptive information. Further, it is shown that, by using canonical correlation analysis, alterations in movement coordination that indicate changes in the control strategy concerning the use of motor redundancy can be detected, which represents an important methodological advance in the context of neuromechanics.
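A minimal sketch of the proposed use of canonical correlation analysis, assuming inter-trial joint-angle deviations for a multi-DoF arm sampled at two time points along the trajectory; the data are random placeholders and sklearn's CCA stands in for whatever implementation the authors used.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
n_trials, n_dof = 60, 4

# Inter-trial deviations of joint angles at two time points; a shared
# latent factor induces coupling of variability across time.
latent = rng.normal(size=(n_trials, 1))
X_t1 = latent @ rng.normal(size=(1, n_dof)) + 0.5 * rng.normal(size=(n_trials, n_dof))
X_t2 = latent @ rng.normal(size=(1, n_dof)) + 0.5 * rng.normal(size=(n_trials, n_dof))

cca = CCA(n_components=1).fit(X_t1, X_t2)
u, v = cca.transform(X_t1, X_t2)
r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]   # first canonical correlation
print(f"canonical correlation between time points: {r:.2f}")
```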
Heuristic Modeling for TRMM Lifetime Predictions
NASA Technical Reports Server (NTRS)
Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.
1996-01-01
The analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial-off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful for missions such as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
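The heuristic is essentially a table lookup plus a simple engine model; a toy version in Python (all table values, the ballistic-coefficient entries, and the engine constants are invented placeholders, not TRMM numbers):

```python
import numpy as np

# Hypothetical lookup table: maneuvers per month indexed by solar flux
# (F10.7) and ballistic coefficient; values are placeholders.
flux_grid = np.array([70.0, 150.0, 230.0])
maneuvers_per_month = {55.0: np.array([0.5, 2.0, 6.0]),
                       75.0: np.array([0.3, 1.5, 4.5])}  # per ballistic coeff.

def monthly_fuel_kg(flux, bc, dv_per_maneuver=0.8, isp=220.0, mass=3500.0):
    """Fuel per month: interpolated maneuver count times a rocket-equation
    estimate of fuel per maneuver (dv in m/s, isp in s, mass in kg)."""
    n = np.interp(flux, flux_grid, maneuvers_per_month[bc])
    fuel_per_burn = mass * (1.0 - np.exp(-dv_per_maneuver / (isp * 9.81)))
    return n * fuel_per_burn

# Summing over a predicted flux profile gives a lifetime fuel estimate.
flux_forecast = [120.0, 160.0, 210.0, 180.0]
print(sum(monthly_fuel_kg(f, 55.0) for f in flux_forecast), "kg over 4 months")
```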
NASA Astrophysics Data System (ADS)
Kellerman, Adam; Makarevich, Roman; Spanswick, Emma; Donovan, Eric; Shprits, Yuri
2016-07-01
Energetic electrons in the tens of keV range precipitate to the upper D- and lower E-region ionosphere, where they are responsible for enhanced ionization. The same particles are important in the inner magnetosphere, as they provide a source of energy for waves, and thus relate to relativistic electron enhancements in Earth's radiation belts. In situ observations of plasma populations and waves are usually limited to a single point, which complicates temporal and spatial analysis. Also, the lifespan of satellite missions is often limited to several years, which does not allow one to infer the long-term climatology of particle precipitation that is important for ionospheric conditions at high latitudes. Multi-point remote sensing of ionospheric plasma conditions can provide a global view of both ionospheric and magnetospheric conditions, and the coupling between magnetospheric and ionospheric phenomena can be examined on time-scales that allow comprehensive statistical analysis. In this study we utilize multi-point riometer measurements in conjunction with in situ satellite data and physics-based modeling to investigate the spatio-temporal and energy-dependent response of riometer absorption. Quantifying this relationship may be a key to future advancements in our understanding of the complex D-region ionosphere, and may lead to enhanced specification of auroral precipitation both during individual events and over climatological time-scales.
Time-series analysis of foreign exchange rates using time-dependent pattern entropy
NASA Astrophysics Data System (ADS)
Ishizaki, Ryuji; Inoue, Masayoshi
2013-08-01
Time-dependent pattern entropy is a method that reduces variations to binary symbolic dynamics and considers the pattern of symbols in a sliding temporal window. We use this method to analyze the instability of daily variations in foreign exchange rates, in particular, the dollar-yen rate. The time-dependent pattern entropy of the dollar-yen rate was found to be high in the following periods: before and after the turning points of the yen from strong to weak or from weak to strong, and the period after the Lehman shock.
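The method as described reduces the series to up/down symbols and computes the Shannon entropy of short symbol patterns inside a sliding window; a compact sketch under assumed window and pattern lengths (the specific parameters are illustrative, not those of the paper):

```python
import numpy as np
from collections import Counter

def pattern_entropy(x, window=100, m=4):
    """Time-dependent pattern entropy of a series x: binarize daily
    changes, then Shannon entropy of m-bit patterns in each window."""
    symbols = (np.diff(x) > 0).astype(int)          # 1 = rate rose, 0 = fell
    out = []
    for start in range(len(symbols) - window + 1):
        w = symbols[start:start + window]
        patterns = [tuple(w[i:i + m]) for i in range(window - m + 1)]
        counts = np.array(list(Counter(patterns).values()), dtype=float)
        p = counts / counts.sum()
        out.append(-np.sum(p * np.log2(p)))         # high = unstable dynamics
    return np.array(out)

# Example on a synthetic 'exchange rate' random walk.
rng = np.random.default_rng(3)
rate = np.cumsum(rng.normal(size=2000)) + 100.0
print(pattern_entropy(rate)[:5])
```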
An analytic data analysis method for oscillatory slug tests.
Chen, Chia-Shyun
2006-01-01
An analytical data analysis method is developed for slug tests in partially penetrating wells in confined or unconfined aquifers of high hydraulic conductivity. As adapted from the van der Kamp method, the determination of the hydraulic conductivity is based on the occurrence times and the displacements of the extreme points measured from the oscillatory data and their theoretical counterparts available in the literature. This method is applied to two sets of slug test response data presented by Butler et al.: one set shows slow damping with seven discernible extreme points, and the other shows rapid damping with three extreme points. The estimates of the hydraulic conductivity obtained by the analytic method are in good agreement with those determined by an available curve-matching technique.
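Not the van der Kamp formulas themselves, but the first step they require can be sketched: extracting the occurrence times and displacements of the extreme points, from which a damped period and a decrement between successive extremes follow. The synthetic damped oscillation stands in for real slug-test data.

```python
import numpy as np
from scipy.signal import argrelextrema

t = np.linspace(0, 60, 1200)
h = 0.5 * np.exp(-0.05 * t) * np.cos(0.6 * t)    # synthetic head response

# Occurrence times and displacements of the extreme points.
imax = argrelextrema(h, np.greater)[0]
imin = argrelextrema(h, np.less)[0]
extrema = np.sort(np.concatenate([imax, imin]))

# Successive extremes are half a period apart; the log ratio of their
# displacements measures the damping per half period.
periods = 2.0 * np.diff(t[extrema])
decrements = np.log(np.abs(h[extrema][:-1] / h[extrema][1:]))
print(f"T_d ~ {periods.mean():.2f} s, half-period decrement ~ {decrements.mean():.2f}")
```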
NASA Astrophysics Data System (ADS)
Richman, Barbara T.
Reports of Loch Ness monsters, Bigfoot, and the Yeti spring up from time to time, sparking scientific controversy about the veracity of these observations. Now an organization has been established to help cull, analyze, and disseminate information on the alleged creatures. The International Society of Cryptozoology, formed at a January meeting at the U.S. National Museum of Natural History of the Smithsonian Institution, will serve as the focal point for the investigation, analysis, publication, and discussion of animals of unexpected form or size or of unexpected occurrences in time or space.
Analogical Inference and Analogical Access.
1987-08-04
Dedre Gentner. Technical Report, 85-9-1 to 88-8-3j, dated 87-4-8, 48 pages. ...systems of relations, rather than isolated predicates. I will also use the term systematicity at times to refer to the presence of a system of relations... agreement among the judges; they were within one point of each other 97% of the time. These patterns were confirmed by an analysis of variance and by
Two-craft Coulomb formation study about circular orbits and libration points
NASA Astrophysics Data System (ADS)
Inampudi, Ravi Kishore
This dissertation investigates the dynamics and control of a two-craft Coulomb formation in circular orbits and at libration points; it addresses relative equilibria, stability and optimal reconfigurations of such formations. The relative equilibria of a two-craft tether formation connected by line-of-sight elastic forces moving in circular orbits and at libration points are investigated. In circular Earth orbits and Earth-Moon libration points, the radial, along-track, and orbit normal great circle equilibria conditions are found. An example of modeling the tether force using Coulomb force is discussed. Furthermore, the non-great-circle equilibria conditions for a two-spacecraft tether structure in circular Earth orbit and at collinear libration points are developed. Then the linearized dynamics and stability analysis of a 2-craft Coulomb formation at Earth-Moon libration points are studied. For orbit-radial equilibrium, Coulomb forces control the relative distance between the two satellites. The gravity gradient torques on the formation due to the two planets help stabilize the formation. Similar analysis is performed for along-track and orbit-normal relative equilibrium configurations. Where necessary, the craft use a hybrid thrusting-electrostatic actuation system. The two-craft dynamics at the libration points provide a general framework with circular Earth orbit dynamics forming a special case. In the presence of differential solar drag perturbations, a Lyapunov feedback controller is designed to stabilize a radial equilibrium, two-craft Coulomb formation at collinear libration points. The second part of the thesis investigates optimal reconfigurations of two-craft Coulomb formations in circular Earth orbits by applying nonlinear optimal control techniques. The objective of these reconfigurations is to maneuver the two-craft formation between two charged equilibria configurations. The reconfiguration of spacecraft is posed as an optimization problem using the calculus of variations approach. The optimality criteria are minimum time, minimum acceleration of the separation distance, minimum Coulomb and electric propulsion fuel usage, and minimum electrical power consumption. The continuous time problem is discretized using a pseudospectral method, and the resulting finite dimensional problem is solved using a sequential quadratic programming algorithm. The software package, DIDO, implements this approach. This second part illustrates how pseudospectral methods significantly simplify the solution-finding process.
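The core object in any pseudospectral transcription of the kind described is the differentiation matrix on non-uniform collocation nodes. The sketch below builds the standard Chebyshev-Gauss-Lobatto version (following the classical construction, e.g. Trefethen's cheb) and checks it on a test function; it is the building block of such a discretization, not the DIDO solver itself.

```python
import numpy as np

def cheb(n):
    """Chebyshev-Gauss-Lobatto nodes on [-1, 1] and the differentiation
    matrix D, so that (D @ f(x)) approximates f'(x) spectrally."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for the diagonal
    return D, x

# Spectral accuracy on a smooth test function.
D, x = cheb(16)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
print(f"max derivative error with 17 nodes: {err:.1e}")   # roughly 1e-10 or better
```

In a full transcription, the dynamics constraints x'(t) = f(x, u) are enforced as D @ X = f(X, U) at the nodes and handed to an SQP solver, which is the pattern the dissertation attributes to DIDO.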
Experiments in Wave Record Analysis.
1980-09-01
manipulation of wave records in digital form to produce a power density spectrum (PDS) with great efficiency. The PDS gives a presentation of the... instantaneous surface elevation digital points (the zero level reference). The individual period, Ti, was taken as the time difference between two successive... CONCLUSIONS: This thesis presents the results of experiments in the analysis of ocean wave records. For this purpose 19 digitized records obtained from a wave
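The two quantities the fragments describe, a power density spectrum and zero-crossing periods, are straightforward to compute; a sketch on a synthetic record (the sampling rate and wave content are assumptions):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)
fs = 4.0                                  # samples per second (assumed)
t = np.arange(0, 1200, 1.0 / fs)
eta = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.normal(size=t.size)  # elevation

# Power density spectrum of the surface elevation record.
f, pds = welch(eta, fs=fs, nperseg=1024)
print(f"spectral peak near {f[np.argmax(pds)]:.2f} Hz")

# Individual periods T_i: time between successive upward zero crossings
# of the zero-mean record (the 'zero level reference').
eta0 = eta - eta.mean()
up = np.where((eta0[:-1] < 0) & (eta0[1:] >= 0))[0]
periods = np.diff(t[up])
print(f"mean zero-crossing period: {periods.mean():.1f} s")
```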
On the Relation Between Facular Bright Points and the Magnetic Field
NASA Astrophysics Data System (ADS)
Berger, Thomas; Shine, Richard; Tarbell, Theodore; Title, Alan; Scharmer, Goran
1994-12-01
Multi-spectral images of magnetic structures in the solar photosphere are presented. The images were obtained in the summers of 1993 and 1994 at the Swedish Solar Telescope on La Palma using the tunable birefringent Solar Optical Universal Polarimeter (SOUP) filter, a 10 Angstrom wide interference filter tuned to 4304 Angstroms in the band head of the CH radical (the Fraunhofer G-band), and a 3 Angstrom wide interference filter centered on the Ca II K absorption line. Three large-format CCD cameras with shuttered exposures on the order of 10 msec and frame rates of up to 7 frames per second were used to create time series of both quiet and active region evolution. The full field-of-view is 60×80 arcseconds (44×58 Mm). With the best seeing, structures as small as 0.22 arcseconds (160 km) in diameter are clearly resolved. Post-processing of the images results in rigid coalignment of the image sets to an accuracy comparable to the spatial resolution. Facular bright points with mean diameters of 0.35 arcseconds (250 km) and elongated filaments with lengths on the order of arcseconds (10^3 km) are imaged with contrast values of up to 60% by the G-band filter. Overlay of these images on contemporaneous Fe I 6302 Angstrom magnetograms and Ca II K images reveals that the bright points occur, without exception, on sites of magnetic flux through the photosphere. However, instances of concentrated and diffuse magnetic flux and Ca II K emission without associated bright points are common, leading to the conclusion that the presence of magnetic flux is a necessary but not sufficient condition for the occurrence of resolvable facular bright points. Comparison of the G-band and continuum images shows a complex relation between structures in the two bandwidths: bright points exceeding 350 km in extent correspond to distinct bright structures in the continuum; smaller bright points show no clear relation to continuum structures. Size and contrast statistical cross-comparisons compiled from measurements of over two thousand bright point structures are presented. Preliminary analysis of the time evolution of bright points in the G-band reveals that the dominant mode of bright point evolution is fission of larger structures into smaller ones and fusion of small structures into conglomerate structures. The characteristic time scale for the fission/fusion process is on the order of minutes.
A Note on the Problem of Proper Time in Weyl Space-Time
NASA Astrophysics Data System (ADS)
Avalos, R.; Dahia, F.; Romero, C.
2018-02-01
We discuss the question of whether or not a general Weyl structure is a suitable mathematical model of space-time. This is an issue that has been in debate since Weyl formulated his unified field theory for the first time. We do not present the discussion from the point of view of a particular unification theory, but instead from a more general standpoint, in which the viability of such a structure as a model of space-time is investigated. Our starting point is the well known axiomatic approach to space-time given by Ehlers, Pirani and Schild (EPS). In this framework, we carry out an exhaustive analysis of what is required for a consistent definition of proper time and show that such a definition leads to the prediction of the so-called "second clock effect". We take the view that if, based on experience, we were to reject space-time models predicting this effect, this could be incorporated as the last axiom in the EPS approach. Finally, we provide a proof that, in this case, we are led to a Weyl integrable space-time as the most general structure that would be suitable to model space-time.
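For context, the geometric statement behind the second clock effect can be summarized in one formula; this is a standard textbook statement, not taken from the paper, and sign and normalization conventions for the Weyl 1-form vary:

```latex
% In a Weyl structure with 1-form \sigma_\mu, the length \ell of a vector
% parallel-transported along a curve \gamma obeys
\frac{d\ell}{d\tau} = \ell \, \sigma_\mu \frac{dx^\mu}{d\tau}
\quad\Longrightarrow\quad
\ell(\tau) = \ell_0 \exp\!\left( \int_\gamma \sigma_\mu \, dx^\mu \right).
% The exponent is path dependent unless \sigma_\mu = \partial_\mu \phi;
% in that Weyl integrable case the second clock effect vanishes.
```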
Complementing Operating Room Teaching With Video-Based Coaching.
Hu, Yue-Yung; Mazer, Laura M; Yule, Steven J; Arriaga, Alexander F; Greenberg, Caprice C; Lipsitz, Stuart R; Gawande, Atul A; Smink, Douglas S
2017-04-01
Surgical expertise demands technical and nontechnical skills. Traditionally, surgical trainees acquired these skills in the operating room; however, operative time for residents has decreased with duty hour restrictions. As in other professions, video analysis may help maximize the learning experience. To develop and evaluate a postoperative video-based coaching intervention for residents. In this mixed methods analysis, 10 senior (postgraduate year 4 and 5) residents were videorecorded operating with an attending surgeon at an academic tertiary care hospital. Each video formed the basis of a 1-hour one-on-one coaching session conducted by the operative attending; although a coaching framework was provided, participants determined the specific content collaboratively. Teaching points were identified in the operating room and the video-based coaching sessions; iterative inductive coding, followed by thematic analysis, was performed. Teaching points made in the operating room were compared with those in the video-based coaching sessions with respect to initiator, content, and teaching technique, adjusting for time. Among 10 cases, surgeons made more teaching points per unit time (63.0 vs 102.7 per hour) while coaching. Teaching in the video-based coaching sessions was more resident centered; attendings were more inquisitive about residents' learning needs (3.30 vs 0.28, P = .04), and residents took more initiative to direct their education (27% [198 of 729 teaching points] vs 17% [331 of 1977 teaching points], P < .001). Surgeons also more frequently validated residents' experiences (8.40 vs 1.81, P < .01), and they tended to ask more questions to promote critical thinking (9.30 vs 3.32, P = .07) and set more learning goals (2.90 vs 0.28, P = .11). More complex topics, including intraoperative decision making (mean, 9.70 vs 2.77 instances per hour, P = .03) and failure to progress (mean, 1.20 vs 0.13 instances per hour, P = .04) were addressed, and they were more thoroughly developed and explored. Excerpts of dialogue are presented to illustrate these findings. Video-based coaching is a novel and feasible modality for supplementing intraoperative learning. Objective evaluation demonstrates that video-based coaching may be particularly useful for teaching higher-level concepts, such as decision making, and for individualizing instruction and feedback to each resident.
Multi-scale correlations in different futures markets
NASA Astrophysics Data System (ADS)
Bartolozzi, M.; Mellen, C.; di Matteo, T.; Aste, T.
2007-07-01
In the present work we investigate the multiscale nature of the correlations for high frequency data (1 min) in different futures markets over a period of two years, starting on the 1st of January 2003 and ending on the 31st of December 2004. In particular, by using the concept of local Hurst exponent, we point out how the behaviour of this parameter, usually considered as a benchmark for persistency/antipersistency recognition in time series, is largely time-scale dependent in the market context. These findings are a direct consequence of the intrinsic complexity of a system where trading strategies are scale-adaptive. Moreover, our analysis points out different regimes in the dynamical behaviour of the market indices under consideration.
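A sliding-window estimator is the simplest way to see the time dependence the authors point out. Below, a plain detrended-fluctuation (DFA-1) Hurst estimate is recomputed inside each window; the window length, box sizes, and synthetic series are assumptions, not the paper's setup.

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64)):
    """DFA-1 Hurst exponent: slope of log F(s) vs log s, where F(s) is the
    RMS of linearly detrended profile segments of length s."""
    profile = np.cumsum(x - np.mean(x))
    fluct = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)                 # local linear trend
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

# Local Hurst exponent in sliding windows over synthetic 1-min returns.
rng = np.random.default_rng(6)
returns = rng.normal(size=5000)                  # H ~ 0.5 by construction
window = 512
local_h = [dfa_hurst(returns[i:i + window])
           for i in range(0, len(returns) - window, 256)]
print(np.round(local_h, 2))
```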
A novel dynamic sensing of wearable digital textile sensor with body motion analysis.
Yang, Chang-Ming; Lin, Zhan-Sheng; Hu, Chang-Lin; Chen, Yu-Shih; Ke, Ling-Yi; Chen, Yin-Rui
2010-01-01
This work proposes an innovative textile sensor system to monitor dynamic body movement and human posture by attaching wearable digital sensors to analyze body motion. The proposed system can display and analyze signals when individuals are walking, running, veering around, walking up and down stairs, or falling down, using a wearable monitoring system that reacts to the coordination between the body and feet. Several digital sensor designs are embedded in clothing and wearable apparel. Any pressure point can determine which activity is underway. Importantly, the wearable digital sensors and wearable monitoring system allow adaptive, real-time monitoring of posture, velocity, and acceleration, as well as non-invasive transmission of healthcare data and point-of-care (POC) use in home and non-clinical environments.
Fine Pointing of Military Spacecraft
2007-03-01
estimate is high. But feedback controls are attempting to fix the attitude at the next time step with error based on the previous time step without using... Stability Analysis: Consider not using the reference trajectory in the feedback signal. The previous stability proofs (Refs. [43], [46]) are no... robust steering law and quaternion feedback control [52]. TASS2 has a center-of-gravity offset disturbance that must be countered by the three CMGs.
Variance Analysis of Unevenly Spaced Time Series Data
1995-12-01
Data were subsequently removed from each simulated data set using typical TWSTFT data patterns to create two unevenly spaced sets with average... and techniques are presented for correcting errors caused by uneven data spacing in typical TWSTFT data sets. INTRODUCTION: Data points obtained from an... the possible data available. In TWSTFT, the task is less daunting: time transfers are typically measured on Monday, Wednesday, and Friday, so, in a
Empirical mode decomposition and long-range correlation analysis of sunspot time series
NASA Astrophysics Data System (ADS)
Zhou, Yu; Leung, Yee
2010-12-01
Sunspots, which are the best known and most variable features of the solar surface, affect our planet in many ways. The number of sunspots during a period of time is highly variable and arouses strong research interest. When multifractal detrended fluctuation analysis (MF-DFA) is employed to study the fractal properties and long-range correlation of the sunspot series, some spurious crossover points might appear because of the periodic and quasi-periodic trends in the series. However many cycles of solar activities can be reflected by the sunspot time series. The 11-year cycle is perhaps the most famous cycle of the sunspot activity. These cycles pose problems for the investigation of the scaling behavior of sunspot time series. Using different methods to handle the 11-year cycle generally creates totally different results. Using MF-DFA, Movahed and co-workers employed Fourier truncation to deal with the 11-year cycle and found that the series is long-range anti-correlated with a Hurst exponent, H, of about 0.12. However, Hu and co-workers proposed an adaptive detrending method for the MF-DFA and discovered long-range correlation characterized by H≈0.74. In an attempt to get to the bottom of the problem in the present paper, empirical mode decomposition (EMD), a data-driven adaptive method, is applied to first extract the components with different dominant frequencies. MF-DFA is then employed to study the long-range correlation of the sunspot time series under the influence of these components. On removing the effects of these periods, the natural long-range correlation of the sunspot time series can be revealed. With the removal of the 11-year cycle, a crossover point located at around 60 months is discovered to be a reasonable point separating two different time scale ranges, H≈0.72 and H≈1.49. And on removing all cycles longer than 11 years, we have H≈0.69 and H≈0.28. The three cycle-removing methods—Fourier truncation, adaptive detrending and the proposed EMD-based method—are further compared, and possible reasons for the different results are given. Two numerical experiments are designed for quantitatively evaluating the performances of these three methods in removing periodic trends with inexact/exact cycles and in detecting the possible crossover points.
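The pipeline the abstract describes (decompose, drop the cyclic modes, re-estimate scaling) can be sketched with an off-the-shelf EMD package. The PyEMD package and its emd() call are an assumption about tooling, not the authors' code, and the subsequent scaling estimate can reuse any standard DFA implementation (such as the dfa_hurst sketch given earlier in this document).

```python
import numpy as np
from PyEMD import EMD   # assumed package (pip install EMD-signal), not the authors' code

# Synthetic stand-in for monthly sunspot numbers: noise plus an
# 11-year (132-month) cycle.
rng = np.random.default_rng(7)
n = 2400
t = np.arange(n)
signal = 50 * np.sin(2 * np.pi * t / 132.0) + rng.normal(0, 10, n)

imfs = EMD().emd(signal)                 # data-driven mode decomposition

# Identify slow modes by their mean zero-crossing spacing and remove
# every mode with a characteristic period of ~11 years or longer.
def mean_period(imf):
    zc = np.sum(np.diff(np.sign(imf)) != 0)
    return 2.0 * len(imf) / max(zc, 1)

detrended = sum(imf for imf in imfs if mean_period(imf) < 132.0)
# A DFA/MF-DFA estimate on `detrended` then probes the natural long-range
# correlation with the 11-year cycle removed.
print(f"{len(imfs)} IMFs; kept modes with period < 132 months")
```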
NASA Technical Reports Server (NTRS)
Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff
2016-01-01
The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.
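The slew-screening idea, deciding whether two observing orientations differ enough to warrant re-computing environmental fluxes, reduces to an angular-distance test between attitude quaternions. A generic sketch follows; the threshold, cache structure, and quaternions are placeholders, not WFIRST values.

```python
import numpy as np

def slew_angle_deg(q1, q2):
    """Smallest rotation angle between two unit attitude quaternions."""
    dot = abs(np.dot(q1 / np.linalg.norm(q1), q2 / np.linalg.norm(q2)))
    return np.degrees(2.0 * np.arccos(np.clip(dot, -1.0, 1.0)))

THRESH_DEG = 5.0                    # assumed 'sufficiently different' slew
flux_cache = {}

def fluxes_for(orientation_id, q, compute):
    """Reuse a cached flux solution unless the slew exceeds the threshold."""
    for q_ref, flux in flux_cache.values():
        if slew_angle_deg(q, q_ref) < THRESH_DEG:
            return flux                           # close enough: reuse
    flux = compute(q)                             # expensive radiation run
    flux_cache[orientation_id] = (q, flux)
    return flux

# Example: a 3-degree slew about z reuses the cached result.
q_a = np.array([1.0, 0.0, 0.0, 0.0])
half = np.radians(3.0) / 2
q_b = np.array([np.cos(half), 0.0, 0.0, np.sin(half)])
f = fluxes_for("obs1", q_a, lambda q: np.ones(3))
print(np.allclose(f, fluxes_for("obs2", q_b, lambda q: np.zeros(3))))  # True
```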
A passive pendulum wobble damper for a low spin rate Jupiter flyby spacecraft
NASA Technical Reports Server (NTRS)
Fowler, R. C.
1972-01-01
When the spacecraft has a low spin rate and precise pointing requirements, the wobble angle must be damped in a time period equivalent to a very few wobble cycles. The design, analysis, and test of a passive pendulum wobble damper are described.
48 CFR 252.246-7003 - Notification of Potential Safety Issues.
Code of Federal Regulations, 2013 CFR
2013-10-01
.... Critical safety item means a part, subassembly, assembly, subsystem, installation equipment, or support... impact for systems, or subsystems, assemblies, subassemblies, or parts integral to a system, acquired by... the extent known at the time of notification; (iv) A point of contact to coordinate problem analysis...
48 CFR 252.246-7003 - Notification of Potential Safety Issues.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... Critical safety item means a part, subassembly, assembly, subsystem, installation equipment, or support... impact for systems, or subsystems, assemblies, subassemblies, or parts integral to a system, acquired by... the extent known at the time of notification; (iv) A point of contact to coordinate problem analysis...
48 CFR 252.246-7003 - Notification of Potential Safety Issues.
Code of Federal Regulations, 2011 CFR
2011-10-01
.... Critical safety item means a part, subassembly, assembly, subsystem, installation equipment, or support... impact for systems, or subsystems, assemblies, subassemblies, or parts integral to a system, acquired by... the extent known at the time of notification; (iv) A point of contact to coordinate problem analysis...
48 CFR 252.246-7003 - Notification of Potential Safety Issues.
Code of Federal Regulations, 2012 CFR
2012-10-01
.... Critical safety item means a part, subassembly, assembly, subsystem, installation equipment, or support... impact for systems, or subsystems, assemblies, subassemblies, or parts integral to a system, acquired by... the extent known at the time of notification; (iv) A point of contact to coordinate problem analysis...
48 CFR 252.246-7003 - Notification of Potential Safety Issues.
Code of Federal Regulations, 2014 CFR
2014-10-01
.... Critical safety item means a part, subassembly, assembly, subsystem, installation equipment, or support... impact for systems, or subsystems, assemblies, subassemblies, or parts integral to a system, acquired by... the extent known at the time of notification; (iv) A point of contact to coordinate problem analysis...
A Model of Small Group Facilitator Competencies
ERIC Educational Resources Information Center
Kolb, Judith A.; Jin, Sungmi; Song, Ji Hoon
2008-01-01
This study used small group theory, quantitative and qualitative data collected from experienced practicing facilitators at three points of time, and a building block process of collection, analysis, further collection, and consolidation to develop a model of small group facilitator competencies. The proposed model has five components:…
Time perception in patients with major depressive disorder during vagus nerve stimulation.
Biermann, T; Kreil, S; Groemer, T W; Maihöfner, C; Richter-Schmiedinger, T; Kornhuber, J; Sperling, W
2011-07-01
Affective disorders may affect patients' time perception. Several studies have described time as a function of the frontal lobe. The activating effects of vagus nerve stimulation (VNS) on the frontal lobe might also modulate time perception in patients with major depressive disorder (MDD). Time perception was investigated in 30 patients with MDD and in 7 patients with therapy-resistant MDD. In these 7 patients, a VNS system was implanted and time perception was assessed before and during stimulation. A time estimation task in which patients were asked "How many seconds have passed?" tested time perception at 4 defined time points (34 s, 77 s, 192 s and 230 s). The differences between the estimated and actual durations were calculated and used for subsequent analysis. Patients with MDD and healthy controls estimated the set time points relatively accurately. A general linear model revealed a significant main effect of group but not of age or sex. The passing of time was perceived as significantly slower in patients undergoing VNS compared to patients with MDD at all time points (T34: t = −4.2; df = 35; p < 0.001; T77: t = −4.8; df = 35; p < 0.001; T192: t = −2.0; df = 35; p = 0.059; T230: t = −2.2; df = 35; p = 0.039), as well as compared to healthy controls (at T77 only: t = 4.1; df = 35; p < 0.001). There were no differences in time perception with regard to age, sex or polarity of depression (uni- or bipolar). VNS is capable of changing the perception of time. This discovery furthers the basic research on circadian rhythms in patients with psychiatric disorders.
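The analysis step described, differencing estimated against actual durations and comparing groups, takes a few lines. The sketch below uses made-up estimates for two groups at one probe time, with scipy's Welch t-test standing in for the paper's general linear model.

```python
import numpy as np
from scipy import stats

actual = 77.0                                # probe time point, seconds
rng = np.random.default_rng(8)
est_mdd = rng.normal(78.0, 10.0, size=30)    # hypothetical MDD estimates
est_vns = rng.normal(62.0, 12.0, size=7)     # hypothetical VNS estimates

# Difference between estimated and actual duration per participant;
# negative values mean less time was felt to have passed.
diff_mdd = est_mdd - actual
diff_vns = est_vns - actual

t, p = stats.ttest_ind(diff_vns, diff_mdd, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")   # negative t: slower perceived passage of time
```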
Lavender, Jason M; Utzinger, Linsey M; Cao, Li; Wonderlich, Stephen A; Engel, Scott G; Mitchell, James E; Crosby, Ross D
2016-04-01
Although negative affect (NA) has been identified as a common trigger for bulimic behaviors, findings regarding NA following such behaviors have been mixed. This study examined reciprocal associations between NA and bulimic behaviors using real-time, naturalistic data. Participants were 133 women with bulimia nervosa (BN) according to the 4th edition of the Diagnostic and Statistical Manual of Mental Disorders, who completed a 2-week ecological momentary assessment protocol in which they recorded bulimic behaviors and provided multiple daily ratings of NA. A multilevel autoregressive cross-lagged analysis was conducted to examine concurrent, first-order autoregressive, and prospective associations between NA, binge eating, and purging across the day. Results revealed positive concurrent associations between all variables across all time points, as well as numerous autoregressive associations. For prospective associations, higher NA predicted subsequent bulimic symptoms at multiple time points; conversely, binge eating predicted lower NA at multiple time points, and purging predicted higher NA at 1 time point. Several autoregressive and prospective associations were also found between binge eating and purging. This study used a novel approach to examine NA in relation to bulimic symptoms, contributing to the existing literature by directly examining the magnitude of the associations, examining differences in the associations across the day, and controlling for other associations in testing each effect in the model. These findings may have relevance for understanding the etiology and/or maintenance of bulimic symptoms, as well as potentially informing psychological interventions for BN. (c) 2016 APA, all rights reserved.
Integrated microfluidic technology for sub-lethal and behavioral marine ecotoxicity biotests
NASA Astrophysics Data System (ADS)
Huang, Yushi; Reyes Aldasoro, Constantino Carlos; Persoone, Guido; Wlodkowic, Donald
2015-06-01
Changes in behavioral traits exhibited by small aquatic invertebrates are increasingly postulated as ethically acceptable and more sensitive endpoints for detection of water-borne ecotoxicity than conventional mortality assays. Despite the importance of such behavioral biotests, their implementation is profoundly limited by the lack of appropriate biocompatible automation, integrated optoelectronic sensors, and the associated electronics and analysis algorithms. This work outlines the development of a proof-of-concept miniaturized Lab-on-a-Chip (LOC) platform for rapid water toxicity tests based on changes in swimming patterns exhibited by Artemia franciscana (Artoxkit M™) nauplii. In contrast to conventionally performed end-point analysis based on counting the number of dead/immobile specimens, we performed a time-resolved video data analysis to dynamically assess the impact of a reference toxicant on the swimming pattern of A. franciscana. Our system design combined: (i) an innovative microfluidic device keeping free-swimming Artemia sp. nauplii under continuous microperfusion as a means of toxin delivery; (ii) a mechatronic interface for user-friendly fluidic actuation of the chip; and (iii) miniaturized video acquisition for movement analysis of test specimens. The system was capable of performing fully programmable time-lapse and video-microscopy of multiple samples for rapid ecotoxicity analysis. It enabled development of a user-friendly and inexpensive test protocol to dynamically detect sub-lethal behavioral end-points such as changes in speed of movement or distance traveled by each animal.
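The time-resolved behavioral end-points mentioned, speed and distance traveled, follow directly from tracked centroid positions. A generic sketch assuming per-frame (x, y) centroids at a known frame rate; the frame rate, pixel scale, and track are placeholders.

```python
import numpy as np

def movement_stats(xy, fps=15.0, px_per_mm=10.0):
    """Distance traveled (mm) and mean speed (mm/s) from per-frame centroid
    positions, shape (n_frames, 2), in pixels."""
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1) / px_per_mm
    distance = steps.sum()
    mean_speed = steps.mean() * fps
    return distance, mean_speed

# Synthetic track for one Artemia nauplius over 60 s at 15 fps.
rng = np.random.default_rng(9)
track = np.cumsum(rng.normal(0, 2.0, size=(900, 2)), axis=0)
d, v = movement_stats(track)
print(f"distance = {d:.1f} mm, mean speed = {v:.2f} mm/s")
```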
Instantons in Self-Organizing Logic Gates
NASA Astrophysics Data System (ADS)
Bearden, Sean R. B.; Manukian, Haik; Traversa, Fabio L.; Di Ventra, Massimiliano
2018-03-01
Self-organizing logic is a recently suggested framework that allows the solution of Boolean truth tables "in reverse"; i.e., it is able to satisfy the logical proposition of gates regardless of which terminal(s) the truth value is assigned ("terminal-agnostic logic"). It can be realized if time nonlocality (memory) is present. A practical realization of self-organizing logic gates (SOLGs) can be done by combining circuit elements with and without memory. By employing one such realization, we show, numerically, that SOLGs exploit elementary instantons to reach equilibrium points. Instantons are classical trajectories of the nonlinear equations of motion describing SOLGs and connect topologically distinct critical points in the phase space. By linear analysis at those points, we show that these instantons connect the initial critical point of the dynamics, with at least one unstable direction, directly to the final fixed point. We also show that the memory content of these gates affects only the relaxation time to reach the logically consistent solution. Finally, we demonstrate, by solving the corresponding stochastic differential equations, that, since instantons connect critical points, noise and perturbations may change the instanton trajectory in the phase space but not the initial and final critical points. Therefore, even for extremely large noise levels, the gates self-organize to the correct solution. Our work provides a physical understanding of, and can serve as an inspiration for, models of bidirectional logic gates that are emerging as important tools in physics-inspired, unconventional computing.
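The linear analysis the abstract invokes, classifying critical points by the eigenvalues of the Jacobian, is easy to illustrate on a toy two-variable system; the system below is an invented example, not a SOLG circuit model.

```python
import numpy as np

# Toy dynamical system dx/dt = f(x) with critical points at (0,0) and (±1,0).
def f(x):
    return np.array([x[0] - x[0] ** 3, -x[1]])

def jacobian(x, eps=1e-6):
    """Central-difference numerical Jacobian of f at x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

for xc in ([0.0, 0.0], [1.0, 0.0]):
    eigs = np.linalg.eigvals(jacobian(np.array(xc)))
    kind = "unstable direction present" if np.any(eigs.real > 0) else "stable"
    print(xc, np.round(eigs.real, 3), kind)
# (0,0) has one positive eigenvalue (like the initial critical point of an
# instanton); (1,0) is a stable fixed point (like the final one).
```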