Atomic temporal interval relations in branching time: calculation and application
NASA Astrophysics Data System (ADS)
Anger, Frank D.; Ladkin, Peter B.; Rodriguez, Rita V.
1991-03-01
A practical method is presented for reasoning about intervals in a branching-time model that is dense, unbounded, future-branching, and without rejoining branches. The discussion is based on heuristic constraint-propagation techniques using the relation algebra of binary temporal relations among intervals over the branching-time model. This technique has been applied with success by Allen and others to models of intervals over linear time, and is of cubic-time complexity. To extend it to branching-time models, it is necessary to calculate compositions of the relations; thus, the table of compositions for the 'atomic' relations is computed, enabling rapid determination of the composition of arbitrary relations, expressed as disjunctions or unions of the atomic relations.
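The lifting of atomic compositions to arbitrary disjunctive relations that this abstract describes can be sketched in a few lines. The composition table below is a tiny illustrative subset of the linear-time (Allen) table, not the branching-time table computed in the paper:

```python
# Sketch: composing disjunctive interval relations from an atomic
# composition table, as in constraint propagation over relation algebras.
# Only a small, well-known subset of the linear-time (Allen) table is
# included here for illustration.
ATOMIC_COMPOSITION = {
    ("before", "before"): {"before"},
    ("before", "meets"):  {"before"},
    ("meets",  "meets"):  {"before"},
    ("during", "during"): {"during"},
}

def compose(R, S):
    """Composition of disjunctive relations: union of atomic compositions."""
    out = set()
    for r in R:
        for s in S:
            out |= ATOMIC_COMPOSITION[(r, s)]
    return out

print(compose({"before"}, {"before", "meets"}))  # {'before'}
```

Once the atomic table is known, the composition of arbitrary unions is just this double loop, which is what makes the paper's table of atomic compositions sufficient.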
40 CFR 1065.550 - Gas analyzer range validation, drift validation, and drift correction.
Code of Federal Regulations, 2010 CFR
2010-07-01
... interval (i.e., do not set them to zero). A third calculation of composite brake-specific emission values... from each test interval and sets any negative mass (or mass rate) values to zero before calculating the... value is less than the standard by at least two times the absolute difference between the uncorrected...
Entanglement negativity after a local quantum quench in conformal field theories
NASA Astrophysics Data System (ADS)
Wen, Xueda; Chang, Po-Yao; Ryu, Shinsei
2015-08-01
We study the time evolution of the entanglement negativity after a local quantum quench in (1 + 1)-dimensional conformal field theories (CFTs), which we introduce by suddenly joining two initially decoupled CFTs at their end points. We calculate the negativity evolution for both adjacent intervals and disjoint intervals explicitly. For two adjacent intervals, the entanglement negativity grows logarithmically in time right after the quench. After developing a plateau-like feature, the entanglement negativity drops to the ground-state value. For the case of two spatially separated intervals, a light-cone behavior is observed in the negativity evolution; in addition, a long-range entanglement, which is independent of the distance between two intervals, can be created. Our results agree with the heuristic picture that quasiparticles, which carry entanglement, are emitted from the joining point and propagate freely through the system. Our analytical results are confirmed by numerical calculations based on a critical harmonic chain.
Disassembly time of deuterium-cluster-fusion plasma irradiated by an intense laser pulse.
Bang, W
2015-07-01
Energetic deuterium ions from large deuterium clusters (>10 nm diameter) irradiated by an intense laser pulse (>10¹⁶ W/cm²) produce DD fusion neutrons for a time interval determined by the geometry of the resulting fusion plasma. We present an analytical solution of this time interval, the plasma disassembly time, for deuterium plasmas that are cylindrical in shape. Assuming a symmetrically expanding deuterium plasma, we calculate the expected fusion neutron yield and compare with an independent calculation of the yield using the concept of a finite confinement time at a fixed plasma density. The calculated neutron yields agree quantitatively with the available experimental data. Our one-dimensional simulations indicate that one could expect a tenfold increase in total neutron yield by magnetically confining a 10-keV deuterium fusion plasma for 10 ns.
Disassembly time of deuterium-cluster-fusion plasma irradiated by an intense laser pulse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bang, W.
Energetic deuterium ions from large deuterium clusters (>10 nm diameter) irradiated by an intense laser pulse (>10¹⁶ W/cm²) produce DD fusion neutrons for a time interval determined by the geometry of the resulting fusion plasma. We show an analytical solution of this time interval, the plasma disassembly time, for deuterium plasmas that are cylindrical in shape. Assuming a symmetrically expanding deuterium plasma, we calculate the expected fusion neutron yield and compare with an independent calculation of the yield using the concept of a finite confinement time at a fixed plasma density. The calculated neutron yields agree quantitatively with the available experimental data. Our one-dimensional simulations indicate that one could expect a tenfold increase in total neutron yield by magnetically confining a 10-keV deuterium fusion plasma for 10 ns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, E.J.
1976-02-01
A computer program is described which calculates nuclide concentration histories, power or neutron flux histories, burnups, and fission-product birthrates for fueled experimental capsules subjected to neutron irradiations. Seventeen heavy nuclides in the chain from ²³²Th to ²⁴²Pu and a user-specified number of fission products are treated. A fourth-order Runge-Kutta method solves the differential equations for nuclide concentrations as a function of time. For a particular problem, a user-specified number of fuel regions may be treated. A fuel region is described by volume, length, and specific irradiation history. A number of initial fuel compositions may be specified for each fuel region. The irradiation history for each fuel region can be divided into time intervals, and a constant power density or a time-dependent neutron flux is specified for each time interval. Also, an independent cross-section set may be selected for each time interval in each irradiation history. The fission-product birthrates for the first composition of each fuel region are summed to give the total fission-product birthrates for the problem.
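The Runge-Kutta integration at the core of such a program can be illustrated on a minimal two-nuclide chain; the rates, step size, and chain below are illustrative stand-ins, not values from the report:

```python
import math

# Sketch: fourth-order Runge-Kutta integration of a toy two-nuclide chain,
#   dN1/dt = -l1*N1,   dN2/dt = l1*N1 - l2*N2,
# as a minimal stand-in for the program's 17-nuclide heavy-metal chain.
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

l1, l2 = 0.3, 0.1                      # illustrative decay/transmutation rates
def chain(t, y):
    n1, n2 = y
    return [-l1*n1, l1*n1 - l2*n2]

t, y, h = 0.0, [1.0, 0.0], 0.01
while t < 5.0 - 1e-9:
    y = rk4_step(chain, t, y, h)
    t += h

# Bateman (analytic) solution for comparison
n1_exact = math.exp(-l1*t)
n2_exact = l1/(l2 - l1) * (math.exp(-l1*t) - math.exp(-l2*t))
```

With a step of 0.01 the RK4 result matches the analytic Bateman solution to well below 10⁻⁶, which is the sort of accuracy that makes RK4 a common choice for such depletion chains.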
Recurrence time statistics for finite size intervals
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.
2004-12-01
We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to the Poissonian statistics when the width of the interval goes to zero. However, we caution that special attention to the size of the interval is required in order to guarantee that the short-time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
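The exponential decay of recurrence times to a finite interval is easy to reproduce empirically; a sketch for the fully chaotic logistic map, with an illustrative interval choice (not one of the systems studied in the paper):

```python
import random

# Sketch: empirical recurrence-time statistics for returns of the fully
# chaotic logistic map x -> 4x(1-x) to a finite interval [a, b].
# Interval and trajectory length are illustrative choices.
random.seed(1)
a, b = 0.2, 0.3                       # the finite-size interval
x = random.random()
recurrences, last_entry = [], None
for t in range(200_000):
    x = 4.0 * x * (1.0 - x)
    if a <= x <= b:
        if last_entry is not None:
            recurrences.append(t - last_entry)
        last_entry = t

mean_rec = sum(recurrences) / len(recurrences)
# For an exponential-like distribution, short recurrence times dominate:
short = sum(1 for r in recurrences if r <= mean_rec)
long_ = len(recurrences) - short
```

Plotting a histogram of `recurrences` on a log scale shows the roughly straight (exponential) tail; the deviations at the shortest times are where the memory effect discussed in the abstract would appear.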
ERIC Educational Resources Information Center
Nunn, John
2014-01-01
This paper describes how a microphone plugged in to a normal computer can be used to record the impacts of a ball bouncing on a table. The intervals between these impacts represent the "time of flight" of the ball. Since some energy is lost in each rebound, the time intervals get progressively smaller. Through calculation it is possible…
The orbital evolution of the asteroid 4179 Toutatis during 11,550 years
NASA Astrophysics Data System (ADS)
Zausaev, A. F.; Pushkaryov, A. N.
The orbital evolution of the asteroid 4179 Toutatis was followed using the Everhart method over the time interval from 9300 BC to 2250 AD. The closest encounters with the Earth were calculated over the course of the evolution. It is shown that this asteroid poses no danger to the Earth during the time interval from 9300 BC to 2250 AD.
Using a Programmable Calculator to Teach Theophylline Pharmacokinetics.
ERIC Educational Resources Information Center
Closson, Richard Grant
1981-01-01
A calculator program for a Texas Instruments Model 59 to predict serum theophylline concentrations is described. The program accommodates the input of multiple dose times at irregular intervals, clearance changes due to concurrent diseases, and patient age under 17 years. The calculations for five hypothetical patients are given. (Author/MLW)
A New Evaluation Method of Stored Heat Effect of Reinforced Concrete Wall of Cold Storage
NASA Astrophysics Data System (ADS)
Nomura, Tomohiro; Murakami, Yuji; Uchikawa, Motoyuki
Today it has become imperative to save energy by intermittently operating the refrigerator of a cold storage built with externally insulated reinforced concrete walls. The aim of this paper is to develop an evaluation method capable of numerically calculating the interval time for stopping the refrigerator, using the reinforced concrete wall as a heat-storage medium. Experiments with concrete models were performed in order to examine the time variation of internal temperature after the refrigerator stopped. In addition, a simulation method using three-dimensional unsteady FEM on a personal computer was introduced for easily analyzing the internal temperature variation. Using this method, it is possible to obtain the time variation of internal temperature and to calculate the interval time for stopping the refrigerator.
Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.
Ćwik, Michał; Józefczyk, Jerzy
2018-01-01
An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as such a calculation is an NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop. A constructive heuristic algorithm is applied for the relaxed optimization problem. The algorithm is compared with other previously developed heuristic algorithms based on the evolutionary and the middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the time of computation. The Wilcoxon paired-rank statistical test confirmed this conclusion.
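The makespan criterion and the middle-interval idea (solve the deterministic instance obtained by replacing each interval processing time with its midpoint) can be sketched as follows; the two-job, two-machine instance is illustrative, not from the paper's experiments:

```python
# Sketch: makespan of a permutation flow-shop with unlimited buffers,
# plus the "middle interval" relaxation: replace each interval processing
# time with its midpoint and solve the resulting deterministic instance.
def makespan(order, p):
    """p[j][m] = processing time of job j on machine m."""
    n_machines = len(p[0])
    done = [0.0] * n_machines            # completion time on each machine
    for j in order:
        for m in range(n_machines):
            start = max(done[m], done[m - 1] if m > 0 else 0.0)
            done[m] = start + p[j][m]
    return done[-1]

# Interval processing times: (low, high) per job per machine (toy instance).
intervals = [[(2, 4), (1, 3)],
             [(1, 3), (3, 5)]]
midpoints = [[(lo + hi) / 2 for lo, hi in job] for job in intervals]
# For two jobs, just enumerate both permutations of the midpoint instance.
best = min(([0, 1], [1, 0]), key=lambda o: makespan(o, midpoints))
```

The standard flow-shop recurrence, C(j, m) = max(C(j, m−1), C(j−1, m)) + p(j, m), is what the `done` array implements; the minmax-regret layer of the paper then wraps an optimization over processing-time scenarios around this inner computation.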
Kowalik, Grzegorz T; Knight, Daniel S; Steeden, Jennifer A; Tann, Oliver; Odille, Freddy; Atkinson, David; Taylor, Andrew; Muthurangu, Vivek
2015-02-01
To develop a real-time phase contrast MR sequence with high enough temporal resolution to assess cardiac time intervals. The sequence utilized spiral trajectories with an acquisition strategy that allowed a combination of temporal encoding (unaliasing by Fourier-encoding the overlaps using the temporal dimension; UNFOLD) and parallel imaging (sensitivity encoding; SENSE) to be used (UNFOLDed-SENSE). An in silico experiment was performed to determine the optimum UNFOLD filter. In vitro experiments were carried out to validate the accuracy of time interval calculation and peak mean velocity quantification. In addition, 15 healthy volunteers were imaged with the new sequence, and cardiac time intervals were compared to reference-standard Doppler echocardiography measures. For comparison, in silico, in vitro, and in vivo experiments were also carried out using sliding-window reconstructions. The in vitro experiments demonstrated good agreement between real-time spiral UNFOLDed-SENSE phase contrast MR and the reference-standard measurements of velocity and time intervals. The protocol was successfully performed in all volunteers. Subsequent measurement of time intervals produced values in keeping with literature values and in good agreement with the gold-standard echocardiography. Importantly, the proposed UNFOLDed-SENSE sequence outperformed the sliding-window reconstructions. Cardiac time intervals can be successfully assessed with UNFOLDed-SENSE real-time spiral phase contrast. Real-time MR assessment of cardiac time intervals may be beneficial in the assessment of patients with cardiac conditions such as diastolic dysfunction. © 2014 Wiley Periodicals, Inc.
NORSAR Detection Processing System.
1987-05-31
systems have been reliable. NTA/Lillestrom and Hamar will take a new initiative in mid-April regarding 04C. The line will be remeasured and if a certain...estimate of the ambient noise level at the site of the FINESA array, ground motion spectra were calculated for four time intervals. Two intervals were
Lecheta, Melise Cristine; Thyssen, Patricia Jacqueline; Moura, Mauricio Osvaldo
2015-12-01
The blowfly Sarconesia chlorogaster (Diptera: Calliphoridae) is of limited forensic use in South America due to the poorly known relationship between development time and temperature. The purpose of this study was to determine the development time of S. chlorogaster at different constant temperatures, thereby enabling the forensic use of this fly. Development of this species was examined by observing larval development at six temperatures (10, 15, 20, 25, 30, and 35 °C). The thermal constant (K), the minimum development threshold (t₀), and the development rate were calculated using linear regressions of the development time interval at five temperatures (10-30 °C). The development interval from egg to adult varied from 14.2 to 95.2 days, depending on temperature. The t₀ calculated for total immature development is 6.33 °C and the overall thermal constant is 355.51 degree-days (DD). Temperature affected the viability of pupae: at 35 °C, 100 % mortality was observed. Understanding the development rate across these temperatures now makes development of S. chlorogaster a forensically useful tool for estimating the postmortem interval.
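The arithmetic behind K and t₀ is a linear regression of development rate (1/development time) on temperature: rate = (T − t₀)/K = −t₀/K + T/K, so t₀ = −a/b and K = 1/b for fitted intercept a and slope b. A sketch using synthetic data generated from the reported values (t₀ = 6.33 °C, K = 355.51 DD), purely to illustrate the calculation:

```python
# Sketch: recovering t0 and K from a linear regression of development
# rate on temperature. The rates below are synthetic, generated from the
# paper's reported values so the fit recovers them exactly.
temps = [10.0, 15.0, 20.0, 25.0, 30.0]
rates = [(T - 6.33) / 355.51 for T in temps]   # rate = (T - t0)/K

n = len(temps)
mt = sum(temps) / n
mr = sum(rates) / n
b = sum((T - mt) * (r - mr) for T, r in zip(temps, rates)) / \
    sum((T - mt) ** 2 for T in temps)          # slope = 1/K
a = mr - b * mt                                # intercept = -t0/K
t0, K = -a / b, 1.0 / b
```

With real rearing data the regression would be run on observed 1/development-time values, and t₀ and K would carry the usual regression uncertainty.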
Have the temperature time series a structural change after 1998?
NASA Astrophysics Data System (ADS)
Werner, Rolf; Valev, Dimitare; Danov, Dimitar
2012-07-01
The global and hemispheric GISS and HadCRUT3 temperature time series were analysed for structural changes. We postulate continuity of the fitted temperature function of time across segment boundaries. The slopes are calculated for a sequence of segments delimited by time thresholds, using a standard method: restricted linear regression with dummy variables. We performed the calculations and tests for different numbers of thresholds. The thresholds are searched continuously within specified time intervals, and the F-statistic is used to determine the time points of the structural changes.
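The restricted (continuity-preserving) dummy-variable regression with a scanned threshold can be sketched as a piecewise-linear "hinge" fit; the synthetic series and break point below are illustrative, not the GISS/HadCRUT3 data:

```python
import numpy as np

# Sketch: one-threshold continuous piecewise-linear fit. The hinge term
# max(t - tau, 0) changes the slope at tau without breaking continuity;
# candidate thresholds are scanned and compared to a single line with an
# F-statistic. Synthetic data with a slope change at t = 60.
rng = np.random.default_rng(0)
t = np.arange(100, dtype=float)
y = 0.01 * t + 0.1 * np.clip(t - 60, 0, None) + rng.normal(0, 0.02, t.size)

def sse(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

sse0 = sse(np.column_stack([np.ones_like(t), t]), y)        # no break
best_tau, best_sse = None, np.inf
for tau in range(10, 90):
    hinge = np.clip(t - tau, 0, None)                       # continuity at tau
    s = sse(np.column_stack([np.ones_like(t), t, hinge]), y)
    if s < best_sse:
        best_tau, best_sse = tau, s

# F-statistic for the reduction in SSE from the extra (hinge) regressor.
# (With an *estimated* threshold the reference distribution is nonstandard,
# so this is a rough screen, not an exact test.)
F = ((sse0 - best_sse) / 1) / (best_sse / (t.size - 3))
```

The scan recovers the break point near t = 60, and the large F value flags the slope change as significant.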
An operational definition of a statistically meaningful trend.
Bryhn, Andreas C; Dimberg, Peter H
2011-04-28
Linear trend analysis of time series is standard procedure in many scientific disciplines. If the number of data points is large, a trend may be statistically significant even if the data are scattered far from the trend line. This study introduces and tests a quality criterion for time trends referred to as statistical meaningfulness, which is a stricter quality criterion for trends than high statistical significance. The time series is divided into intervals and interval mean values are calculated. Thereafter, r² and p values are calculated from regressions concerning time and interval mean values. If r² ≥ 0.65 at p ≤ 0.05 in any of these regressions, then the trend is regarded as statistically meaningful. Out of ten investigated time series from different scientific disciplines, five displayed statistically meaningful trends. A Microsoft Excel application (add-in) was developed which can perform statistical meaningfulness tests and which may make the test easier to apply in practice. The presented method for distinguishing statistically meaningful trends should be reasonably uncomplicated for researchers with basic statistics skills and may thus be useful for determining which trends are worth analysing further, for instance with respect to causal factors. The method can also be used for determining which segments of a time trend may be particularly worthwhile to focus on.
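The proposed criterion is straightforward to implement; a sketch in Python, where the number of intervals and the demo data are illustrative choices (the paper's own tool is an Excel add-in):

```python
import numpy as np
from scipy import stats

# Sketch of the criterion described above: split the series into
# intervals, regress interval means on interval mid-times, and call the
# trend "statistically meaningful" if r^2 >= 0.65 at p <= 0.05.
def statistically_meaningful(times, values, n_intervals=10):
    edges = np.linspace(times.min(), times.max(), n_intervals + 1)
    mids, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (times >= lo) & (times <= hi)
        if mask.any():
            mids.append((lo + hi) / 2)
            means.append(values[mask].mean())
    res = stats.linregress(mids, means)
    return bool(res.rvalue ** 2 >= 0.65 and res.pvalue <= 0.05)

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 500)
trend = 0.5 * t + rng.normal(0, 1.0, t.size)   # scattered, but a real trend
noise = rng.normal(0, 1.0, t.size)             # no trend
```

Averaging within intervals suppresses the scatter (the standard error of each mean shrinks with interval size), which is why a scattered-but-real trend like `trend` passes while pure noise generally does not.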
Cornforth, David J; Tarvainen, Mika P; Jelinek, Herbert F
2014-01-01
Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that either is incapable of discriminating CAN from controls or provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN.
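The histogram route to the Renyi entropy, H_α = log(Σᵢ pᵢ^α)/(1 − α), can be sketched as follows; the bin width and toy RR sequence are illustrative, not the study's data or any of its nine specific estimation methods:

```python
import math
from collections import Counter

# Sketch: Renyi entropy of order alpha from histogram-estimated
# probabilities of RR intervals, H_a = log(sum_i p_i^a) / (1 - a),
# with the Shannon entropy as the alpha -> 1 limit.
def renyi_entropy(samples, alpha, bin_width=25.0):
    counts = Counter(int(s // bin_width) for s in samples)
    n = len(samples)
    probs = [c / n for c in counts.values()]
    if alpha == 1.0:                            # Shannon limit
        return -sum(p * math.log(p) for p in probs)
    return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

rr = [812, 805, 790, 823, 840, 835, 810, 795, 801, 828]  # ms, toy data
h2 = renyi_entropy(rr, 2.0)
```

The bin width is exactly the kind of estimation choice the abstract is about: coarser bins merge distinct RR values and lower the entropy, which is one way different probability-estimation methods end up with different discriminative power.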
40 CFR 1065.550 - Gas analyzer range validation, drift validation, and drift correction.
Code of Federal Regulations, 2011 CFR
2011-07-01
... given test interval (i.e., do not set them to zero). A third calculation of composite brake-specific...) values from each test interval and sets any negative mass (or mass rate) values to zero before... less than the standard by at least two times the absolute difference between the uncorrected and...
Tremblay, Marc; Vézina, Hélène
2000-01-01
Intergenerational time intervals are frequently used in human population-genetics studies concerned with the ages and origins of mutations. In most cases, mean intervals of 20 or 25 years are used, regardless of the demographic characteristics of the population under study. Although these characteristics may vary from prehistoric to historical times, we suggest that this value is probably too low, and that the ages of some mutations may have been underestimated. Analyses were performed by using the BALSAC Population Register (Quebec, Canada), from which several intergenerational comparisons can be made. Family reconstitutions were used to measure interval lengths and variations in descending lineages. Various parameters were considered, such as spouse age at marriage, parental age, and reproduction levels. Mother-child and father-child intervals were compared. Intergenerational male and female intervals were also analyzed in 100 extended ascending genealogies. Results showed that a mean value of 30 years is a better estimate of intergenerational intervals than 20 or 25 years. As marked differences between male and female interval length were observed, specific values are proposed for mtDNA, autosomal, X-chromosomal, and Y-chromosomal loci. The applicability of these results for age estimates of mutations is discussed.
Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.
2007-01-01
Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano in the next year from today is 0.00028.
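The conditional probability reported above follows from the fitted interval distribution via P(eruption in (t, t+Δt] | quiet for t) = (F(t+Δt) − F(t))/(1 − F(t)); a sketch with a mixed-exponential CDF whose weights and rates are illustrative, not the fitted Medicine Lake parameters:

```python
import math

# Sketch: conditional probability of an event in the next dt years given
# elapsed time t since the last event, for a two-component
# mixed-exponential interval distribution. Parameters are illustrative.
def mixed_exp_cdf(t, w, lam1, lam2):
    return 1.0 - (w * math.exp(-lam1 * t) + (1 - w) * math.exp(-lam2 * t))

def conditional_prob(t, dt, w, lam1, lam2):
    F = lambda x: mixed_exp_cdf(x, w, lam1, lam2)
    return (F(t + dt) - F(t)) / (1.0 - F(t))

# With w = 1 the mixture collapses to a single exponential, which is
# memoryless: the conditional probability does not depend on t.
p_now   = conditional_prob(0.0,   1.0, 1.0, 1e-3, 1e-4)
p_later = conditional_prob(500.0, 1.0, 1.0, 1e-3, 1e-4)
```

The genuine mixture behaves differently: its hazard decreases with elapsed time (the fast component dies away first), which is exactly why the mixed-exponential model can encode short intra-episode intervals and long inter-episode gaps in one distribution.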
Maat, Barbara; Rademaker, Carin M A; Oostveen, Marloes I; Krediet, Tannette G; Egberts, Toine C G; Bollen, Casper W
2013-01-01
Prescribing glucose requires complex calculations because glucose is present in parenteral and enteral nutrition and drug vehicles, making it error prone and contributing to the burden of prescribing errors. Evaluation of the impact of a computerized physician order entry (CPOE) system with clinical decision support (CDS) for glucose control in neonatal intensive care patients (NICU) focusing on hypo- and hyperglycemic episodes and prescribing time efficiency. An interrupted time-series design to examine the effect of CPOE on hypo- and hyperglycemias and a crossover simulation study to examine the influence of CPOE on prescribing time efficiency. NICU patients at risk for glucose imbalance hospitalized at the University Medical Center Utrecht during 2001-2007 were selected. The risks of hypo- and hyperglycemias were expressed as incidences per 100 patient days in consecutive 3-month intervals during 3 years before and after CPOE implementation. To assess prescribing time efficiency, time needed to calculate glucose intake with and without CPOE was measured. No significant difference was found between pre- and post-CPOE mean incidences of hypo- and hyperglycemias per 100 hospital days of neonates at risk in every 3-month period (hypoglycemias, 4.0 [95% confidence interval, 3.2-4.8] pre-CPOE and 3.1 [2.7-3.5] post-CPOE, P = .88; hyperglycemias, 6.0 [4.3-7.7] pre-CPOE and 5.0 [3.7-6.3] post-CPOE, P = .75). CPOE led to a significant time reduction of 16% (1.3 [0.3-2.3] minutes) for simple and 60% (8.6 [5.1-12.1] minutes) for complex calculations. CPOE including a special CDS tool preserved accuracy for calculation and control of glucose intake and increased prescribing time efficiency.
Measures of native and non-native rhythm in a quantity language.
Stockmal, Verna; Markus, Dace; Bond, Dzintra
2005-01-01
The traditional phonetic classification of language rhythm as stress-timed or syllable-timed is attributed to Pike. Recently, two different proposals have been offered for describing the rhythmic structure of languages from acoustic-phonetic measurements. Ramus has suggested a metric based on the proportion of vocalic intervals and the variability (SD) of consonantal intervals. Grabe has proposed Pairwise Variability Indices (nPVI, rPVI) calculated from the differences in vocalic and consonantal durations between successive syllables. We have calculated both the Ramus and Grabe metrics for Latvian, traditionally considered a syllable rhythm language, and for Latvian as spoken by Russian learners. Native speakers and proficient learners were very similar whereas low-proficiency learners showed high variability on some properties. The metrics did not provide an unambiguous classification of Latvian.
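Grabe's pairwise variability indices have a simple closed form; a sketch with toy durations (the normalisation below follows the commonly used nPVI definition, which may differ in detail from the study's exact implementation):

```python
# Sketch: Pairwise Variability Indices over successive interval
# durations: nPVI (rate-normalised, typically for vocalic intervals) and
# rPVI (raw, typically for consonantal intervals). Toy values in ms.
def npvi(durations):
    pairs = list(zip(durations, durations[1:]))
    return 100.0 * sum(abs(a - b) / ((a + b) / 2.0) for a, b in pairs) / len(pairs)

def rpvi(durations):
    pairs = list(zip(durations, durations[1:]))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

vowels = [110.0, 95.0, 140.0, 120.0, 100.0]   # ms, illustrative
```

Perfectly even durations give an index of 0, and larger successive contrasts push it up, which is how the metric separates putatively syllable-timed from stress-timed rhythm.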
Doumouras, Aristithes G; Gomez, David; Haas, Barbara; Boyes, Donald M; Nathens, Avery B
2012-09-01
The regionalization of medical services has resulted in improved outcomes and greater compliance with existing guidelines. For certain "time-critical" conditions intimately associated with emergency medicine, early intervention has demonstrated mortality benefits. For these conditions, then, appropriate triage within a regionalized system at first diagnosis is paramount, ideally occurring in the field by emergency medical services (EMS) personnel. Therefore, EMS ground transport access is an important metric in the ongoing evaluation of a regionalized care system for time-critical emergency services. To our knowledge, no studies have demonstrated how methodologies for calculating EMS ground transport access differ in their estimates of access over the same study area for the same resource. This study uses two methodologies to calculate EMS ground transport access to trauma center care in a single study area to explore their manifestations and critically evaluate the differences between the methodologies. Two methodologies were compared in their estimations of EMS ground transport access to trauma center care: a routing methodology (RM) and an as-the-crow-flies methodology (ACFM). These methodologies were adaptations of the only two methodologies that had been previously used in the literature to calculate EMS ground transport access to time-critical emergency services across the United States. The RM and ACFM were applied to the nine Level I and Level II trauma centers within the province of Ontario by creating trauma center catchment areas at 30, 45, 60, and 120 minutes and calculating the population and area encompassed by the catchments. Because the methodologies were identical for measuring air access, this study looks specifically at EMS ground transport access. Catchments for the province were created for each methodology at each time interval, and their populations and areas were significantly different at all time periods. 
Specifically, the RM calculated significantly larger populations at every time interval, while the ACFM calculated larger catchment area sizes. This trend is counterintuitive (i.e., a larger catchment should mean a higher population), and it was found to be most disparate at the shortest time intervals (under 60 minutes). Through critical evaluation of the differences, the authors elucidated that the ACFM could indicate road access in areas with no roads and overestimates access in low-density areas compared to the RM, potentially affecting delivery-of-care decisions. Based on these results, the authors believe that future methodologies for calculating EMS ground transport access must incorporate a continuous and valid route through the road network as well as use travel speeds appropriate to the road segments traveled; alternatively, it should be demonstrated that variation in methods for calculating road distances has little effect on realized access. Overall, as more complex models for calculating EMS ground transport access come into use, a standard methodology is needed against which to compare and improve them. Based on these findings, the authors believe that this should be the RM. © 2012 by the Society for Academic Emergency Medicine.
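The "as the crow flies" side of the comparison reduces to great-circle distance plus an assumed uniform travel speed; a sketch (coordinates and speed illustrative) showing why the ACFM needs no road network at all, which is both its convenience and its weakness:

```python
import math

# Sketch: an ACFM-style catchment test. Straight-line (great-circle)
# distance never exceeds road distance, so a crow-flies catchment at a
# uniform speed can only claim access where a road-network (RM) catchment
# might not. Coordinates and speed are illustrative.
def haversine_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_earth * math.asin(math.sqrt(a))

def in_acfm_catchment(dist_km, minutes, speed_kmh=80.0):
    """Crow-flies catchment membership at an assumed uniform travel speed."""
    return dist_km <= speed_kmh * minutes / 60.0
```

An RM-style calculation would instead route through the road network with per-segment speeds, so its catchment is always a subset of the ACFM circle of the same travel time.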
Non-Markovianity quantifier of an arbitrary quantum process
NASA Astrophysics Data System (ADS)
Debarba, Tiago; Fanchini, Felipe F.
2017-12-01
Calculating the degree of non-Markovianity of a quantum process for a high-dimensional system is a difficult task owing to the complex maximization problems involved. Focusing on the entanglement-based measure of non-Markovianity, we propose a numerically feasible quantifier for finite-dimensional systems. We define the non-Markovianity measure in terms of a class of entanglement quantifiers named witnessed entanglement, which allows us to write several entanglement-based measures of non-Markovianity in a unique formalism. In this formalism, we show that non-Markovianity in a given time interval can be witnessed by calculating the expectation value of an observable, making it attractive for experimental investigations. Exploiting this property, we introduce a quantifier based on the entanglement witness over an interval of time and show that it is a bona fide measure of non-Markovianity. In our example, we use the generalized robustness of entanglement, an entanglement measure that can be readily calculated by a semidefinite programming method, to study impurity atoms coupled to a Bose-Einstein condensate.
Interval Management with Spacing to Parallel Dependent Runways (IMSPIDR) Experiment and Results
NASA Technical Reports Server (NTRS)
Baxley, Brian T.; Swieringa, Kurt A.; Capron, William R.
2012-01-01
An area in aviation operations that may offer an increase in efficiency is the use of continuous descent arrivals (CDA), especially during dependent parallel runway operations. However, variations in aircraft descent angle and speed can cause inaccuracies in estimated-time-of-arrival calculations, requiring an increase in the size of the buffer between aircraft. This in turn reduces airport throughput and limits the use of CDAs during high-density operations, particularly to dependent parallel runways. The Interval Management with Spacing to Parallel Dependent Runways (IMSPiDR) concept uses a trajectory-based spacing tool onboard the aircraft to achieve, by the runway, an air-traffic-control-assigned spacing interval behind the previous aircraft. This paper describes the first experiment on this concept at NASA Langley and its results. Pilots flew CDAs to the Dallas-Fort Worth airport using airspeed calculations from the spacing tool to achieve either a Required Time of Arrival (RTA) or an Interval Management (IM) spacing interval at the runway threshold. Results indicate that flight crews were able to land aircraft on the runway within a mean of 2 seconds of the air-traffic-control-assigned time, with a standard deviation of less than 4 seconds, even in the presence of forecast wind error and large time delay. Statistically significant differences in delivery precision and number of speed changes as a function of stream position were observed; however, there was no trend to the differences, and the error did not increase during the operation. Flight crews found two areas unacceptable: the additional number of speed changes required during the wind-shear event, and the issuing of an IM clearance via data link while at low altitude. A number of refinements and future spacing-algorithm capabilities were also identified.
CI2 for creating and comparing confidence-intervals for time-series bivariate plots.
Mullineaux, David R
2017-02-01
Currently no method exists for calculating and comparing the confidence-intervals (CI) for the time-series of a bivariate plot. The study's aim was to develop 'CI2' as a method to calculate the CI on time-series bivariate plots, and to identify whether the CI of two bivariate time-series overlap. The test data were the knee and ankle angles from 10 healthy participants running on a motorised standard-treadmill and non-motorised curved-treadmill. For a recommended 10+ trials, CI2 involved calculating 95% confidence-ellipses at each time-point, then taking as the CI the points on the ellipses that were perpendicular to the direction vector between the means of two adjacent time-points. Consecutive pairs of CI created convex quadrilaterals, and any overlap of these quadrilaterals at the same time, or at a ±1 frame time-lag calculated using cross-correlations, indicated where the two time-series differed. CI2 showed no group differences between left and right legs on either treadmill, but for all participants the same leg compared between treadmills showed less knee extension on the curved-treadmill before heel-strike. To improve and standardise the use of CI2 it is recommended to remove outlier time-series, use 95% confidence-ellipses, and scale the ellipse by the fixed Chi-square value as opposed to the sample-size-dependent F-value. For practical use, and to aid in standardisation or future development of CI2, Matlab code is provided. CI2 provides an effective method to quantify the CI of bivariate plots, and to explore the differences in CI between two bivariate time-series. Copyright © 2016 Elsevier B.V. All rights reserved.
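A minimal sketch of the per-time-point ellipse step, written in Python/NumPy rather than the Matlab supplied with the paper; the fixed chi-square scaling follows the abstract's recommendation, and all function and variable names are illustrative:

```python
import numpy as np

CHI2_95_2DF = 5.991  # fixed 95% chi-square quantile, 2 degrees of freedom

def confidence_ellipse(points):
    """95% confidence ellipse for the mean of bivariate samples.

    points: (n_trials, 2) array of e.g. (knee, ankle) angles at one
    time point. Returns the mean, semi-axis lengths, and axis directions.
    """
    pts = np.asarray(points, dtype=float)
    n = pts.shape[0]
    mean = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)            # sample covariance
    evals, evecs = np.linalg.eigh(cov / n)     # covariance of the mean
    semi_axes = np.sqrt(CHI2_95_2DF * evals)   # fixed chi-square scaling
    return mean, semi_axes, evecs

def ellipse_boundary(mean, semi_axes, evecs, n_pts=100):
    """Points on the ellipse boundary, from which the two points
    perpendicular to the trajectory direction can be picked as the CI."""
    t = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    circle = np.stack([np.cos(t), np.sin(t)])  # unit circle
    return mean[:, None] + evecs @ (semi_axes[:, None] * circle)
```

Selecting the boundary points perpendicular to the direction vector between adjacent time-point means, and testing overlap of the resulting quadrilaterals, would then follow as the abstract describes.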
NASA Astrophysics Data System (ADS)
Glazner, Allen F.; Sadler, Peter M.
2016-12-01
The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is
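The sampling effect described here is easy to reproduce by Monte Carlo. A sketch under the stated assumption of uniformly distributed dates: the expected span of the oldest and youngest of n dates is (n - 1)/(n + 1) of the true interval (the standard order-statistics result, giving a correction factor of (n + 1)/(n - 1), i.e. 1.5 at n = 5):

```python
import random

def mean_observed_fraction(n, trials=50000, seed=0):
    """Monte Carlo: average fraction of a unit interval spanned by the
    oldest and youngest of n dates drawn uniformly at random."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        dates = [rng.random() for _ in range(n)]
        total += max(dates) - min(dates)   # observed span of the sample
    return total / trials
```

For n = 10 this returns roughly 0.82, matching the ~80% figure quoted above.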
Reference values of thirty-one frequently used laboratory markers for 75-year-old males and females
Ryden, Ingvar; Lind, Lars
2012-01-01
Background We have previously reported reference values for common clinical chemistry tests in healthy 70-year-old males and females. We have now repeated this study 5 years later to establish reference values also at the age of 75. It is important to have adequate reference values for elderly patients as biological markers may change over time, and adequate reference values are essential for correct clinical decisions. Methods We have investigated 31 frequently used laboratory markers in 75-year-old males (n = 354) and females (n = 373) without diabetes. The 2.5 and 97.5 percentiles for these markers were calculated according to the recommendations of the International Federation of Clinical Chemistry. Results Reference values are reported for 75-year-old males and females for 31 frequently used laboratory markers. Conclusion There were minor differences between reference intervals calculated with and without individuals with cardiovascular diseases. Several of the reference intervals differed from Scandinavian reference intervals based on younger individuals (Nordic Reference Interval Project). PMID:22300333
Landes, Reid D.; Lensing, Shelly Y.; Kodell, Ralph L.; Hauer-Jensen, Martin
2014-01-01
The dose of a substance that causes death in P% of a population is called an LDP, where LD stands for lethal dose. In radiation research, a common LDP of interest is the radiation dose that kills 50% of the population by a specified time, i.e., lethal dose 50 or LD50. When comparing LD50 between two populations, relative potency is the parameter of interest. In radiation research, this is commonly known as the dose reduction factor (DRF). Unfortunately, statistical inference on the dose reduction factor is seldom reported. We illustrate how to calculate confidence intervals for the dose reduction factor, which may then be used for statistical inference. Further, most dose reduction factor experiments use hundreds, rather than tens, of animals. Through better dosing strategies and the use of a recently available sample size formula, we also show how animal numbers may be reduced while maintaining high statistical power. The illustrations center on realistic examples comparing LD50 values between a radiation countermeasure group and a radiation-only control. We also provide easy-to-use spreadsheets for sample size calculations and confidence interval calculations, as well as SAS® and R code for the latter. PMID:24164553
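The SAS® and R code mentioned are not reproduced here, but the core calculation can be sketched: a delta-method confidence interval for a ratio of two independent LD50 estimates, worked on the log scale. This is one standard approach (Fieller's theorem is another); the numbers in the usage line are hypothetical:

```python
import math

def ratio_ci(m1, se1, m2, se2, z=1.96):
    """Approximate delta-method CI for the ratio m1/m2 of two
    independent positive estimates (e.g. LD50s), on the log scale.

    m1, m2: point estimates; se1, se2: their standard errors.
    Returns (ratio, lower, upper)."""
    ratio = m1 / m2
    # variance of log(ratio) by the delta method
    var_log = (se1 / m1) ** 2 + (se2 / m2) ** 2
    half = z * math.sqrt(var_log)
    return ratio, ratio * math.exp(-half), ratio * math.exp(half)
```

For hypothetical LD50s of 10.5 Gy (SE 0.2) in a countermeasure group and 8.0 Gy (SE 0.15) in controls, `ratio_ci(10.5, 0.2, 8.0, 0.15)` gives a DRF of about 1.31 with a 95% CI of roughly 1.25 to 1.38.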
van Gelder, Berry M; Meijer, Albert; Bracke, Frank A
2008-09-01
We compared the calculated optimal V-V interval derived from intracardiac electrograms (IEGM) with the optimized V-V interval determined by invasive measurement of LVdP/dt(MAX). Thirty-two patients with heart failure (six females, ages 68 +/- 7.8 years) had a CRT device implanted. After implantation of the atrial, right and a left ventricular lead, the optimal V-V interval was calculated using the QuickOpt formula (St. Jude Medical, Sylmar, CA, USA) applied to the respective IEGM recordings (V-V(IEGM)), and also determined by invasive measurement of LVdP/dt(MAX) (V-V(dP/dt)). The optimal V-V(IEGM) and V-V(dP/dt) intervals were 52.7 +/- 18 ms and 24.0 +/- 33 ms, respectively (P = 0.017), without correlation between the two. The baseline LVdP/dt(MAX) was 748 +/- 191 mmHg/s. The mean value of LVdP/dt(MAX) at invasive optimization was 947 +/- 198 mmHg/s, and at the calculated optimal V-V(IEGM) interval 920 +/- 191 mmHg/s (P < 0.0001). In spite of this significant difference, there was a good correlation between both methods (R = 0.991, P < 0.0001). However, a similarly good correlation existed between the maximum value of LVdP/dt(MAX) and LVdP/dt(MAX) at a fixed V-V interval of 0 ms (R = 0.993, P < 0.0001), or LVdP/dt(MAX) at a randomly selected V-V interval between 0 and +80 ms (R = 0.991, P < 0.0001). Optimizing the V-V interval with the IEGM method does not yield better hemodynamic results than simultaneous BiV pacing. Although a good correlation between LVdP/dt(MAX) determined with V-V(IEGM) and V-V(dP/dt) can be constructed, there is no correlation with the optimal settings of V-V interval in the individual patient.
Gillard, Jonathan
2015-12-01
This article re-examines parametric methods for the calculation of time-specific reference intervals where there is measurement error present in the time covariate. Previously published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when there are measurement errors present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that may vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment for all subjects. © The Author(s) 2011.
NASA Astrophysics Data System (ADS)
Lee, Zoe; Baas, Andreas
2013-04-01
It is widely recognised that boundary layer turbulence plays an important role in sediment transport dynamics in aeolian environments. Improvements in the design and affordability of ultrasonic anemometers have provided significant contributions to studies of aeolian turbulence, by facilitating high frequency monitoring of three dimensional wind velocities. Consequently, research has moved beyond studies of mean airflow properties, to investigations into quasi-instantaneous turbulent fluctuations at high spatio-temporal scales. To fully understand how temporal fluctuations in shear stress drive wind erosivity and sediment transport, research into the best practice for calculating shear stress is necessary. This paper builds upon work published by Lee and Baas (2012) on the influence of streamline correction techniques on Reynolds shear stress, by investigating the time-averaging interval used in the calculation. Concerns relating to the selection of appropriate averaging intervals for turbulence research, where the data are typically non-stationary at all timescales, are well documented in the literature (e.g. Treviño and Andreas, 2000). For example, Finnigan et al. (2003) found that underestimating the required averaging interval can lead to a reduction in the calculated momentum flux, as contributions from turbulent eddies longer than the averaging interval are lost. To avoid the risk of underestimating fluxes, researchers have typically used the total measurement duration as a single averaging period. For non-stationary data, however, using the whole measurement run as a single block average is inadequate for defining turbulent fluctuations. The data presented in this paper were collected in a field study of boundary layer turbulence conducted at Tramore beach near Rosapenna, County Donegal, Ireland.
High-frequency (50 Hz) 3D wind velocity measurements were collected using ultrasonic anemometry at thirteen different heights between 0.11 and 1.62 metres above the bed. A technique for determining time-averaging intervals for a series of anemometers stacked in a close vertical array is presented. A minimum timescale is identified using spectral analysis to determine the inertial sub-range, where energy is neither produced nor dissipated but passed down to increasingly smaller scales. An autocorrelation function is then used to derive a scaling pattern between anemometer heights, which defines a series of averaging intervals of increasing length with height above the surface. Results demonstrate the effect of different averaging intervals on the calculation of Reynolds shear stress and highlight the inadequacy of using the total measurement duration as a single block average. Lee, Z. S. & Baas, A. C. W. (2012). Streamline correction for the analysis of boundary layer turbulence. Geomorphology, 171-172, 69-82. Treviño, G. and Andreas, E.L., 2000. Averaging Intervals For Spectral Analysis Of Nonstationary Turbulence. Boundary-Layer Meteorology, 95(2): 231-247. Finnigan, J.J., Clement, R., Malhi, Y., Leuning, R. and Cleugh, H.A., 2003. Re-evaluation of long-term flux measurement techniques. Part I: Averaging and coordinate rotation. Boundary-Layer Meteorology, 107(1): 1-48.
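The dependence of the computed momentum flux on the averaging interval can be illustrated with a block-averaged Reynolds shear stress calculation. A sketch assuming velocity series already rotated into streamwise coordinates; function and variable names are illustrative:

```python
import numpy as np

def reynolds_stress(u, w, window):
    """Kinematic Reynolds shear stress -<u'w'> from streamwise (u) and
    vertical (w) velocity series, using block averages of `window`
    samples (e.g. window = interval_seconds * 50 for 50 Hz data).

    Fluctuations are taken about each block's mean, so eddies longer
    than the block are excluded from the flux."""
    u = np.asarray(u, dtype=float)
    w = np.asarray(w, dtype=float)
    n = (len(u) // window) * window          # drop trailing partial block
    u = u[:n].reshape(-1, window)
    w = w[:n].reshape(-1, window)
    up = u - u.mean(axis=1, keepdims=True)   # fluctuations about block mean
    wp = w - w.mean(axis=1, keepdims=True)
    return -(up * wp).mean()
```

Comparing the result across a range of `window` values (up to the full run length) reproduces the sensitivity to averaging interval that the abstract discusses.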
Is walking a random walk? Evidence for long-range correlations in stride interval of human gait
NASA Technical Reports Server (NTRS)
Hausdorff, Jeffrey M.; Peng, C.-K.; Ladin, Zvi; Wei, Jeanne Y.; Goldberger, Ary L.
1995-01-01
Complex fluctuations of unknown origin appear in the normal gait pattern. These fluctuations might be described as being (1) uncorrelated white noise, (2) short-range correlations, or (3) long-range correlations with power-law scaling. To test these possibilities, the stride interval of 10 healthy young men was measured as they walked for 9 min at their usual rate. From these time series we calculated scaling indexes by using a modified random walk analysis and power spectral analysis. Both indexes indicated the presence of long-range self-similar correlations extending over hundreds of steps; the stride interval at any time depended on the stride interval at remote previous times, and this dependence decayed in a scale-free (fractallike) power-law fashion. These scaling indexes were significantly different from those obtained after random shuffling of the original time series, indicating the importance of the sequential ordering of the stride interval. We demonstrate that conventional models of gait generation fail to reproduce the observed scaling behavior and introduce a new type of central pattern generator model that successfully accounts for the experimentally observed long-range correlations.
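The "modified random walk analysis" used here is closely related to detrended fluctuation analysis (DFA). A hedged sketch of a first-order DFA scaling exponent, which is near 0.5 for uncorrelated noise and larger for long-range correlated series (the authors' exact procedure is not reproduced):

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """First-order detrended fluctuation analysis scaling exponent.

    alpha ~ 0.5: uncorrelated noise; alpha > 0.5: long-range
    correlations; alpha ~ 1.5: Brownian-like random walk."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())              # integrate: the "random walk"
    fluctuations = []
    for s in scales:
        n = (len(y) // s) * s
        segments = y[:n].reshape(-1, s)
        t = np.arange(s)
        f2 = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)     # local linear detrend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope
```

Comparing the exponent of the original series against that of a randomly shuffled copy, as the abstract describes, distinguishes genuine long-range correlations from distributional effects.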
Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis
2009-02-01
Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
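A sketch of the residual-resampling bootstrap for the relative level change in a segmented regression. Note this omits the autocorrelation correction the authors applied, and all names and the model parameterization are illustrative:

```python
import numpy as np

def fit_segmented(y, t, t0):
    """OLS segmented regression with an interruption at time t0.
    Returns the design matrix and coefficients
    [intercept, trend, level_change, trend_change]."""
    after = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, after, after * (t - t0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X, beta

def bootstrap_relative_level_change(y, t, t0, n_boot=2000, seed=0):
    """Percentile bootstrap CI (resampling residuals) for the relative
    change in level at t0: level_change / counterfactual level."""
    y = np.asarray(y, dtype=float)
    t = np.asarray(t, dtype=float)
    rng = np.random.default_rng(seed)
    X, beta = fit_segmented(y, t, t0)
    resid = y - X @ beta
    est = beta[2] / (beta[0] + beta[1] * t0)
    sims = []
    for _ in range(n_boot):
        yb = X @ beta + rng.choice(resid, size=len(resid), replace=True)
        _, bb = fit_segmented(yb, t, t0)
        sims.append(bb[2] / (bb[0] + bb[1] * t0))
    lo, hi = np.percentile(sims, [2.5, 97.5])
    return est, lo, hi
```

With autocorrelated errors, the residual resampling step would need to be replaced (e.g. by a block bootstrap or by simulating from the fitted error model), as the abstract's caveats imply.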
NASA Technical Reports Server (NTRS)
Wilson, S. W.
1976-01-01
The HP-9810A calculator programs described provide the capability to generate HP-9862A plotter displays which depict the apparent motion of a free-flying cylindrical payload relative to the shuttle orbiter body axes by projecting the payload geometry into the orbiter plane of symmetry at regular time intervals.
Detection of Outliers in TWSTFT Data Used in TAI
2009-11-01
41st Annual Precise Time and Time Interval (PTTI) Meeting. ... data in two-way satellite time and frequency transfer (TWSTFT) time links. In the case of TWSTFT data used to calculate International Atomic Time ... TWSTFT links can show an underlying slope which renders the standard treatment more difficult, using phase and frequency filtering.
The influence of interpregnancy interval on infant mortality.
McKinney, David; House, Melissa; Chen, Aimin; Muglia, Louis; DeFranco, Emily
2017-03-01
In Ohio, the infant mortality rate is above the national average and the black infant mortality rate is more than twice the white infant mortality rate. Having a short interpregnancy interval has been shown to correlate with preterm birth and low birthweight, but the effect of short interpregnancy interval on infant mortality is less well established. We sought to quantify the population impact of interpregnancy interval on the risk of infant mortality. This was a statewide population-based retrospective cohort study of all births (n = 1,131,070) and infant mortalities (n = 8152) using linked Ohio birth and infant death records from January 2007 through September 2014. For this study we analyzed 5 interpregnancy interval categories: 0-<6, 6-<12, 12-<24, 24-<60, and ≥60 months. The primary outcome for this study was infant mortality. During the study period, 3701 infant mortalities were linked to a live birth certificate with an interpregnancy interval available. We calculated the frequency and relative risk of infant mortality for each interval compared to a referent interval of 12-<24 months. Stratified analyses by maternal race were also performed. Adjusted risks were estimated after accounting for statistically significant and biologically plausible confounding variables. Adjusted relative risk was utilized to calculate the attributable risk percent of short interpregnancy intervals on infant mortality. Short interpregnancy intervals were common in Ohio during the study period. Of all multiparous births, 20.5% followed an interval of <12 months. The overall infant mortality rate during this time was 7.2 per 1000 live births (6.0 for white mothers and 13.1 for black mothers). Infant mortalities occurred more frequently for births following short intervals of 0-<6 months (9.2 per 1000) and 6-<12 months (7.1 per 1000) compared to 12-<24 months (5.6 per 1000) (both P < .001).
The highest risk for infant mortality followed interpregnancy intervals of 0-<6 months (adjusted relative risk, 1.32; 95% confidence interval, 1.17-1.49) followed by interpregnancy intervals of 6-<12 months (adjusted relative risk, 1.16; 95% confidence interval, 1.04-1.30). Analysis stratified by maternal race revealed similar findings. Attributable risk calculation showed that 24.2% of infant mortalities following intervals of 0-<6 months and 14.1% with intervals of 6-<12 months are attributable to the short interpregnancy interval. By avoiding short interpregnancy intervals of ≤12 months we estimate that in the state of Ohio 31 infant mortalities (20 white and 8 black) per year could have been prevented and the infant mortality rate could have been reduced from 7.2 to 7.0 during this time frame. An interpregnancy interval of 12-60 months (1-5 years) between birth and conception of the next pregnancy is associated with the lowest risk of infant mortality. Public health initiatives and provider counseling to optimize birth spacing have the potential to significantly reduce infant mortality for both white and black mothers. Copyright © 2017 Elsevier Inc. All rights reserved.
Santana, Victor M; Alday, Josu G; Lee, HyoHyeMi; Allen, Katherine A; Marrs, Rob H
2016-01-01
A present challenge in fire ecology is to optimize management techniques so that ecological services are maximized and C emissions minimized. Here, we modeled the effects of different prescribed-burning rotation intervals and wildfires on carbon emissions (present and future) in British moorlands. Biomass-accumulation curves from four Calluna-dominated ecosystems along a north-south gradient in Great Britain were calculated and used within a matrix model based on Markov chains to calculate above-ground biomass loads and annual C emissions under different prescribed-burning rotation intervals. Additionally, we assessed the interaction of these parameters with decreasing wildfire return intervals. We observed that litter-accumulation patterns varied between sites. Northern sites (colder and wetter) accumulated lower amounts of litter with time than southern sites (hotter and drier). The accumulation patterns of the living vegetation dominated by Calluna were determined by site-specific conditions. The optimal prescribed-burning rotation interval for minimizing annual carbon emissions also differed between sites: the optimal rotation interval for northern sites was between 30 and 50 years, whereas for southern sites a hump-backed relationship was found, with the optimal interval either between 8 and 10 years or between 30 and 50 years. Increasing wildfire frequency interacted with prescribed-burning rotation intervals by both increasing C emissions and modifying the optimum prescribed-burning interval for minimum C emission. This highlights the importance of studying site-specific biomass-accumulation patterns with respect to environmental conditions for identifying suitable fire-rotation intervals to minimize C emissions.
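A toy version of the rotation-interval calculation, assuming a single saturating biomass-accumulation curve. This simplification cannot reproduce the hump-backed, site-specific optima reported above (which arise from the interacting litter and Calluna dynamics in the Markov-chain model); curve form, parameters, and units are illustrative:

```python
import math

def annual_emission(rotation, b_max=20.0, k=0.1, combustion=0.6):
    """Mean annual C loss (illustrative units) when standing biomass
    follows a saturating curve B(t) = b_max * (1 - exp(-k t)) and a
    fraction `combustion` of it is emitted at each prescribed burn."""
    biomass_at_burn = b_max * (1.0 - math.exp(-k * rotation))
    return combustion * biomass_at_burn / rotation

def optimal_rotation(rotations, **kw):
    """Rotation interval minimizing mean annual emission."""
    return min(rotations, key=lambda r: annual_emission(r, **kw))
```

For any concave accumulation curve through the origin, B(R)/R decreases with the rotation interval R, so this toy model always favors the longest rotation; site-specific litter build-up and wildfire risk are what make shorter optima possible.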
The steady part of the secular variation of the Earth's magnetic field
NASA Technical Reports Server (NTRS)
Bloxham, Jeremy
1992-01-01
The secular variation of the Earth's magnetic field results from the effects of magnetic induction in the fluid outer core and from the effects of magnetic diffusion in the core and the mantle. Adequate observations to map the magnetic field at the core-mantle boundary extend back over three centuries, providing a model of the secular variation at the core-mantle boundary. Here we consider how best to analyze this time-dependent part of the field. To calculate steady core flow over long time periods, we introduce an adaptation of our earlier method of calculating the flow in order to achieve greater numerical stability. We perform this procedure for the periods 1840-1990 and 1690-1840 and find that well over 90 percent of the variance of the time-dependent field can be explained by simple steady core flow. The core flows obtained for the two intervals are broadly similar to each other and to flows determined over much shorter recent intervals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cattaneo, Richard; Hanna, Rabbie K.; Jacobsen, Gordon
Purpose: Adjuvant radiation therapy (RT) has been shown to improve local control in patients with endometrial carcinoma. We analyzed the impact of the time interval between hysterectomy and RT initiation in patients with endometrial carcinoma. Methods and Materials: In this institutional review board-approved study, we identified 308 patients with endometrial carcinoma who received adjuvant RT after hysterectomy. All patients had undergone hysterectomy, oophorectomy, and pelvic and para-aortic lymph node evaluation from 1988 to 2010. Patients' demographics, pathologic features, and treatments were compared. The time interval between hysterectomy and the start of RT was calculated. The effects of time interval on recurrence-free (RFS), disease-specific (DSS), and overall survival (OS) were calculated. Following univariate analysis, multivariate modeling was performed. Results: The median age and follow-up for the study cohort were 65 years and 72 months, respectively. Eighty-five percent of the patients had endometrioid carcinoma. RT was delivered with high-dose-rate brachytherapy alone (29%), pelvic RT alone (20%), or both (51%). Median time interval to start RT was 42 days (range, 21-130 days). A total of 269 patients (74%) started their RT <9 weeks after undergoing hysterectomy (group 1) and 26% started ≥9 weeks after surgery (group 2). There were a total of 43 recurrences. Tumor recurrence was significantly associated with treatment delay of ≥9 weeks, with 5-year RFS of 90% for group 1 compared to only 39% for group 2 (P<.001). On multivariate analysis, RT delay of ≥9 weeks (P<.001), presence of lymphovascular space involvement (P=.001), and higher International Federation of Gynecology and Obstetrics grade (P=.012) were independent predictors of recurrence. In addition, RT delay of ≥9 weeks was an independent significant predictor for worse DSS and OS (P=.001 and P=.01, respectively). 
Conclusions: Delay in administering adjuvant RT after hysterectomy was associated with worse survival endpoints. Our data suggest that a shorter time interval between hysterectomy and the start of RT may be beneficial.
Perri, Amanda M.; O’Sullivan, Terri L.; Harding, John C.S.; Wood, R. Darren; Friendship, Robert M.
2017-01-01
The evaluation of pig hematology and biochemistry parameters is rarely done largely due to the costs associated with laboratory testing and labor, and the limited availability of reference intervals needed for interpretation. Within-herd and between-herd biological variation of these values also make it difficult to establish reference intervals. Regardless, baseline reference intervals are important to aid veterinarians in the interpretation of blood parameters for the diagnosis and treatment of diseased swine. The objective of this research was to provide reference intervals for hematology and biochemistry parameters of 3-week-old commercial nursing piglets in Ontario. A total of 1032 pigs lacking clinical signs of disease from 20 swine farms were sampled for hematology and iron panel evaluation, with biochemistry analysis performed on a subset of 189 randomly selected pigs. The 95% reference interval, mean, median, range, and 90% confidence intervals were calculated for each parameter. PMID:28373729
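The interval computation itself is straightforward. A sketch of the nonparametric 95% reference interval (the 2.5th and 97.5th percentiles) described above, without the 90% confidence intervals on the limits; the interpolation convention is one common choice, not necessarily the one used in the study:

```python
def reference_interval(values):
    """Nonparametric 95% reference interval: the 2.5th and 97.5th
    percentiles of the observed values, with linear interpolation
    between the closest ranks."""
    xs = sorted(values)
    n = len(xs)
    def pct(p):
        r = p * (n - 1)          # fractional rank, 0-indexed
        lo = int(r)
        frac = r - lo
        if lo + 1 >= n:
            return float(xs[-1])
        return xs[lo] * (1 - frac) + xs[lo + 1] * frac
    return pct(0.025), pct(0.975)
```

With large samples such as the n = 1032 hematology panel, confidence intervals on each limit can be obtained from order statistics or by bootstrapping, per IFCC/CLSI guidance.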
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to predict the mean failure time of lamps. The estimate is for a parametric model, the general composite hazard rate model. The base random-time model is the exponential distribution, which has a constant hazard function. We discuss an example of survival-model estimation for a composite hazard function using an exponential model as its basis. The model is estimated by estimating its parameters through construction of the survival function and the empirical cumulative distribution function. The resulting model is then used to predict the mean failure time for the lamp type. The data are grouped into several intervals with the mean failure value in each interval, and the mean failure time of the model is calculated on each interval; the p-value obtained from the test is 0.3296.
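A sketch of the exponential baseline named above (constant hazard), fitted by maximum likelihood to give the predicted mean failure time; the data values in the test are illustrative, and the composite-hazard extension is not reproduced:

```python
import math

def exponential_mle(failure_times):
    """MLE of the rate of an exponential lifetime model (the constant
    hazard baseline); mean time to failure is 1/rate."""
    n = len(failure_times)
    rate = n / sum(failure_times)
    return rate, 1.0 / rate

def survival(t, rate):
    """Survival function S(t) = exp(-rate * t) of the fitted model."""
    return math.exp(-rate * t)
```

Comparing this fitted survival function against the empirical cumulative distribution on each data interval is the kind of goodness-of-fit check to which the quoted p-value refers.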
Kim, Hae Jin; Song, Yong Ju; Kim, Young Kook; Jeoung, Jin Wook; Park, Ki Ho
2017-07-01
To evaluate functional progression in preperimetric glaucoma (PPG) with disc hemorrhage (DH) and to determine the time interval between the first-detected DH and development of glaucomatous visual field (VF) defect. A total of 87 patients who had been first diagnosed with PPG were enrolled. The medical records of PPG patients without DH (Group 1) and with DH (Group 2) were reviewed. When glaucomatous VF defect appeared, the time interval from the diagnosis of PPG to the development of VF defect was calculated and compared between the two groups. In Group 2, the time intervals from first-detected DH to VF defect were compared between single- and recurrent-DH patients. Of the enrolled patients, 45 had DH in the preperimetric stage. The median time interval from the diagnosis of PPG to the development of VF defect was 73.3 months in Group 1, versus 45.4 months in Group 2 (P = 0.042). The cumulative probability of development of VF defect after diagnosis of PPG was significantly greater in Group 2 than in Group 1. The median time interval from first-detected DH to the development of VF defect was 37.8 months. The median time interval from DH to VF defect and the cumulative probability of VF defect after DH did not differ statistically between single- and recurrent-DH patients. The median time interval between the diagnosis of PPG and the development of VF defect was significantly shorter in PPG with DH. The VF defect appeared 37.8 months after the first-detected DH in PPG.
NASA Astrophysics Data System (ADS)
Trifonenkov, A. V.; Trifonenkov, V. P.
2017-01-01
This article deals with a feature of the problem of calculating time-averaged characteristics of sets of optimal nuclear reactor controls. The operation of a nuclear reactor during a threatened period is considered, and the optimal-control search problem is analysed. Xenon poisoning constrains the possible statements of the problem of calculating time-averaged characteristics of a set of optimal reactor power-off controls, since the level of xenon poisoning is limited. There is a problem of choosing an appropriate segment of the time axis to ensure that the optimal control problem is consistent. Two procedures for estimating the duration of this segment are considered, and the two estimates are plotted as functions of the xenon limitation. The boundaries of the interval of averaging are thereby defined more precisely.
Incidence, prevalence, and hybrid approaches to calculating disability-adjusted life years
2012-01-01
When disability-adjusted life years are used to measure the burden of disease on a population in a time interval, they can be calculated in several different ways: from an incidence, pure prevalence, or hybrid perspective. I show that these calculation methods are not equivalent and discuss some of the formal difficulties each method faces. I show that if we don’t discount the value of future health, there is a sense in which the choice of calculation method is a mere question of accounting. Such questions can be important, but they don’t raise deep theoretical concerns. If we do discount, however, choice of calculation method can change the relative burden attributed to different conditions over time. I conclude by recommending that studies involving disability-adjusted life years be explicit in noting what calculation method is being employed and in explaining why that calculation method has been chosen. PMID:22967055
Robust motion tracking based on adaptive speckle decorrelation analysis of OCT signal.
Wang, Yuewen; Wang, Yahui; Akansu, Ali; Belfield, Kevin D; Hubbi, Basil; Liu, Xuan
2015-11-01
Speckle decorrelation analysis of optical coherence tomography (OCT) signal has been used in motion tracking. In our previous study, we demonstrated that the cross-correlation coefficient (XCC) between A-scans has an explicit functional dependency on the magnitude of lateral displacement (δx). In this study, we evaluated the sensitivity of speckle motion tracking using the derivative of the function XCC(δx) with respect to δx. We demonstrated that the magnitude of this derivative can be maximized; in other words, the sensitivity of OCT speckle tracking can be optimized by using signals with an appropriate amount of decorrelation for XCC calculation. Based on this finding, we developed an adaptive speckle decorrelation analysis strategy to achieve motion tracking with optimized sensitivity. Briefly, we used sequentially acquired A-scans, as well as A-scans obtained at larger time intervals, to obtain multiple values of XCC, and chose the XCC value that maximized motion-tracking sensitivity for displacement calculation. Instantaneous motion speed can then be calculated by dividing the obtained displacement by the time interval between the A-scans involved in the XCC calculation. We implemented the above-described algorithm in real time using a graphics processing unit (GPU) and demonstrated its effectiveness in reconstructing distortion-free OCT images using data obtained from a manually scanned OCT probe. The adaptive speckle tracking method was validated in manually scanned OCT imaging, on a phantom as well as in vivo skin tissue. PMID:26600996
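The adaptive selection step can be sketched under an assumed Gaussian decorrelation model, XCC(δx) = exp(−(δx/w)²), which is hypothetical here (the paper derives its own functional form). For that model, |dXCC/dδx| peaks where XCC = e^{−1/2} ≈ 0.61, so the A-scan pair whose XCC lies nearest that value gives the most sensitive displacement estimate:

```python
import math

XCC_OPT = math.exp(-0.5)   # ≈0.607: where |dXCC/d(δx)| peaks for the Gaussian model

def displacement_from_xcc(xcc, w):
    """Invert XCC(δx) = exp(-(δx/w)²) for δx; w is the speckle-width model parameter."""
    return w * math.sqrt(-math.log(xcc))

def pick_pair(xcc_by_lag):
    """From {time_lag: XCC}, choose the A-scan pair whose XCC is nearest the optimum."""
    return min(xcc_by_lag.items(), key=lambda kv: abs(kv[1] - XCC_OPT))

pairs = {1: 0.95, 2: 0.82, 4: 0.61, 8: 0.20}   # hypothetical XCC at several A-scan lags
lag, xcc = pick_pair(pairs)                     # picks lag 4 (XCC closest to 0.607)
speed = displacement_from_xcc(xcc, w=10e-6) / (lag * 1e-4)   # δx / Δt, toy units
```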
Laser detection of material thickness
Early, James W.
2002-01-01
There is provided a method for measuring material thickness comprising: (a) contacting a surface of a material to be measured with a high-intensity, short-duration laser pulse at a light wavelength which heats the area of contact with the material, thereby creating an acoustical pulse within the material; (b) timing the intervals between deflections in the contacted surface caused by the reverberation of acoustical pulses between the contacted surface and the opposite surface of the material; and (c) determining the thickness of the material by calculating the proportion of the thickness of the material to the measured time intervals between deflections of the contacted surface.
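Step (c) reduces to the round-trip relation d = v·Δt/2, with v the acoustic velocity in the material. A minimal sketch (the 5900 m/s longitudinal velocity for steel is a nominal illustrative value):

```python
def thickness_from_echoes(interval_s, sound_speed_m_per_s):
    """One reverberation between the surfaces is a round trip, so d = v * dt / 2."""
    return sound_speed_m_per_s * interval_s / 2.0

# e.g. 3.39 µs between successive surface deflections in steel (≈5900 m/s)
d = thickness_from_echoes(3.39e-6, 5900.0)   # ≈0.010 m, i.e. a 10 mm plate
```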
Method for determining formation quality factor from seismic data
Taner, M. Turhan; Treitel, Sven
2005-08-16
A method is disclosed for calculating the quality factor Q from a seismic data trace. The method includes calculating a first and a second minimum phase inverse wavelet at a first and a second time interval along the seismic data trace, synthetically dividing the first wavelet by the second wavelet, Fourier transforming the result of the synthetic division, calculating the logarithm of this quotient of Fourier transforms and determining the slope of a best fit line to the logarithm of the quotient.
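The slope-fitting step rests on the constant-Q spectral-ratio relation ln|W1(f)/W2(f)| = c − πfΔt/Q, so Q is recovered from the best-fit slope. A sketch on synthetic spectra (this shows the underlying log-spectral-ratio relation, not the patent's minimum-phase inverse-wavelet construction):

```python
import numpy as np

def q_from_log_ratio(freqs, log_ratio, dt):
    """Fit a line to ln|W1/W2| vs frequency; the constant-Q model gives slope = -pi*dt/Q."""
    slope = np.polyfit(freqs, log_ratio, 1)[0]
    return -np.pi * dt / slope

# synthetic check: Q = 80, 0.5 s between the two analysis windows
f = np.linspace(5.0, 60.0, 50)
true_q, dt = 80.0, 0.5
log_ratio = 1.3 - np.pi * f * dt / true_q    # the intercept 1.3 is arbitrary
q_est = q_from_log_ratio(f, log_ratio, dt)   # ≈80
```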
NASA Astrophysics Data System (ADS)
Velinov, Peter; Asenovski, Simeon; Mateev, Lachezar
2013-04-01
Numerical calculations of galactic cosmic ray (GCR) ionization rate profiles are presented for the middle atmosphere and lower ionosphere altitudes (35-90 km) for the full GCR composition (protons, alpha particles, and groups of heavier nuclei: light L, medium M, heavy H, very heavy VH). This investigation is based on a model developed by Velinov et al. (1974) and Velinov and Mateev (2008), which is further improved in the present paper. Analytical expressions for energy interval contributions are provided. An approximation of the ionization function on three energy intervals is used and for the first time the charge decrease interval for electron capturing (Dorman 2004) is investigated quantitatively. Development in this field of research is important for better understanding the impact of space weather on the atmosphere. GCRs influence the ionization and electric parameters in the atmosphere and also the chemical processes (ozone creation and depletion in the stratosphere) in it. The model results show good agreement with experimental data (Brasseur and Solomon 1986, Rosenberg and Lanzerotti 1979, Van Allen 1952).
Bester, Rachelle; Jooste, Anna E C; Maree, Hans J; Burger, Johan T
2012-09-27
Grapevine leafroll-associated virus 3 (GLRaV-3) is the main contributing agent of leafroll disease worldwide. Four of the six known GLRaV-3 variant groups have been found in South Africa, but their individual contribution to leafroll disease is unknown. In order to study the pathogenesis of leafroll disease, a sensitive and accurate diagnostic assay is required that can detect different variant groups of GLRaV-3. In this study, a one-step real-time RT-PCR, followed by high-resolution melting (HRM) curve analysis, was developed for the simultaneous detection and identification of GLRaV-3 variants of groups I, II, III and VI. A melting point confidence interval for each variant group was calculated to include at least 90% of all melting points observed. A multiplex RT-PCR protocol was developed for these four variant groups in order to assess the efficacy of the real-time RT-PCR HRM assay. A universal primer set for GLRaV-3, targeting the heat shock protein 70 homologue (Hsp70h) gene, was designed that is able to detect GLRaV-3 variant groups I, II, III and VI and differentiate between them with high-resolution melting curve analysis. The real-time RT-PCR HRM and the multiplex RT-PCR were optimized using 121 GLRaV-3-positive samples. Due to considerable variation in the melting profile observed within each GLRaV-3 group, a confidence interval above 90% was calculated for each variant group, based on the range and distribution of melting points. The intervals of groups I and II could not be distinguished, and a 95% joint confidence interval was calculated for simultaneous detection of group I and II variants. An additional primer pair targeting GLRaV-3 ORF1a was developed that can be used in a subsequent real-time RT-PCR HRM to differentiate between variants of groups I and II. Additionally, the multiplex RT-PCR successfully validated 94.64% of the infections detected with the real-time RT-PCR HRM.
The real-time RT-PCR HRM provides a sensitive, automated and rapid tool to detect and differentiate different variant groups in order to study the epidemiology of leafroll disease.
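The melting-point confidence interval step can be sketched as a mean ± z·SD interval under an assumed near-Gaussian spread of melting points (the paper's exact interval construction may differ); the data below are hypothetical:

```python
import numpy as np

def melting_interval(tm_values, z=1.645):
    """Mean ± z*SD interval intended to cover ~90% of melting points,
    assuming they are roughly Gaussian."""
    m, s = np.mean(tm_values), np.std(tm_values, ddof=1)
    return m - z * s, m + z * s

# hypothetical melting points (deg C) for one variant group
tms = np.array([81.2, 81.4, 81.5, 81.6, 81.6, 81.7, 81.8, 81.9, 82.0, 82.3])
lo, hi = melting_interval(tms)   # ≈(81.18, 82.22)
```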
GPS constraints on M 7-8 earthquake recurrence times for the New Madrid seismic zone
Stuart, W.D.
2001-01-01
Newman et al. (1999) estimate the time interval between the 1811-1812 earthquake sequence near New Madrid, Missouri and a future similar sequence to be at least 2,500 years, an interval significantly longer than other recently published estimates. To calculate the recurrence time, they assume that slip on a vertical half-plane at depth contributes to the current interseismic motion of GPS benchmarks. Compared to other plausible fault models, the half-plane model gives nearly the maximum rate of ground motion for the same interseismic slip rate. Alternative models with smaller interseismic fault slip area can satisfy the present GPS data by having higher slip rate and thus can have earthquake recurrence times much less than 2,500 years.
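The sensitivity of the recurrence estimate to the inferred slip rate follows from the simple balance T = slip per event / interseismic slip rate, so models that satisfy the GPS data with a higher slip rate yield proportionally shorter recurrence times. The numbers below are illustrative only:

```python
def recurrence_years(slip_per_event_m, slip_rate_mm_per_yr):
    """Recurrence time = slip released per event / rate at which it accumulates."""
    return slip_per_event_m * 1000.0 / slip_rate_mm_per_yr

# illustrative: doubling the inferred slip rate halves the recurrence estimate
slow = recurrence_years(8.0, 2.0)   # 4000 years
fast = recurrence_years(8.0, 4.0)   # 2000 years
```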
An algorithm of Saxena-Easo on fuzzy time series forecasting
NASA Astrophysics Data System (ADS)
Ramadhani, L. C.; Anggraeni, D.; Kamsyakawuni, A.; Hadi, A. F.
2018-04-01
This paper presents a forecast model of Saxena-Easo fuzzy time series prediction to study the prediction of Indonesia inflation rate in 1970-2016. We use MATLAB software to compute this method. The algorithm of Saxena-Easo fuzzy time series doesn’t need stationarity like conventional forecasting method, capable of dealing with the value of time series which are linguistic and has the advantage of reducing the calculation, time and simplifying the calculation process. Generally it’s focus on percentage change as the universe discourse, interval partition and defuzzification. The result indicate that between the actual data and the forecast data are close enough with Root Mean Square Error (RMSE) = 1.5289.
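The reported accuracy measure is the standard RMSE; a minimal sketch with made-up numbers (the paper's value of 1.5289 comes from the 1970-2016 inflation series):

```python
import math

def rmse(actual, forecast):
    """Root Mean Square Error between an actual and a forecast series."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

# toy check: errors of 1, -1, 0 give sqrt(2/3)
err = rmse([10.0, 6.0, 8.0], [9.0, 7.0, 8.0])   # ≈0.816
```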
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sparks, R.B.; Stabin, M.G.
1999-01-01
After administration of I-131 to a female patient, the possibility of radiation exposure of the embryo/fetus exists if the patient becomes pregnant while radioiodine remains in the body. Fetal radiation dose estimates for such cases were calculated. Doses were calculated for various maternal thyroid uptakes and time intervals between administration and conception, including euthyroid and hyperthyroid cases. The maximum fetal dose calculated was about 9.8E-03 mGy/MBq, which occurred with 100% maternal thyroid uptake and a 1-week interval between administration and conception. Placental crossover of the small amount of radioiodine remaining 90 days after conception was also considered. Such crossover could result in an additional fetal dose of 9.8E-05 mGy/MBq and a maximum fetal thyroid self-dose of 3.5E-04 mGy/MBq.
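Dose coefficients of this kind scale linearly with administered activity; a sketch using the paper's worst-case coefficient (the 5550 MBq activity is a hypothetical example, and real estimates must account for the uptake and timing dependence described above):

```python
def fetal_dose_mGy(administered_MBq, coeff_mGy_per_MBq=9.8e-3):
    """Scale administered activity by the reported worst-case dose coefficient
    (100% maternal thyroid uptake, conception one week after administration)."""
    return administered_MBq * coeff_mGy_per_MBq

# a hypothetical 5550 MBq (150 mCi) therapeutic administration
worst_case = fetal_dose_mGy(5550.0)   # ≈54.4 mGy
```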
Santana, Victor M.; Alday, Josu G.; Lee, HyoHyeMi; Allen, Katherine A.; Marrs, Rob H.
2016-01-01
A present challenge in fire ecology is to optimize management techniques so that ecological services are maximized and C emissions minimized. Here, we modeled the effects of different prescribed-burning rotation intervals and wildfires on carbon emissions (present and future) in British moorlands. Biomass-accumulation curves from four Calluna-dominated ecosystems along a north-south gradient in Great Britain were calculated and used within a matrix model based on Markov chains to calculate above-ground biomass loads and annual C emissions under different prescribed-burning rotation intervals. Additionally, we assessed the interaction of these parameters with decreasing wildfire return intervals. We observed that litter accumulation patterns varied between sites: northern sites (colder and wetter) accumulated lower amounts of litter over time than southern sites (hotter and drier). The accumulation patterns of the living vegetation dominated by Calluna were determined by site-specific conditions. The optimal prescribed-burning rotation interval for minimizing annual carbon emissions also differed between sites: the optimal rotation interval for northern sites was between 30 and 50 years, whereas for southern sites a hump-backed relationship was found, with the optimal interval either between 8 and 10 years or between 30 and 50 years. Increasing wildfire frequency interacted with prescribed-burning rotation intervals by both increasing C emissions and modifying the optimum prescribed-burning interval for minimum C emission. This highlights the importance of studying site-specific biomass accumulation patterns with respect to environmental conditions for identifying suitable fire-rotation intervals to minimize C emissions. PMID:27880840
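The matrix-model idea can be sketched as a Markov chain over stand-age classes, where each year a patch either burns (returning to age 0) with probability 1/rotation or ages by one class. This toy version only computes the stationary age structure, not the site-specific biomass or emission curves the paper fits:

```python
import numpy as np

def stationary_age_distribution(burn_prob, n_classes=50):
    """Markov chain over stand-age classes: each year a patch burns back to age 0
    with probability burn_prob, otherwise ages by one class (oldest class absorbs)."""
    p = burn_prob
    P = np.zeros((n_classes, n_classes))
    P[:, 0] = p                       # any class can burn back to age 0
    for i in range(n_classes - 1):
        P[i, i + 1] = 1.0 - p         # otherwise age one class
    P[-1, -1] += 1.0 - p              # oldest class stays oldest if unburned
    pi = np.full(n_classes, 1.0 / n_classes)
    for _ in range(2000):             # power iteration for the stationary distribution
        pi = pi @ P
    return pi / pi.sum()

pi = stationary_age_distribution(1.0 / 20.0)       # a 20-year burn rotation
mean_age = float(np.dot(pi, np.arange(pi.size)))   # mean stand age under that rotation
```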
Angelova, Silvija; Ribagin, Simeon; Raikova, Rositsa; Veneva, Ivanka
2018-02-01
After a stroke, motor units stop working properly, and large, fast-twitch units are more frequently affected. Their impaired function can be investigated during dynamic tasks using electromyographic (EMG) signal analysis. The aim of this paper is to investigate changes in the parameters of the power/frequency function during elbow flexion between affected, non-affected, and healthy muscles. Fifteen healthy subjects and ten stroke survivors participated in the experiments. Electromyographic data from 6 muscles of the upper limbs during elbow flexion were filtered and normalized to the amplitudes of EMG signals during maximal isometric tasks. The moments when motion started and when the flexion angle reached its maximal value were found. Equal intervals of 0.3407 s were defined between these two moments, and one additional interval before the start of the flexion was added as the first interval. For each of these intervals the power/frequency function of the EMG signals was calculated, and the mean frequency (MNF), median frequency (MDF), maximal power (MPw) and area under the power function (APw) were derived. MNF was always higher than MDF. A significant decrease in these frequencies was found in only three post-stroke survivors. The frequencies in the first time interval were nearly always the highest among all intervals. The maximal power was nearly zero during the first time interval and increased during the next ones. The largest values of MPw and APw were found for the flexor muscles, and they increased for the muscles of the affected arm compared to the non-affected one of stroke survivors. Copyright © 2017 Elsevier Ltd. All rights reserved.
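MNF and MDF have standard discrete definitions: the power-weighted mean frequency, and the frequency at which cumulative power first reaches half the total. A sketch on a made-up right-skewed spectrum (for which MNF > MDF, as the paper reports):

```python
import numpy as np

def mnf_mdf(freqs, power):
    """Mean frequency = power-weighted mean of frequency; median frequency =
    frequency where cumulative power first reaches half the total (discrete)."""
    total = power.sum()
    mnf = (freqs * power).sum() / total
    mdf = freqs[np.searchsorted(np.cumsum(power), total / 2.0)]
    return mnf, mdf

# a made-up right-skewed EMG power spectrum
freqs = np.array([10.0, 20.0, 30.0, 40.0, 80.0])
power = np.array([3.0, 3.0, 2.0, 1.0, 1.0])
mnf, mdf = mnf_mdf(freqs, power)   # → 27.0, 20.0
```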
Black hole collapse in the 1 /c expansion
NASA Astrophysics Data System (ADS)
Anous, Tarek; Hartman, Thomas; Rovai, Antonin; Sonner, Julian
2016-07-01
We present a first-principles CFT calculation corresponding to the spherical collapse of a shell of matter in three dimensional quantum gravity. In field theory terms, we describe the equilibration process, from early times to thermalization, of a CFT following a sudden injection of energy at time t = 0. By formulating a continuum version of Zamolodchikov's monodromy method to calculate conformal blocks at large central charge c, we give a framework to compute a general class of probe observables in the collapse state, incorporating the full backreaction of matter fields on the dual geometry. This is illustrated by calculating a scalar field two-point function at time-like separation and the time-dependent entanglement entropy of an interval, both showing thermalization at late times. The results are in perfect agreement with previous gravity calculations in the AdS3-Vaidya geometry. Information loss appears in the CFT as an explicit violation of unitarity in the 1 /c expansion, restored by nonperturbative corrections.
Association between the physical activity and heart rate corrected-QT interval in older adults.
Michishita, Ryoma; Fukae, Chika; Mihara, Rikako; Ikenaga, Masahiro; Morimura, Kazuhiro; Takeda, Noriko; Yamada, Yosuke; Higaki, Yasuki; Tanaka, Hiroaki; Kiyonaga, Akira
2015-07-01
Increased physical activity can reduce the incidence of cardiovascular disease and the mortality rate. In contrast, a prolonged heart rate-corrected QT (QTc) interval is associated with an increased risk of arrhythmias, sudden cardiac death and coronary artery disease. The present cross-sectional study was designed to clarify the association between the physical activity level and the QTc interval in older adults. The participants included 586 older adults (267 men and 319 women, age 71.2 ± 4.7 years) without a history of cardiovascular disease and not taking cardioactive drugs. Electrocardiography was recorded with a standard resting 12-lead electrocardiograph, and the QTc interval was calculated according to Hodges' formula. The physical activity level was assessed using a triaxial accelerometer. The participants were divided into four categories defined by quartiles of the QTc interval. After adjusting for age, body mass index, waist circumference and the number of steps, the time spent in inactivity was significantly higher and the time spent in light physical activity was significantly lower in the longest QTc interval group than in the shortest QTc interval group in both sexes (P < 0.05, respectively). However, there were no significant differences in the time spent in moderate and vigorous physical activities among the four groups in either sex. These results suggest that a decreased physical activity level, in particular more inactivity and less light-intensity physical activity, was associated with a longer QTc interval in older adults. © 2014 Japan Geriatrics Society.
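Hodges' formula, used here for rate correction, is QTc = QT + 1.75·(HR − 60), with QT in milliseconds:

```python
def qtc_hodges(qt_ms, heart_rate_bpm):
    """Hodges' rate correction: QTc = QT + 1.75*(HR - 60), QT in ms."""
    return qt_ms + 1.75 * (heart_rate_bpm - 60.0)

qtc = qtc_hodges(400.0, 75.0)   # → 426.25 ms
```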
Melkonian, D; Korner, A; Meares, R; Bahramali, H
2012-10-01
A novel method of the time-frequency analysis of non-stationary heart rate variability (HRV) is developed which introduces the fragmentary spectrum as a measure that brings together the frequency content, timing and duration of HRV segments. The fragmentary spectrum is calculated by the similar basis function algorithm. This numerical tool of the time to frequency and frequency to time Fourier transformations accepts both uniform and non-uniform sampling intervals, and is applicable to signal segments of arbitrary length. Once the fragmentary spectrum is calculated, the inverse transform recovers the original signal and reveals accuracy of spectral estimates. Numerical experiments show that discontinuities at the boundaries of the succession of inter-beat intervals can cause unacceptable distortions of the spectral estimates. We have developed a measure that we call the "RR deltagram" as a form of the HRV data that minimises spectral errors. The analysis of the experimental HRV data from real-life and controlled breathing conditions suggests transient oscillatory components as functionally meaningful elements of highly complex and irregular patterns of HRV. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
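The core idea of evaluating frequency content directly from irregularly spaced samples can be illustrated with a direct non-uniform Fourier sum; this stands in for, and is much cruder than, the similar-basis-function algorithm of the paper:

```python
import numpy as np

def nudft_amplitude(t, x, f):
    """|sum x_k * exp(-2*pi*i*f*t_k) * dt_k| for irregularly sampled data —
    a direct Fourier sum, not the paper's similar-basis-function algorithm."""
    dt = np.gradient(t)                  # local sampling-interval estimates
    return abs(np.sum(x * np.exp(-2j * np.pi * f * t) * dt))

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 400))   # irregular sample times over 10 s
x = np.sin(2 * np.pi * 0.3 * t)            # a 0.3 Hz oscillation (respiratory-band rate)
peak = nudft_amplitude(t, x, 0.3)          # large: energy at the true frequency
off = nudft_amplitude(t, x, 1.1)           # small: only leakage/quadrature error
```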
The orbital evolution of the AMOR asteroidal group during 11,550 years
NASA Astrophysics Data System (ADS)
Babadzhanov, P. B.; Zausaev, A. F.; Pushkaryov, A. N.
The orbital evolution of twenty seven Amor asteroids was determined by the Everhart method for the time interval from 2250 AD to 9300 BC. Closest encounters with terrestrial planets are calculated in the evolution process. Stable resonances with Venus, Earth and Jupiter over the period from 2250 AD to 9300 BC have been obtained. Theoretical coordinates of radiants on initial and final moments of integrating were calculated.
Anaerobic work calculated in cycling time trials of different length.
Mulder, Roy C; Noordhof, Dionne A; Malterer, Katherine R; Foster, Carl; de Koning, Jos J
2015-03-01
Previous research showed that gross efficiency (GE) declines during exercise and therefore influences the expenditure of anaerobic and aerobic resources. The aim of this study was to calculate the anaerobic work produced during cycling time trials of different length, with and without a GE correction. Anaerobic work was calculated in 18 trained competitive cyclists during 4 time trials (500, 1000, 2000, and 4000 m). Two additional time trials (1000 and 4000 m) that were stopped at 50% of the corresponding "full" time trial were performed to study the rate of the decline in GE. Correcting for a declining GE during time-trial exercise resulted in a significant (P<.001) increase in anaerobically attributable work of 30%, with a 95% confidence interval of [25%, 36%]. A significant interaction effect between calculation method (constant GE, declining GE) and distance (500, 1000, 2000, 4000 m) was found (P<.001). Further analysis revealed that the constant-GE calculation method differed from the declining-GE method at all distances, and that anaerobic work calculated assuming a constant GE did not result in equal values over different time-trial distances (P<.001). However, correcting for a declining GE resulted in a constant value for anaerobically attributable work (P=.18). Anaerobic work calculated during short time trials (<4000 m) with a correction for a declining GE is increased by 30% [25%, 36%] and may represent anaerobic energy contributions during high-intensity exercise better than calculating anaerobic work assuming a constant GE.
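The accounting behind a GE correction can be sketched as: anaerobic work = mechanical work − GE·(metabolic power), integrated over the trial, so a declining GE credits less of the work to aerobic sources. All numbers below are toy values, not the study's data:

```python
import numpy as np

def anaerobic_work_J(p_mech_W, p_met_W, ge):
    """Anaerobic work = mechanical work minus the aerobically supplied part,
    summed over 1-s samples (so watts integrate directly to joules)."""
    return float(np.sum(p_mech_W - ge * p_met_W))

t_s = 120                                  # a 120-s effort, toy numbers throughout
p_mech = np.full(t_s, 400.0)               # external power, W
p_met = np.full(t_s, 1900.0)               # metabolic power, W (assumed measured)
ge_const = np.full(t_s, 0.20)              # constant gross efficiency
ge_decl = np.linspace(0.20, 0.18, t_s)     # GE declining over the trial
w_const = anaerobic_work_J(p_mech, p_met, ge_const)   # → 2400 J
w_decl = anaerobic_work_J(p_mech, p_met, ge_decl)     # → 4680 J (larger, as the paper finds)
```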
Swain, Eric D.; Wexler, Eliezer J.
1996-01-01
Ground-water and surface-water flow models traditionally have been developed separately, with interaction between subsurface flow and streamflow either not simulated at all or accounted for by simple formulations. In areas with dynamic and hydraulically well-connected ground-water and surface-water systems, stream-aquifer interaction should be simulated using deterministic responses of both systems coupled at the stream-aquifer interface. Accordingly, a new coupled ground-water and surface-water model was developed by combining the U.S. Geological Survey models MODFLOW and BRANCH; the interfacing code is referred to as MODBRANCH. MODFLOW is the widely used modular three-dimensional, finite-difference ground-water model, and BRANCH is a one-dimensional numerical model commonly used to simulate unsteady flow in open- channel networks. MODFLOW was originally written with the River package, which calculates leakage between the aquifer and stream, assuming that the stream's stage remains constant during one model stress period. A simple streamflow routing model has been added to MODFLOW, but is limited to steady flow in rectangular, prismatic channels. To overcome these limitations, the BRANCH model, which simulates unsteady, nonuniform flow by solving the St. Venant equations, was restructured and incorporated into MODFLOW. Terms that describe leakage between stream and aquifer as a function of streambed conductance and differences in aquifer and stream stage were added to the continuity equation in BRANCH. Thus, leakage between the aquifer and stream can be calculated separately in each model, or leakages calculated in BRANCH can be used in MODFLOW. Total mass in the coupled models is accounted for and conserved. The BRANCH model calculates new stream stages for each time interval in a transient simulation based on upstream boundary conditions, stream properties, and initial estimates of aquifer heads. 
Next, aquifer heads are calculated in MODFLOW based on stream stages calculated by BRANCH, aquifer properties, and stresses. This process is repeated until convergence criteria are met for head and stage. Because time steps used in ground-water modeling can be much longer than time intervals used in surface-water simulations, provision has been made for handling multiple BRANCH time intervals within one MODFLOW time step. An option was also added to BRANCH to allow the simulation of channel drying and rewetting. The coupled model was verified using data from previous studies; by comparing results with output from a simpler, four-point implicit, open-channel flow model linked with MODFLOW; and by comparison to field studies of the L-31N canal in southern Florida.
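The stage-head iteration can be caricatured with a toy fixed-point loop; the linear stream and aquifer responses below are invented for illustration and stand in for the St. Venant and ground-water flow solutions:

```python
def coupled_solve(c_streambed=0.3, tol=1e-8, max_iter=200):
    """Toy fixed-point iteration in the spirit of the MODBRANCH coupling:
    leakage q = C*(h - s) feeds a linearized stream equation and aquifer
    equation (both invented here), repeated until stage and head converge."""
    s, h = 10.0, 12.0                   # initial stream stage and aquifer head
    q = c_streambed * (h - s)
    for _ in range(max_iter):
        q = c_streambed * (h - s)       # leakage: aquifer -> stream when h > s
        s_new = 10.0 + 0.4 * q          # stream stage rises with inflow (toy relation)
        h_new = 12.0 - 0.5 * q          # aquifer head falls as it drains (toy relation)
        converged = abs(s_new - s) < tol and abs(h_new - h) < tol
        s, h = s_new, h_new
        if converged:
            break
    return s, h, q

stage, head, leakage = coupled_solve()  # converges with head remaining above stage
```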
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, R; Meyer, J; Horwitz, E
Purpose: Medical advances have resulted in cancer patients living longer, as evidenced by the number of patients seen for possible re-irradiation. Original normal-tissue dose-volume constraints remain in the re-irradiation setting to minimize normal tissue toxicity. This work correlates estimates of equivalent dose and repair with sequelae. Methods: CNS and GI tract re-irradiation patient follow-up records (including imaging studies) were reviewed, with side effects correlated with the calculated EQD2 and repair estimates. Results: Follow-up records for 16 re-irradiation patients with potential overlap to the spinal cord were analyzed. The mean time interval between 1st and last courses was 76.6 months. Three patients underwent a 3rd course of radiotherapy, with a mean time interval between 2nd and final courses of 19.7 months. The mean values for assumed repair were 18.8% and 8.3%, respectively. The calculated total EQD2 doses were 48.09 Gy and 50.98 Gy with and without repair. At a mean follow-up time of 5.0 months, 6 patients were deceased and no records indicate radiation-related neurological deficits. The records for 11 patients with potential overlap to the bowel were also analyzed. The mean time interval between 1st and last courses was 105.9 months. The mean value for assumed repair was 15.9%. The calculated total EQD2 doses were 64.96 Gy and 70.80 Gy with and without repair. At a mean follow-up time of 4.9 months, 6 patients were deceased, one having a potential enteric fistulization of the bladder. Clinical review of the case determined that the fistula was caused by tumor progression and not a side effect of radiotherapy treatments. Conclusion: Application of the EQD2 method in the re-irradiation setting using conservative estimates of repair is presented. Adhering to accepted dose-volume limits following this application is demonstrated to be safe through empirical records, as limited by this small patient cohort and short follow-up.
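The EQD2 values reported come from the linear-quadratic relation EQD2 = n·d·(d + α/β)/(2 + α/β); the fraction numbers and α/β below are illustrative assumptions, not the patients' actual prescriptions:

```python
def eqd2(n_fractions, dose_per_fraction_Gy, alpha_beta_Gy):
    """EQD2 = n*d*(d + a/b)/(2 + a/b): total dose re-expressed in 2-Gy equivalents."""
    d = dose_per_fraction_Gy
    return n_fractions * d * (d + alpha_beta_Gy) / (2.0 + alpha_beta_Gy)

# e.g. a 10 x 3 Gy re-irradiation course with alpha/beta = 2 Gy (a common cord assumption)
course = eqd2(10, 3.0, 2.0)   # → 37.5 Gy
```

A cumulative estimate with repair would then discount the first course's EQD2 by the assumed repair fraction before summing, along the lines described in the abstract.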
Parsons, Tom
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
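The Monte Carlo fitting idea can be sketched for a lognormal recurrence model parameterized by mean and coefficient of variation; the paper's distribution families and weighting scheme may differ, and the interval series below is synthetic:

```python
import math
import random

def loglik_lognormal(intervals, mu, cv):
    """Log-likelihood of recurrence intervals under a lognormal with mean mu, COV cv."""
    sigma2 = math.log(1.0 + cv * cv)          # lognormal shape parameter from COV
    m = math.log(mu) - sigma2 / 2.0           # location parameter giving mean mu
    return sum(-math.log(x * math.sqrt(sigma2) * math.sqrt(2.0 * math.pi))
               - (math.log(x) - m) ** 2 / (2.0 * sigma2) for x in intervals)

def monte_carlo_fit(intervals, n_draws=20000, seed=1):
    """Draw (mean, COV) pairs at random and return them ranked by likelihood,
    so the ranked list can feed uncertainty estimates downstream."""
    rng = random.Random(seed)
    draws = [(rng.uniform(50.0, 500.0), rng.uniform(0.2, 1.5)) for _ in range(n_draws)]
    return sorted(draws, key=lambda p: loglik_lognormal(intervals, *p), reverse=True)

ranked = monte_carlo_fit([132.0, 98.0, 210.0, 161.0, 105.0])   # a short synthetic series
best_mu, best_cv = ranked[0]   # best-ranked pair lands near the sample statistics
```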
Harmonising Reference Intervals for Three Calculated Parameters used in Clinical Chemistry.
Hughes, David; Koerbin, Gus; Potter, Julia M; Glasgow, Nicholas; West, Nic; Abhayaratna, Walter P; Cavanaugh, Juleen; Armbruster, David; Hickman, Peter E
2016-08-01
For more than a decade there has been a global effort to harmonise all phases of the testing process, with particular emphasis on the most frequently utilised measurands. In addition, it is recognised that calculated parameters derived from these measurands should also be a target for harmonisation. Using data from the Aussie Normals study, we report reference intervals for three calculated parameters: serum osmolality, serum anion gap and albumin-adjusted serum calcium. The Aussie Normals study was an a priori study that analysed samples from 1856 healthy volunteers. The nine analytes used for the calculations in this study were measured on Abbott Architect analysers. The data demonstrated normal (Gaussian) distributions for the albumin-adjusted serum calcium, the anion gap (using potassium in the calculation) and the calculated serum osmolality (using both the Bhagat et al. and Smithline and Gardner formulae). To assess the suitability of these reference intervals for use as harmonised reference intervals, we reviewed data from the Royal College of Pathologists of Australasia/Australasian Association of Clinical Biochemists (RCPA/AACB) bias survey. We conclude that the reference intervals for the calculated serum osmolality (using the Smithline and Gardner formula) may be suitable for use as a common reference interval. Although a common reference interval for albumin-adjusted serum calcium may be possible, further investigations (including a greater range of albumin concentrations) are needed. This is due to the bias between the bromocresol green (BCG) and bromocresol purple (BCP) methods at lower serum albumin concentrations. Problems with the measurement of total CO2 in the bias survey meant that we could not use the data to assess the suitability of a common reference interval for the anion gap. Further study is required.
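Two of the three calculated parameters have simple standard forms; the adjustment coefficient for calcium is method-dependent, which is exactly the paper's point about BCG/BCP bias, so the 0.02 factor below is a common convention rather than the study's fitted value:

```python
def anion_gap(na, k, cl, hco3):
    """Anion gap including potassium, all concentrations in mmol/L."""
    return (na + k) - (cl + hco3)

def albumin_adjusted_calcium(ca_mmol_l, albumin_g_l):
    """Common adjustment Ca + 0.02*(40 - albumin); coefficient is method-dependent."""
    return ca_mmol_l + 0.02 * (40.0 - albumin_g_l)

ag = anion_gap(140.0, 4.0, 104.0, 24.0)         # → 16 mmol/L
adj_ca = albumin_adjusted_calcium(2.20, 30.0)   # → 2.40 mmol/L
```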
ERIC Educational Resources Information Center
Levin, Sidney
1984-01-01
Presents the listing (TRS-80) for a computer program which derives the relativistic equation (employing as a model the concept of a moving clock which emits photons at regular intervals) and calculates transformations of time, mass, and length with increasing velocities (Einstein-Lorentz transformations). (JN)
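The Einstein-Lorentz transformations the program derives reduce to scaling by the Lorentz factor γ = 1/√(1 − v²/c²). A short sketch (in Python rather than the original TRS-80 BASIC) of the time, mass, and length transformations the listing calculates:

```python
import math

def gamma(v, c=1.0):
    """Lorentz factor for speed v (requires |v| < c)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def transform(t, m, length, v, c=1.0):
    """Time dilation, relativistic mass increase, and length contraction
    for proper time t, rest mass m, and proper length `length`."""
    g = gamma(v, c)
    return t * g, m * g, length / g
```

At v = 0.6c, γ = 1.25, so a 1 s proper interval dilates to 1.25 s while a unit rod contracts to 0.8 of its rest length.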
Dexter, Franklin; Epstein, Richard H
2013-12-01
Prolonged time to extubation has been defined as the occurrence of a ≥ 15-minute interval from the end of surgery to removal of the tracheal tube. We quantified the increases in the mean times from end of surgery to exit from the operating room (OR) associated with prolonged extubations and tested whether the increases were economically important (≥ 5 minutes). Anesthesia information management system data from 1 tertiary hospital were collected from November 2005 through December 2012 (i.e., the sample comprised N = 22 sequential quarters). Cases were excluded in which the patient's trachea was not intubated or extubated while physically in the OR. For each combination of stratification variable (below) and quarter, the mean time from end of surgery to OR exit was calculated for the extubations that were not prolonged and for those that were prolonged. Results are reported as mean ± SEM, with "at least" denoting the lower 95% confidence limit. The mean times from end of surgery to OR exit were at least 12.6 minutes longer for prolonged extubations when calculated with stratification by duration of surgery and prone or other positioning (13.0 ± 0.1 minutes), P < 0.0001 compared with 5 minutes (i.e., the added times were economically significant). The mean times were at least 11.7 minutes longer when calculated stratified by anesthesia procedure code (12.4 ± 0.4, P < 0.0001) and at least 11.3 minutes longer when calculated stratified by surgeon (12.4 ± 0.6, P < 0.0001). We recommend that anesthesia providers document the times of extubations and monitor the incidence of prolonged extubations as an economic measure. This would be especially important for providers at facilities with many ORs that have at least 8 hours of cases and turnovers.
Evaluating the influential priority of the factors on insurance loss of public transit
Su, Yongmin; Chen, Xinqiang
2018-01-01
Understanding the correlation between influential factors and insurance losses is beneficial for insurers seeking to price accurately and modify the bonus-malus system. Although there have been a number of achievements in modeling insurance losses and claims, limited effort has focused on exploring the relative role of accident characteristics in insurance losses. The primary objective of this study is to evaluate the influential priority of transit accident attributes, such as the time, location and type of accidents. Based on the dataset from the Washington State Transit Insurance Pool (WSTIP) in the USA, we implement several key algorithms to achieve the objectives. First, the K-means algorithm is used to cluster the insurance loss data into six intervals; second, a Grey Relational Analysis (GRA) model is applied to calculate grey relational grades of the influential factors in each interval; in addition, we implement a Naive Bayes model to compute the posterior probability of factor values falling in each interval. The results show that the time, location and type of accidents significantly influence the insurance loss in the first five intervals, but their grey relational grades show no significant difference. In the last interval, which represents the highest insurance loss, the grey relational grade of the time is significantly higher than that of the location and type of accidents. For each value of the time and location, the insurance loss most likely falls in the first and second intervals, which correspond to lower losses. However, for accidents between buses and non-motorized road users, the probability of the insurance loss falling in interval 6 tends to be the highest. PMID:29298337
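The grey relational grades mentioned above can be computed with Deng's classical formulation; a minimal sketch with a hypothetical reference sequence (the study's normalization and weighting details are not given in the abstract, so sequences are assumed pre-normalized and ρ takes its conventional value of 0.5):

```python
def grey_relational_grades(reference, comparisons, rho=0.5):
    """Grey relational grade of each comparison sequence against the
    reference sequence (Deng's classical formulation; rho is the
    distinguishing coefficient, conventionally 0.5)."""
    deltas = [[abs(r - c) for r, c in zip(reference, comp)] for comp in comparisons]
    d_min = min(min(row) for row in deltas)   # global minimum deviation
    d_max = max(max(row) for row in deltas)   # global maximum deviation
    grades = []
    for row in deltas:
        # grey relational coefficient at each point, averaged into a grade
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

A sequence identical to the reference gets grade 1.0; more distant sequences get lower grades, which is the ordering the study uses to rank factors.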
Development of Automatic Control of Bayer Plant Digestion
NASA Astrophysics Data System (ADS)
Riffaud, J. P.
Supervisory computer control has been achieved in Alcan's Bayer plants at Arvida, Quebec, Canada. The purpose of the automatic control system is to stabilize, and consequently increase, the alumina/caustic ratio within the digester train and in the blow-off liquor. Measurements of the electrical conductivity of the liquor are obtained from electrodeless conductivity meters. These signals, along with several others, are scanned by the computer and converted to engineering units using specific relationships which are updated periodically for calibration purposes. At regular time intervals, values of the ratio are compared to target values and adjustments are made to the bauxite flow entering the digesters. Dead-time compensation included in the control algorithm enables a faster rate of correction. Modification of the production rate is achieved through careful timing of various flow changes. Calibration of the conductivity meters is achieved by sampling at intervals the liquor flowing through them and analysing it with a thermometric titrator. Calibration of the thermometric titrator is done at intervals with a standard solution. Calculations for both calibrations are performed by the computer from data entered by the analyst. The computer was used for on-line data collection, modelling of the digester system, calculation of disturbances and simulation of control strategies before the most successful strategy was implemented in the plant. Control of the ratio has been improved by the integrated system, resulting in increased plant productivity.
Human-in-the-Loop Assessment of Alternative Clearances in Interval Management Arrival Operations
NASA Technical Reports Server (NTRS)
Baxley, Brian T.; Wilson, Sara R.; Swieringa, Kurt A.; Johnson, William C.; Roper, Roy D.; Hubbs, Clay E.; Goess, Paul A.; Shay, Richard F.
2016-01-01
Interval Management Alternative Clearances (IMAC) was a human-in-the-loop simulation experiment conducted to explore the Air Traffic Management (ATM) Technology Demonstration (ATD-1) Concept of Operations (ConOps), which combines advanced arrival scheduling, controller decision support tools, and aircraft avionics to enable multiple time deconflicted, efficient arrival streams into a high-density terminal airspace. Interval Management (IM) is designed to support the ATD-1 concept by having an "Ownship" (IM-capable) aircraft achieve or maintain a specific time or distance behind a "Target" (preceding) aircraft. The IM software uses IM clearance information and the Ownship data (route of flight, current location, and wind) entered by the flight crew, and the Target aircraft's Automatic Dependent Surveillance-Broadcast state data, to calculate the airspeed necessary for the IM-equipped aircraft to achieve or maintain the assigned spacing goal.
NASA Astrophysics Data System (ADS)
Liu, Pengbo; Mongelli, Max; Mondry, Adrian
2004-07-01
The purpose of this study is to verify by receiver operating characteristic (ROC) analysis a mathematical model supporting the hypothesis that intrauterine growth restriction (IUGR) can be diagnosed by estimating growth velocity. The ROC analysis compares computerized simulation results with clinical data from 325 pregnant British women. Each patient had 6 consecutive ultrasound examinations for fetal abdominal circumference (fac). Customized and un-customized fetal weights were calculated according to Hadlock's formula. IUGR was diagnosed by the clinical standard, i.e. estimated weight below the tenth percentile. Growth velocity was estimated by calculating the changes of fac (Δfac/Δt) at various time intervals from 3 to 10 weeks. Finally, ROC analysis was used to compare the methods. At a 3-4 week scan interval, the area under the ROC curve is 0.68 for the customized data and 0.66 for the uncustomized data (with 95% confidence intervals). Comparison between simulation data and real pregnancies verified that the model is clinically acceptable.
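The area under the ROC curve used here to compare the diagnostic methods can be computed nonparametrically as a Mann-Whitney statistic; a minimal sketch with illustrative scores (not the study's data):

```python
def roc_auc(scores_pos, scores_neg):
    """Nonparametric AUC: the probability that a randomly chosen positive
    case (e.g. true IUGR) scores higher than a randomly chosen negative
    case; ties count one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Perfect separation yields 1.0 and indistinguishable score distributions yield 0.5, bracketing the 0.66-0.68 values reported above.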
Calculation of power spectrums from digital time series with missing data points
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.
1980-01-01
Two algorithms are developed for calculating power spectrums from the autocorrelation function when there are missing data points in the time series. Both methods use an average sampling interval to compute lagged products. One method, the correlation function power spectrum, takes the discrete Fourier transform of the lagged products directly to obtain the spectrum, while the other, the modified Blackman-Tukey power spectrum, takes the Fourier transform of the mean lagged products. Both techniques require fewer calculations than other procedures since only 50% to 80% of the maximum lags need be calculated. The algorithms are compared with the Fourier transform power spectrum and two least squares procedures (all for an arbitrary data spacing). Examples are given showing recovery of frequency components from simulated periodic data where portions of the time series are missing and random noise has been added to both the time points and to values of the function. In addition the methods are compared using real data. All procedures performed equally well in detecting periodicities in the data.
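The modified Blackman-Tukey route — mean lagged products over the available pairs, taken at an average sampling interval, then a Fourier transform — can be sketched as follows. This is an illustration of the general idea, not the paper's exact algorithm: gaps are represented as NaNs on a nominally uniform grid (i.e., lags are in units of the average sampling interval) and are simply skipped when averaging:

```python
import numpy as np

def bt_spectrum(samples, max_lag):
    """Power spectrum from mean lagged products of a series with missing
    points (NaN). Returns (frequencies, spectrum) for the first max_lag
    frequency bins, in cycles per (average) sample interval."""
    x = np.asarray(samples, dtype=float)
    r = np.empty(max_lag + 1)
    for lag in range(max_lag + 1):
        prods = x[: len(x) - lag] * x[lag:]   # NaN wherever a point is missing
        r[lag] = np.nanmean(prods)            # average over available pairs only
    acf = np.concatenate([r, r[-2:0:-1]])     # even extension -> real transform
    psd = np.real(np.fft.fft(acf))
    freqs = np.fft.fftfreq(len(acf))
    return freqs[:max_lag], psd[:max_lag]
```

Consistent with the paper's point, only the first max_lag lags are ever computed, and a periodic component survives a sizeable gap in the record.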
2013-07-01
structure of the data and Gower's similarity coefficient as the algorithm for calculating the proximity matrices. The following section provides a ... representative set of terrorist event data (attributes Day, Location, Time, Prim/Attack and Sec/Attack, each with weight 1; scales Nominal, Nominal, Interval, Nominal, ...). ... To calculate the similarity it uses Gower's similarity and multidimensional scaling algorithms contained in an R statistical computing environment
The orbital evolution of the Apollo asteroid group over 11,550 years
NASA Astrophysics Data System (ADS)
Zausaev, A. F.; Pushkarev, A. N.
1992-08-01
The Everhart method was used to monitor the orbital evolution of 20 Apollo asteroids over the time interval from 2250 A.D. back to 9300 B.C. The closest encounters with the large planets during the evolution are calculated. Stable resonances with Venus and Earth over the period from 2250 A.D. to 9300 B.C. are obtained. Theoretical coordinates of the radiants at the initial and final moments of integration are calculated.
Park, Myung Sook; Kang, Kyung Ja; Jang, Sun Joo; Lee, Joo Yun; Chang, Sun Ju
2018-03-01
This study aimed to evaluate the components of test-retest reliability, including time interval, sample size, and the statistical methods used, in patient-reported outcome measures in older people and to provide suggestions on the methodology for calculating test-retest reliability for patient-reported outcomes in older people. This was a systematic literature review. MEDLINE, Embase, CINAHL, and PsycINFO were searched from January 1, 2000 to August 10, 2017 by an information specialist. This systematic review was guided by both the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and the guideline for systematic review published by the National Evidence-based Healthcare Collaborating Agency in Korea. The methodological quality was assessed by the Consensus-based Standards for the selection of health Measurement Instruments checklist box B. Ninety-five out of 12,641 studies were selected for the analysis. The median time interval for test-retest reliability was 14 days, and the ratio of sample size for test-retest reliability to the number of items in each measure ranged from 1:1 to 1:4. The most frequently used statistical method for continuous scores was the intraclass correlation coefficient (ICC). Among the 63 studies that used ICCs, 21 studies presented models for ICC calculations and 30 studies reported 95% confidence intervals of the ICCs. Additional analyses using 17 studies that reported a strong ICC (>0.9) showed that the mean time interval was 12.88 days and the mean ratio of the number of items to sample size was 1:5.37. When researchers plan to assess the test-retest reliability of patient-reported outcome measures for older people, they need to consider an adequate time interval of approximately 13 days and a sample size of about 5 times the number of items.
In particular, statistical methods should not only be selected based on the types of scores of the patient-reported outcome measures, but should also be described clearly in the studies that report the results of test-retest reliability.
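The two-way random-effects ICC that most of the reviewed studies report can be computed from an ANOVA decomposition of the n-subjects-by-k-occasions score matrix. A minimal sketch of the single-measurement, absolute-agreement form, commonly labeled ICC(2,1) (the reviewed studies used several ICC models; this one is shown for illustration):

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement. `scores` is a list of n subject rows x k occasion columns."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)    # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)    # occasions
    ss_total = sum((scores[i][j] - grand) ** 2 for i in range(n) for j in range(k))
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect test-retest agreement gives 1.0; a systematic shift between occasions (e.g. every retest score one point higher) lowers the absolute-agreement ICC even though the rank ordering is preserved.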
High resolution in situ ultrasonic corrosion monitor
Grossman, R.J.
1984-01-10
An ultrasonic corrosion monitor is provided which produces an in situ measurement of the amount of corrosion of a monitoring zone or zones of an elongate probe placed in the corrosive environment. A monitoring zone is preferably formed between the end of the probe and the junction of the zone with a lead-in portion of the probe. Ultrasonic pulses are applied to the probe and a determination made of the time interval between pulses reflected from the end of the probe and the junction referred to, both when the probe is uncorroded and while it is corroding. Corresponding electrical signals are produced and a value for the normalized transit time delay derived from these time interval measurements is used to calculate the amount of corrosion.
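The conversion from the normalized transit-time delay to metal loss follows from the pulses traversing the monitoring zone at an (assumed constant) sound velocity: the echo separation is proportional to the remaining zone length. A sketch under that proportionality assumption (function and parameter names are illustrative, not from the patent):

```python
def corrosion_depth(zone_length_mm, t_corroded, t_uncorroded):
    """Metal loss of the monitoring zone, assuming the interval between
    the end-of-probe and junction echoes is proportional to the remaining
    zone length (constant sound velocity)."""
    normalized_delay = t_corroded / t_uncorroded   # fraction of zone remaining
    return zone_length_mm * (1.0 - normalized_delay)
```

Normalizing by the uncorroded baseline measurement, as the patent describes, cancels the need to know the sound velocity explicitly.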
Stevens, Ken
1984-01-01
Mobil Oil Corporation personnel have designated at least four sandstone intervals, A-D (top to bottom), on the single-point resistivity logs of wells drilled in the South Trend Development Area. This report presents time-drawdown data reported by Mobil Oil Corporation from singly (A or B or C or D sandstone interval) and multiply (A, B, C, and D sandstone intervals) completed wells for the August 16-17, 1982 aquifer test at the South Trend Development Area Site 1. This report also describes the results of flowmeter and brine-injection tests by the U.S. Geological Survey in monitoring well 16P52. Well 16P52 is open to sandstone intervals A, B, C, and D. On July 26, 1982, water was injected at a rate of 1.43 cubic feet per minute above the A sandstone interval in well 16P52. Based on flowmeter data, the calculated rates of flow were 1.23 cubic feet per minute between the A and B sandstone intervals, 0.63 cubic foot per minute between the B and C sandstone intervals, and less than 0.17 cubic foot per minute between the C and D sandstone intervals. Based upon brine-slug-injection tests conducted during August 1982, the calculated flow rates between sandstone intervals A and B are as follows: 0.01 cubic foot per minute upward flow (B to A) about 5 hours after pumping began for the aquifer test; 0.004 cubic foot per minute upward flow (B to A) about 21 hours after pumping began; and 0.0 cubic foot per minute about 46 hours after the pump was turned off. All other brine-slug-injection tests measured no flow.
Cross-sample entropy of foreign exchange time series
NASA Astrophysics Data System (ADS)
Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao
2010-11-01
The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The calculation method for the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series in the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before it. Comparison with the correlation coefficient shows that cross-SampEn is superior in describing the correlation between time series.
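Cross-SampEn between two return series counts template matches of lengths m and m+1 across the two series and reports −ln(A/B). A minimal brute-force sketch (the paper's parameter choices and normalization are not given in the abstract; returns are conventionally standardized and r taken as a fraction of the standard deviation):

```python
import math

def cross_sampen(u, v, m=2, r=0.2):
    """Cross-sample entropy between series u and v: -ln(A/B), where B and A
    count template pairs (one from u, one from v) within Chebyshev distance
    r at lengths m and m+1. Higher values mean greater asynchrony."""
    n = min(len(u), len(v))
    def matches(length):
        count = 0
        for i in range(n - m):
            for j in range(n - m):
                if max(abs(u[i + k] - v[j + k]) for k in range(length)) <= r:
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")   # no matches: entropy unbounded for this r
    return -math.log(a / b)
```

Two perfectly synchronized periodic series give 0 (every length-m match extends to length m+1), while series that never match within r give an unbounded value.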
NASA Technical Reports Server (NTRS)
Berman, A. L.
1977-01-01
Observations of Viking differenced S-band/X-band (S-X) range are shown to correlate strongly with Viking Doppler noise. A ratio of proportionality between downlink S-band plasma-induced range error and two-way Doppler noise is calculated. A new parameter (similar to the parameter epsilon which defines the ratio of local electron density fluctuations to mean electron density) is defined as a function of observed data sample interval (Tau) where the time-scale of the observations is 15 Tau. This parameter is interpreted to yield the ratio of net observed phase (or electron density) fluctuations to integrated electron density (in RMS meters/meter). Using this parameter and the thin phase-changing screen approximation, a value for the scale size L is calculated. To be consistent with Doppler noise observations, it is seen necessary for L to be proportional to closest approach distance a, and a strong function of the observed data sample interval, and hence the time-scale of the observations.
Evaluation of confidence intervals for a steady-state leaky aquifer model
Christensen, S.; Cooley, R.L.
1999-01-01
The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.
Determination of heart rate variability with an electronic stethoscope.
Kamran, Haroon; Naggar, Isaac; Oniyuke, Francisca; Palomeque, Mercy; Chokshi, Priya; Salciccioli, Louis; Stewart, Mark; Lazar, Jason M
2013-02-01
Heart rate variability (HRV) is widely used to characterize cardiac autonomic function by measuring beat-to-beat alterations in heart rate. Decreased HRV has been found predictive of worse cardiovascular (CV) outcomes. HRV is determined from the time intervals between QRS complexes recorded by electrocardiography (ECG) for several minutes to 24 h. Although cardiac auscultation with a stethoscope is performed routinely on patients, the human ear cannot detect heart sound time intervals. The electronic stethoscope digitally processes heart sounds, from which cardiac time intervals can be obtained. Accordingly, the objective of this study was to determine the feasibility of obtaining HRV from electronically recorded heart sounds. We prospectively studied 50 subjects with and without CV risk factors/disease and simultaneously recorded single-lead ECG and heart sounds for 2 min. Time- and frequency-domain measures of HRV were calculated from R-R and S1-S1 intervals and were compared using intra-class correlation coefficients (ICCs). The majority of the indices were strongly correlated (ICC 0.73-1.0), while the remaining indices were moderately correlated (ICC 0.56-0.63). In conclusion, we found that HRV measures determined from S1-S1 intervals agree with those determined by single-lead ECG, and we demonstrate and discuss differences between the measures in detail. In addition to characterizing cardiac murmurs and time intervals, the electronic stethoscope holds promise as a convenient, low-cost tool for determining HRV in hospital and outpatient settings as a practical extension of the physical examination.
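Time-domain HRV indices of the kind compared here are simple statistics of successive interbeat intervals, whether R-R from ECG or S1-S1 from heart sounds. A minimal sketch of two standard measures (illustrative; the study's full index set is larger, and the population-SD form is used here for simplicity):

```python
import math

def sdnn(intervals_ms):
    """Standard deviation of all interbeat intervals (SDNN), population form."""
    mean = sum(intervals_ms) / len(intervals_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in intervals_ms) / len(intervals_ms))

def rmssd(intervals_ms):
    """Root mean square of successive interval differences (RMSSD)."""
    diffs = [b - a for a, b in zip(intervals_ms, intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

A metronomic heart (constant intervals) scores zero on both; beat-to-beat alternation drives RMSSD in particular.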
Olsen, Anne-Marie Schjerning; Fosbøl, Emil L; Lindhardsen, Jesper; Folke, Fredrik; Charlot, Mette; Selmer, Christian; Bjerring Olesen, Jonas; Lamberts, Morten; Ruwald, Martin H; Køber, Lars; Hansen, Peter R; Torp-Pedersen, Christian; Gislason, Gunnar H
2012-10-16
The cardiovascular risk after a first myocardial infarction (MI) declines rapidly during the first year. We analyzed whether the cardiovascular risk associated with using nonsteroidal anti-inflammatory drugs (NSAIDs) varied with the time elapsed following first-time MI. We identified patients aged 30 years or older admitted with first-time MI in 1997 to 2009 and subsequent NSAID use by individual-level linkage of nationwide registries of hospitalization and drug dispensing from pharmacies in Denmark. We calculated the incidence rates of death and of a composite end point of coronary death or nonfatal recurrent MI associated with NSAID use in 1-year time intervals up to 5 years after inclusion and analyzed risk using multivariable adjusted time-dependent Cox proportional hazards models. Of the 99,187 patients included, 43,608 (44%) were prescribed NSAIDs after the index MI. There were 36,747 deaths and 28,693 coronary deaths or nonfatal recurrent MIs during the 5 years of follow-up. Relative to noncurrent treatment with NSAIDs, the use of any NSAID in the years following MI was persistently associated with an increased risk of death (hazard ratio, 1.59 [95% confidence interval, 1.49-1.69] after 1 year and hazard ratio, 1.63 [95% confidence interval, 1.52-1.74] after 5 years) and of coronary death or nonfatal recurrent MI (hazard ratio, 1.30 [95% confidence interval, 1.22-1.39] and hazard ratio, 1.41 [95% confidence interval, 1.28-1.55], respectively). The use of NSAIDs is associated with persistently increased coronary risk regardless of time elapsed after first-time MI. We advise long-term caution in the use of NSAIDs for patients after MI.
Lührs, Michael; Goebel, Rainer
2017-10-01
Turbo-Satori is a neurofeedback and brain-computer interface (BCI) toolbox for real-time functional near-infrared spectroscopy (fNIRS). It incorporates multiple pipelines from real-time preprocessing and analysis to neurofeedback and BCI applications. The toolbox is designed with a focus on usability, enabling fast setup and execution of real-time experiments. Turbo-Satori uses an incremental recursive least-squares procedure for real-time general linear model calculation and support vector machine classifiers for advanced BCI applications. It communicates directly with common NIRx fNIRS hardware and was tested extensively, ensuring that the calculations can be performed in real time without a significant change in calculation times for all sampling intervals during ongoing experiments of up to 6 h of recording. Immediate access to advanced processing features also allows the toolbox to be used by students and nonexperts in the field of fNIRS data acquisition and processing. Flexible network interfaces allow third-party stimulus applications to access the processed data and calculated statistics in real time, so that this information can be easily incorporated into neurofeedback or BCI presentations.
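An incremental recursive least-squares GLM fit of the kind described can be sketched generically: each new fNIRS sample updates the beta estimates in O(p²) time for p predictors, which is why per-sample cost stays flat over hours of recording. This is textbook RLS, not the toolbox's actual code; the class name and API are hypothetical:

```python
import numpy as np

class RecursiveGLM:
    """Recursive least squares for y_t = x_t . beta + noise (no forgetting)."""
    def __init__(self, n_predictors, init_var=1e6):
        self.beta = np.zeros(n_predictors)
        self.P = np.eye(n_predictors) * init_var   # large prior variance

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (1.0 + x @ Px)                 # Kalman-style gain
        self.beta += gain * (y - x @ self.beta)    # correct by prediction error
        self.P -= np.outer(gain, Px)               # shrink uncertainty
        return self.beta
```

Fed a noiseless signal y = 1 + 2t with a constant-plus-trend design, the estimates converge to the ordinary least-squares solution without ever re-solving the full regression.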
Dynamic response analysis of structure under time-variant interval process model
NASA Astrophysics Data System (ADS)
Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao
2016-10-01
Due to the aggressiveness of environmental factors, the variation of dynamic loads, the degradation of material properties and the wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model which can effectively deal with time-variant uncertainties given limited information. Two methods are then presented for the dynamic response analysis of a structure under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is the Monte Carlo method based on Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be efficiently calculated, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To address the dependency phenomenon of interval arithmetic, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples: a spring-mass-damper system and a shell structure.
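The idea behind MCM-CPE — replace an expensive response with a cheap Chebyshev surrogate, then bound it by Monte Carlo sampling over the interval variable — can be sketched on a toy scalar response (not the structural dynamics problem, and without the affine-arithmetic refinement):

```python
import numpy as np

def chebyshev_interval_bounds(response, lower, upper, degree=8, n_mc=20000, seed=0):
    """Fit a Chebyshev polynomial to `response` on [lower, upper], then
    estimate the variational range of the response by Monte Carlo sampling
    of the surrogate (far cheaper than sampling `response` itself)."""
    # fit at Chebyshev nodes mapped onto [lower, upper]
    u = np.cos(np.pi * (np.arange(degree + 1) + 0.5) / (degree + 1))
    x_nodes = lower + (upper - lower) * (u + 1.0) / 2.0
    coeffs = np.polynomial.chebyshev.chebfit(u, response(x_nodes), degree)
    # Monte Carlo over the interval variable, evaluated on the surrogate
    rng = np.random.default_rng(seed)
    u_samples = rng.uniform(-1.0, 1.0, n_mc)
    vals = np.polynomial.chebyshev.chebval(u_samples, coeffs)
    return vals.min(), vals.max()
```

For a smooth response such as sin on [0, π/2], a degree-8 surrogate recovers the true range [0, 1] to within sampling resolution, while each Monte Carlo evaluation costs only a polynomial evaluation.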
Research of the orbital evolution of asteroid 2012 DA14 (in Russian)
NASA Astrophysics Data System (ADS)
Zausaev, A. F.; Denisov, S. S.; Derevyanka, A. E.
The orbital evolution of asteroid 2012 DA14 is studied over the time interval from 1800 to 2206; the object's close approaches with Earth and the Moon are detected, and the probability of an impact with Earth is calculated. The mathematical model used is consistent with DE405; the integration was performed using a modified Everhart method of 27th order, and the collision probability is calculated using the Monte Carlo method.
Persistence of non-Markovian Gaussian stationary processes in discrete time
NASA Astrophysics Data System (ADS)
Nyberg, Markus; Lizana, Ludvig
2018-04-01
The persistence of a stochastic variable is the probability that it does not cross a given level during a fixed time interval. Although persistence is a simple concept to understand, it is in general hard to calculate. Here we consider zero-mean Gaussian stationary processes in discrete time n. Few results are known for the persistence P0(n) in discrete time, except the large-time behavior, which is characterized by the nontrivial constant θ through P0(n) ~ θ^n. Using a modified version of the independent interval approximation (IIA) that we developed before, we are able to calculate P0(n) analytically in z-transform space in terms of the autocorrelation function A(n). If A(n) → 0 as n → ∞, we extract θ numerically, while if A(n) = 0 for finite n > N, we find θ exactly (within the IIA). We apply our results to three special cases: the nearest-neighbor-correlated "first-order moving average process", where A(n) = 0 for n > 1; the double-exponential-correlated "second-order autoregressive process", where A(n) = c1 λ1^n + c2 λ2^n; and power-law-correlated variables, where A(n) ~ n^(-μ). Apart from the power-law case when μ < 5, we find excellent agreement with simulations.
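In the simplest limiting case the answer is exact: for uncorrelated Gaussian variables (A(n) = 0 for all n ≥ 1), each sample independently stays above zero with probability 1/2, so P0(n) = 2^(-n) and θ = 1/2. A quick Monte Carlo check of that base case (a sanity illustration, not the IIA calculation itself):

```python
import numpy as np

def persistence(n_steps, n_trials=200000, seed=1):
    """Estimated probability that an i.i.d. zero-mean Gaussian sequence of
    length n_steps never goes below zero (no crossing of the zero level)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_trials, n_steps))
    return np.mean(np.all(x > 0.0, axis=1))
```

The estimate for n = 3 sits near 1/8, and the ratio P0(n+1)/P0(n) approaches θ = 1/2, matching the exponential decay described above.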
The Innisfree meteorite: Dynamical history of the orbit - Possible family of meteor bodies
NASA Astrophysics Data System (ADS)
Galibina, I. V.; Terent'eva, A. K.
1987-09-01
The evolution of the Innisfree meteorite orbit caused by secular perturbations is studied over a time interval of 500,000 years (from the current epoch backwards). Calculations are made by the Gauss-Halphen-Gorjatschew method, taking into account perturbations from the four outer planets: Jupiter, Saturn, Uranus and Neptune. Over this time interval the meteorite orbit has undergone no essential transformations. The Innisfree orbit intersected the Earth's orbit in 91 cases and the orbit of Mars in 94 cases. A system of small and large meteor bodies (producing ordinary meteors and fireballs) which may be genetically related to the Innisfree meteorite has been found; i.e., there probably exists an Innisfree family of meteor bodies.
Forensic use of the Greulich and Pyle atlas: prediction intervals and relevance.
Chaumoitre, K; Saliba-Serre, B; Adalian, P; Signoli, M; Leonetti, G; Panuel, M
2017-03-01
The Greulich and Pyle (GP) atlas is one of the most frequently used methods of bone age (BA) estimation. Our aim is to assess its accuracy and to calculate the prediction intervals at 95% for forensic use. The study was conducted on a multi-ethnic sample of 2614 individuals (1423 boys and 1191 girls) referred to the university hospital of Marseille (France) for simple injuries. Hand radiographs were analysed using the GP atlas. Reliability of the GP atlas and agreement between BA and chronological age (CA) were assessed, and prediction intervals at 95% were calculated. The repeatability was excellent and the reproducibility was good. Pearson's linear correlation coefficient between CA and BA was 0.983. The mean difference between BA and CA was -0.18 years (boys) and 0.06 years (girls). The prediction interval at 95% for CA was given for each GP category and ranged from 1.2 to more than 4.5 years. The GP atlas is a reproducible and repeatable method that is still accurate for the present population, with a high correlation between BA and CA. The prediction intervals at 95% are wide, reflecting individual variability, and should be known when the method is used in forensic cases. • The GP atlas is still accurate at the present time. • There is a high correlation between bone age and chronological age. • Individual variability must be known when GP is used in forensic cases. • Prediction intervals (95%) are large; around 4 years beyond the age of 10.
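As a rough illustration of how such a 95% prediction interval can be obtained from paired bone-age/chronological-age data, here is a sketch using synthetic numbers (the bias and scatter below are invented, not the Marseille sample); a full analysis would compute the interval separately for each GP category.

```python
import numpy as np

def prediction_interval_95(ca, ba):
    """Mean difference (CA - BA) and the half-width of a 95% prediction
    interval for CA given an estimated bone age, from paired data."""
    d = np.asarray(ca, float) - np.asarray(ba, float)
    mean_d = d.mean()
    # Prediction interval for a *new* individual: inflate the standard
    # deviation by the uncertainty of the estimated mean difference.
    s = d.std(ddof=1) * np.sqrt(1 + 1 / len(d))
    return mean_d, 1.96 * s

rng = np.random.default_rng(1)
ba = rng.uniform(5, 18, 500)               # hypothetical bone ages (years)
ca = ba - 0.18 + rng.normal(0, 1.1, 500)   # CA scattered around BA - 0.18 y
bias, hw = prediction_interval_95(ca, ba)
# Predicted CA range for a subject read as BA = 12 on the atlas:
low, high = 12 + bias - hw, 12 + bias + hw
```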
Mahoney, Martin C; Va, Puthiery; Stevens, Adrian; Kahn, Amy R; Michalek, Arthur M
2009-01-01
This manuscript examines shifts in patterns of cancer incidence among the Seneca Nation of Indians (SNI) for the interval 1955-1969 compared to 1990-2004. A retrospective cohort design was used to examine cancer incidence among the SNI during 2 time intervals: 1955-1969 and 1990-2004. Person-years at risk were multiplied by cancer incidence rates for New York State, exclusive of New York City, over 5-year intervals. A computer-aided match with the New York State Cancer Registry was used to identify incident cancers. Overall and site-specific standardized incidence ratios (SIRs = observed/expected x 100), and 95% confidence intervals (CIs), were calculated for both time periods. During the earlier interval, deficits in overall cancer incidence were noted among males (SIR = 56, CI 36-82) and females (SIR = 71, CI 50-98), and for female breast cancers (SIR = 21, CI 4-62). During the more recent intervals, deficits in overall cancer incidence persisted among both genders (males SIR = 63, CI 52-77; females SIR = 67, CI 55-80). Deficits were also noted among males for cancers of the lung (SIR = 60, CI 33-98), prostate (SIR = 51, CI = 33-76) and bladder (SIR = 17, CI = 2-61) and among females for breast (SIR = 33, CI = 20-53) and uterus (SIR = 36, CI = 10-92). No cancer sites demonstrated increased incidence. Persons ages 60-69 years, 70-79 years, and ages 80+ years tended to exhibit deficits in overall incidence. Despite marked changes over time, deficits in overall cancer incidence have persisted between the time intervals studied. Tribal-specific cancer data are important for the development and implementation of comprehensive cancer control plans which align with local needs.
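The SIR and its 95% CI can be computed directly from observed and expected counts; the sketch below uses Byar's approximation to the exact Poisson limits (the abstract does not state which CI method the study used), with invented counts.

```python
import math

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio (x 100) with an approximate 95% CI
    using Byar's approximation to the exact Poisson limits."""
    sir = 100.0 * observed / expected
    o = observed
    lower = o * (1 - 1 / (9 * o) - z / (3 * math.sqrt(o)))**3 if o > 0 else 0.0
    u = o + 1
    upper = u * (1 - 1 / (9 * u) + z / (3 * math.sqrt(u)))**3
    return sir, 100.0 * lower / expected, 100.0 * upper / expected

# Hypothetical counts: 20 observed cancers where 32.0 were expected.
sir, lo, hi = sir_with_ci(20, 32.0)
```

An SIR below 100 with an upper confidence limit below 100, as in several sites reported above, indicates a statistically significant deficit in incidence.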
Estimation of the cloud transmittance from radiometric measurements at the ground level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, Dario; Mares, Oana, E-mail: mareshoana@yahoo.com
2014-11-24
The extinction of solar radiation due to the clouds is more significant than due to any other atmospheric constituent, but it is always difficult to be modeled because of the random distribution of clouds on the sky. Moreover, the transmittance of a layer of clouds is in a very complex relation with their type and depth. A method for estimating cloud transmittance was proposed in Paulescu et al. (Energ. Convers. Manage, 75 690-697, 2014). The approach is based on the hypothesis that the structure of the cloud covering the sun at a time moment does not change significantly in a short time interval (several minutes). Thus, the cloud transmittance can be calculated as the estimated coefficient of a simple linear regression for the computed versus measured solar irradiance in a time interval Δt. The aim of this paper is to optimize the length of the time interval Δt. Radiometric data measured on the Solar Platform of the West University of Timisoara during 2010 at a frequency of 1/15 seconds are used in this study.
Estimation of the cloud transmittance from radiometric measurements at the ground level
NASA Astrophysics Data System (ADS)
Costa, Dario; Mares, Oana
2014-11-01
The extinction of solar radiation due to the clouds is more significant than due to any other atmospheric constituent, but it is always difficult to be modeled because of the random distribution of clouds on the sky. Moreover, the transmittance of a layer of clouds is in a very complex relation with their type and depth. A method for estimating cloud transmittance was proposed in Paulescu et al. (Energ. Convers. Manage, 75 690-697, 2014). The approach is based on the hypothesis that the structure of the cloud covering the sun at a time moment does not change significantly in a short time interval (several minutes). Thus, the cloud transmittance can be calculated as the estimated coefficient of a simple linear regression for the computed versus measured solar irradiance in a time interval Δt. The aim of this paper is to optimize the length of the time interval Δt. Radiometric data measured on the Solar Platform of the West University of Timisoara during 2010 at a frequency of 1/15 seconds are used in this study.
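The regression step of the method can be sketched as follows; the no-intercept fit, the window length, and the irradiance values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def cloud_transmittance(g_clear, g_measured):
    """Estimate cloud transmittance as the slope of a no-intercept
    least-squares fit of measured irradiance against the computed
    clear-sky irradiance over a short window Δt."""
    g_clear = np.asarray(g_clear, float)
    g_measured = np.asarray(g_measured, float)
    return float(g_clear @ g_measured / (g_clear @ g_clear))

# Hypothetical 5-minute window sampled every 15 s (20 points):
t = np.arange(20)
g_clear = 600 + 2.0 * t                         # slowly rising clear-sky irradiance, W/m^2
rng = np.random.default_rng(2)
g_meas = 0.45 * g_clear + rng.normal(0, 5, 20)  # ~45%-transmitting cloud plus sensor noise
tau = cloud_transmittance(g_clear, g_meas)
```

Optimizing Δt, the subject of the paper, amounts to repeating this fit for different window lengths and scoring the stability of the recovered slope.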
A Microsoft Excel® 2010 Based Tool for Calculating Interobserver Agreement
Azulay, Richard L
2011-01-01
This technical report provides detailed information on the rationale for using a common computer spreadsheet program (Microsoft Excel®) to calculate various forms of interobserver agreement for both continuous and discontinuous data sets. In addition, we provide a brief tutorial on how to use an Excel spreadsheet to automatically compute traditional total count, partial agreement-within-intervals, exact agreement, trial-by-trial, interval-by-interval, scored-interval, unscored-interval, total duration, and mean duration-per-interval interobserver agreement algorithms. We conclude with a discussion of how practitioners may integrate this tool into their clinical work. PMID:22649578
A Microsoft Excel® 2010 based tool for calculating interobserver agreement.
Reed, Derek D; Azulay, Richard L
2011-01-01
This technical report provides detailed information on the rationale for using a common computer spreadsheet program (Microsoft Excel®) to calculate various forms of interobserver agreement for both continuous and discontinuous data sets. In addition, we provide a brief tutorial on how to use an Excel spreadsheet to automatically compute traditional total count, partial agreement-within-intervals, exact agreement, trial-by-trial, interval-by-interval, scored-interval, unscored-interval, total duration, and mean duration-per-interval interobserver agreement algorithms. We conclude with a discussion of how practitioners may integrate this tool into their clinical work.
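For readers who prefer code to a spreadsheet, three of the interval-based algorithms can be sketched in a few lines; the observer records below are invented, and the formulas follow the conventional definitions rather than the spreadsheet's exact cell logic.

```python
def interval_by_interval(a, b):
    """Percent of intervals on which the two observers agree."""
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

def scored_interval(a, b):
    """Agreement restricted to intervals that either observer scored;
    guards against inflation when the behavior is rare."""
    scored = [(x, y) for x, y in zip(a, b) if x or y]
    return 100.0 * sum(x == y for x, y in scored) / len(scored) if scored else 100.0

def unscored_interval(a, b):
    """Agreement restricted to intervals not scored by both observers;
    the complementary check for very frequent behavior."""
    unscored = [(x, y) for x, y in zip(a, b) if not (x and y)]
    return 100.0 * sum(x == y for x, y in unscored) / len(unscored) if unscored else 100.0

# Hypothetical partial-interval records (1 = behavior observed):
obs1 = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
obs2 = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]
```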
Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Wada, Takao
2014-07-01
Particle motion under thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for particle sizes around 1 μm, are treated in this paper. The problem with thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step interval becomes very small when the motion of a large particle is simulated. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, which represents the number of molecules colliding with the particle in one collision event. This factor permits a much larger time step interval: about a million times longer than the conventional DSMC time step when the particle size is 1 μm, so the computation time is reduced by a factor of about one million. We simulate graphite particle motion under thermophoretic force using DSMC-Neutrals (Particle-PLUS neutral module), commercial software adopting the DSMC method, with the above collision weight factor. The particle is a sphere of 1 μm diameter, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit is the same as Waldmann's.
Percolation flux and Transport velocity in the unsaturated zone, Yucca Mountain, Nevada
Yang, I.C.
2002-01-01
The percolation flux for borehole USW UZ-14 was calculated from 14C residence times of pore water and water content of cores measured in the laboratory. Transport velocity is calculated from the depth interval between two points divided by the difference in 14C residence times. Two methods were used to calculate the flux and velocity. The first method uses the 14C data and cumulative water content data directly in the incremental intervals in the Paintbrush nonwelded unit and the Topopah Spring welded unit. The second method uses the regression relation for 14C data and cumulative water content data for the entire Paintbrush nonwelded unit and the Topopah Spring Tuff/Topopah Spring welded unit. Using the first method, for the Paintbrush nonwelded unit in borehole USW UZ-14, percolation flux ranges from 2.3 to 41.0 mm/a. Transport velocity ranges from 1.2 to 40.6 cm/a. For the Topopah Spring welded unit, percolation flux ranges from 0.9 to 5.8 mm/a in the 8 incremental intervals calculated. Transport velocity ranges from 1.4 to 7.3 cm/a in the 8 incremental intervals. Using the second method, average percolation flux in the Paintbrush nonwelded unit for 6 boreholes ranges from 0.9 to 4.0 mm/a at the 95% confidence level. Average transport velocity ranges from 0.6 to 2.6 cm/a. For the Topopah Spring welded unit and Topopah Spring Tuff, average percolation flux in 5 boreholes ranges from 1.3 to 3.2 mm/a. Average transport velocity ranges from 1.6 to 4.0 cm/a. Both the average percolation flux and average transport velocity in the PTn are smaller than in the TS/TSw. However, the average minimum and average maximum values for the percolation flux in the TS/TSw are within the PTn average range. Therefore, differences in the percolation flux in the two units are not significant.
On the other hand, average, average minimum, and average maximum transport velocities in the TS/TSw unit are all larger than the PTn values, implying a larger transport velocity for the TS/TSw although there is a small overlap.
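The velocity calculation described above is simple arithmetic; the sketch below illustrates it with invented depths and 14C residence times, and the flux step (velocity scaled by volumetric water content) is a simplified reading of the method, not the study's exact procedure.

```python
def transport_velocity(depth_upper_m, depth_lower_m, age_upper_yr, age_lower_yr):
    """Transport velocity (cm/a): the depth interval between two sampled
    points divided by the difference in 14C residence times."""
    dz_cm = (depth_lower_m - depth_upper_m) * 100.0
    dt_yr = age_lower_yr - age_upper_yr
    return dz_cm / dt_yr

def percolation_flux(velocity_cm_per_a, volumetric_water_content):
    """Percolation flux (mm/a): velocity scaled by the volumetric water
    content carried through the interval (simplified assumption)."""
    return velocity_cm_per_a * 10.0 * volumetric_water_content

# Hypothetical samples 30 m apart whose pore waters differ by 1500 years
# in 14C residence time, with 10% volumetric water content:
v = transport_velocity(50.0, 80.0, 3000.0, 4500.0)
q = percolation_flux(v, 0.10)
```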
Katki, Hormuzd A.; Cheung, Li C.; Fetterman, Barbara; Castle, Philip E.; Sundaram, Rajeshwari
2014-01-01
New cervical cancer screening guidelines in the US and many European countries recommend that women get tested for human papillomavirus (HPV). To inform decisions about screening intervals, we calculate the increase in precancer/cancer risk per year of continued HPV infection. However, both time to onset of precancer/cancer and time to HPV clearance are interval-censored, and onset of precancer/cancer strongly informatively censors HPV clearance. We analyze this bivariate informatively interval-censored data by developing a novel joint model for time to clearance of HPV and time to precancer/cancer using shared random-effects, where the estimated mean duration of each woman's HPV infection is a covariate in the submodel for time to precancer/cancer. The model was fit to data on 9,553 HPV-positive/Pap-negative women undergoing cervical cancer screening at Kaiser Permanente Northern California, data that were pivotal to the development of US screening guidelines. We compare the implications for screening intervals of this joint model to those from population-average marginal models of precancer/cancer risk. In particular, after 2 years the marginal population-average precancer/cancer risk was 5%, suggesting a 2-year interval to control population-average risk at 5%. In contrast, the joint model reveals that almost all women exceeding 5% individual risk in 2 years also exceeded 5% in 1 year, suggesting that a 1-year interval is better to control individual risk at 5%. The example suggests that sophisticated risk models capable of predicting individual risk may have different implications than population-average risk models that are currently used for informing medical guideline development. PMID:26556961
Katki, Hormuzd A; Cheung, Li C; Fetterman, Barbara; Castle, Philip E; Sundaram, Rajeshwari
2015-10-01
New cervical cancer screening guidelines in the US and many European countries recommend that women get tested for human papillomavirus (HPV). To inform decisions about screening intervals, we calculate the increase in precancer/cancer risk per year of continued HPV infection. However, both time to onset of precancer/cancer and time to HPV clearance are interval-censored, and onset of precancer/cancer strongly informatively censors HPV clearance. We analyze this bivariate informatively interval-censored data by developing a novel joint model for time to clearance of HPV and time to precancer/cancer using shared random-effects, where the estimated mean duration of each woman's HPV infection is a covariate in the submodel for time to precancer/cancer. The model was fit to data on 9,553 HPV-positive/Pap-negative women undergoing cervical cancer screening at Kaiser Permanente Northern California, data that were pivotal to the development of US screening guidelines. We compare the implications for screening intervals of this joint model to those from population-average marginal models of precancer/cancer risk. In particular, after 2 years the marginal population-average precancer/cancer risk was 5%, suggesting a 2-year interval to control population-average risk at 5%. In contrast, the joint model reveals that almost all women exceeding 5% individual risk in 2 years also exceeded 5% in 1 year, suggesting that a 1-year interval is better to control individual risk at 5%. The example suggests that sophisticated risk models capable of predicting individual risk may have different implications than population-average risk models that are currently used for informing medical guideline development.
Zero-crossing statistics for non-Markovian time series.
Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias
2018-03-01
In applications ranging from image analysis and speech recognition to energy dissipation in turbulence and time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula that gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.
Zero-crossing statistics for non-Markovian time series
NASA Astrophysics Data System (ADS)
Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias
2018-03-01
In applications ranging from image analysis and speech recognition to energy dissipation in turbulence and time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula that gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.
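For a discrete-time zero-mean Gaussian stationary sequence, the analogue of the Rice mean-crossing result is that a sign change occurs at each step with probability arccos(A(1))/π, where A(1) is the lag-one autocorrelation. The sketch below checks this against a simulated AR(1) sequence; it reproduces only the mean, not the higher cumulants derived in the paper.

```python
import math
import numpy as np

def mean_crossings_per_step(rho):
    """Expected fraction of steps at which a zero-mean Gaussian
    stationary sequence with lag-one autocorrelation rho changes sign."""
    return math.acos(rho) / math.pi

rng = np.random.default_rng(3)
rho, n = 0.6, 100_000
# AR(1): x_{k+1} = rho*x_k + sqrt(1 - rho^2)*eta_k has A(1) = rho.
x = np.empty(n)
x[0] = rng.standard_normal()
noise = math.sqrt(1 - rho**2) * rng.standard_normal(n - 1)
for k in range(n - 1):
    x[k + 1] = rho * x[k] + noise[k]

empirical = float(np.mean(np.sign(x[1:]) != np.sign(x[:-1])))
predicted = mean_crossings_per_step(rho)
```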
Automated Interval velocity picking for Atlantic Multi-Channel Seismic Data
NASA Astrophysics Data System (ADS)
Singh, Vishwajit
2016-04-01
This paper describes the challenges in developing and testing a fully automated routine for measuring interval velocities from multi-channel seismic data. Several approaches are employed to build an interactive algorithm that picks interval velocities across 1000-5000 contiguous normal-moveout (NMO) corrected gathers, replacing the interpreter's effort of manually picking coherent reflections. The detailed steps and pitfalls of picking interval velocities from seismic reflection time measurements are described for each approach. The key ingredients these approaches use at the velocity-analysis stage are the semblance grid and a starting model of interval velocity. Basin-hopping optimization is employed to drive the misfit function toward local minima, and a SLiding-Overlapping Window (SLOW) algorithm is designed to mitigate the non-linearity and ill-posedness of inverting root-mean-square velocities. Synthetic-data case studies assess the performance of the velocity picker, which generates models that fit the semblance peaks. A similar linear relationship between average depth and reflection time for the synthetic and estimated models supports using the picked interval velocities as the starting model for full waveform inversion, to recover a more accurate velocity structure of the subsurface. The remaining challenges are (1) building an accurate starting model for projecting a more accurate subsurface velocity structure, and (2) reducing the computational cost of the algorithm by pre-calculating the semblance grid, to make automatic picking more feasible.
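Although the paper's SLOW/basin-hopping routine is more elaborate, the underlying conversion from picked RMS (stacking) velocities to interval velocities is conventionally the standard Dix equation; a minimal sketch with invented picks (this is the textbook relation, not necessarily the paper's exact implementation):

```python
import math

def dix_interval_velocity(t_rms, v_rms):
    """Convert an RMS-velocity profile to interval velocities with the
    standard Dix equation; t_rms are two-way times (s), v_rms in m/s.
    Small time differences make the result ill-posed, which is why
    robust pickers regularize this step."""
    v_int = []
    for i in range(1, len(t_rms)):
        num = v_rms[i]**2 * t_rms[i] - v_rms[i - 1]**2 * t_rms[i - 1]
        v_int.append(math.sqrt(num / (t_rms[i] - t_rms[i - 1])))
    return v_int

# Hypothetical three-pick profile:
t = [0.5, 1.0, 1.6]
v = [1500.0, 1800.0, 2100.0]
layers = dix_interval_velocity(t, v)
```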
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-01-01
The paper demonstrates the possibility of calculating the characteristics of the flow of visitors to mass events passing through checkpoints. The mathematical model is based on a non-stationary queuing system (NQS) in which the dependence of the request input rate on time is described by a function chosen so that its properties are similar to the real dependence of the arrival rate of visitors to a stadium for football matches. A piecewise-constant approximation of this function is used when performing statistical modeling of the NQS. The authors calculated the dependence of the queue length and of the waiting time for service (time in queue) on time for different laws. The time required to serve the entire queue and the number of visitors entering the stadium by the beginning of the match were calculated as well. We found the dependence of the macroscopic quantitative characteristics of the NQS on the number of averaging sections of the input rate.
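A minimal time-stepped sketch of such a non-stationary queue, with a piecewise-constant arrival rate, a single aggregate checkpoint, and invented rate values (the paper's model and rate function are more detailed):

```python
import random

def simulate_queue(rates, seg_len, service_rate, seed=4):
    """Time-stepped simulation of a non-stationary queue. `rates` gives
    the piecewise-constant arrival rate (visitors/step) on consecutive
    segments of `seg_len` steps; the checkpoint serves at most
    `service_rate` visitors per step. Returns the queue-length history."""
    rng = random.Random(seed)
    queue, history = 0.0, []
    for rate in rates:
        for _ in range(seg_len):
            # Random arrivals approximated by binomial thinning
            # (mean = rate, valid for rate <= 10):
            arrivals = sum(rng.random() < rate / 10 for _ in range(10))
            queue = max(0.0, queue + arrivals - service_rate)
            history.append(queue)
    return history

# Hypothetical pre-match profile: arrivals ramp up, peak, then fall off.
hist = simulate_queue(rates=[1.0, 3.0, 6.0, 3.0, 1.0], seg_len=100,
                      service_rate=4.0)
peak = max(hist)
```

The queue only grows during the segment whose arrival rate exceeds the service rate, then drains; the time to serve the entire queue is read off as the first step after the peak at which the history returns to zero.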
Pavlickova, Hana; Bremner, Stephen A; Priebe, Stefan
2015-08-01
A recent cluster-randomized controlled trial found that offering financial incentives improves adherence to long-acting injectable antipsychotics (LAIs). The present study investigates whether the impact of incentives diminishes over time and whether the improvement in adherence is linked to the amount of incentives offered. Seventy-three teams with 141 patients with psychotic disorders (diagnosed using ICD-10) were randomized to the intervention or control group. Over 1 year, patients in the intervention group received £15 (US $23) for each LAI, while control patients received treatment as usual. Adherence levels, i.e., the percentage of prescribed LAIs that were received, were calculated for quarterly intervals. The amount of incentives offered was calculated from the treatment cycle at baseline. Multilevel models were used to examine the time course of the effect of incentives and the effect of the amount of incentives offered on adherence. Adherence increased in both the intervention and the control group over time by an average of 4.2% per quarterly interval (95% CI, 2.8%-5.6%; P < .001). Despite this general increase, adherence in the intervention group remained improved compared to the control group by between 11% and 14% per quarterly interval. There was no interaction effect between time and treatment group. Further, a higher total amount of incentives was associated with poorer adherence (β_bootstrapped = -0.11; 95% CI_bootstrapped, -0.20 to -0.01; P = .023). A substantial effect of financial incentives on adherence to LAIs occurs within the first 3 months of the intervention and is sustained over 1 year. A higher total amount of incentives does not increase the effect. ISRCTN.com identifier: ISRCTN77769281 and UKCRN.org identifier: 7033. © Copyright 2015 Physicians Postgraduate Press, Inc.
NASA Technical Reports Server (NTRS)
Robinson, John E.
2014-01-01
The Federal Aviation Administration's Next Generation Air Transportation System will combine advanced air traffic management technologies, performance-based procedures, and state-of-the-art avionics to maintain efficient operations throughout the entire arrival phase of flight. Flight deck Interval Management (FIM) operations are expected to use sophisticated airborne spacing capabilities to meet precise in-trail spacing from top-of-descent to touchdown. Recent human-in-the-loop simulations by the National Aeronautics and Space Administration have found that selection of the assigned spacing goal using the runway schedule can lead to premature interruptions of the FIM operation during periods of high traffic demand. This study compares three methods for calculating the assigned spacing goal for a FIM operation that is also subject to time-based metering constraints. The particular paradigms investigated include: one based upon the desired runway spacing interval, one based upon the desired meter fix spacing interval, and a composite method that combines both intervals. These three paradigms are evaluated for the primary arrival procedures to Phoenix Sky Harbor International Airport using the entire set of Rapid Update Cycle wind forecasts from 2011. For typical meter fix and runway spacing intervals, the runway- and meter fix-based paradigms exhibit moderate FIM interruption rates due to their inability to consider multiple metering constraints. The addition of larger separation buffers decreases the FIM interruption rate but also significantly reduces the achievable runway throughput. The composite paradigm causes no FIM interruptions, and maintains higher runway throughput more often than the other paradigms. A key implication of the results with respect to time-based metering is that FIM operations using a single assigned spacing goal will not allow reduction of the arrival schedule's excess spacing buffer. 
Alternative solutions for conducting the FIM operation in a manner more compatible with the arrival schedule are discussed in detail.
A validation of ground ambulance pre-hospital times modeled using geographic information systems.
Patel, Alka B; Waters, Nigel M; Blanchard, Ian E; Doig, Christopher J; Ghali, William A
2012-10-03
Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine whether the modeling assumptions proposed in prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0, and the response interval was derived using previously established methods. These GIS-derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were longer than the average previously reported by 7-8 minutes. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area.
The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a non-US context. The preference for researchers should be to use actual EMS trip records from the proposed research study area. In the absence of EMS trip data researchers should determine which modeling assumptions more accurately reflect the EMS protocols across their study area.
Zinka, Bettina; Kandlbinder, Robert; Schupfner, Robert; Haas, Gerald; Wolfbeis, Otto S; Graw, Matthias
2012-01-01
Reliable determination of the time since death in human skeletons or single bones is often limited by methodological difficulties. Determination of the specific activity ratio of natural radionuclides, in particular of 232Th (thorium), 228Th and 228Ra (radium), appears to be a promising new method to calculate the post-mortem interval. These radionuclides are incorporated by every human being, mainly from food, so an individual's uptake of radionuclides ends at death. The decay of 232Th, however, continues to produce 228Ra and, further along its decay series, 228Th, which is continuously built up in the bones. Thus, at different times after death, different activity ratios of 228Th to 228Ra will develop in bone. It should therefore be possible to calculate the time since death of an individual by first analysing the specific activities of 228Th and 228Ra in the bones of the deceased and then determining the 228Th/228Ra activity ratio, which can be assigned to a certain post-mortem interval.
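The buildup described above can be sketched with the Bateman solution for the Ra → Th branch; the half-lives below are the standard values (5.75 y for 228Ra, 1.91 y for 228Th), but the at-death activity ratio and the neglect of continued 228Ra production from the long-lived 232Th parent are simplifying assumptions, not the authors' calibration.

```python
import math

L_RA = math.log(2) / 5.75   # 228Ra decay constant (1/years)
L_TH = math.log(2) / 1.91   # 228Th decay constant (1/years)

def th_ra_activity_ratio(t, ratio0=0.8):
    """228Th/228Ra activity ratio t years after death, from the Bateman
    equations for the Ra->Th branch. ratio0 (the ratio at death) is a
    hypothetical value; production of fresh 228Ra from the 232Th parent
    is ignored here."""
    a_ra = math.exp(-L_RA * t)
    a_th = ratio0 * math.exp(-L_TH * t) + \
        L_TH / (L_TH - L_RA) * (math.exp(-L_RA * t) - math.exp(-L_TH * t))
    return a_th / a_ra

def post_mortem_interval(ratio, lo=0.0, hi=25.0):
    """Invert the ratio-vs-time curve by bisection (valid while the
    ratio is still rising monotonically toward its equilibrium value)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if th_ra_activity_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With these constants the ratio rises from its at-death value toward the transient-equilibrium limit λTh/(λTh − λRa) ≈ 1.5, which is what makes the ratio usable as a clock over the first couple of decades.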
Poser, H; Russello, G; Zanella, A; Bellini, L; Gelli, D
2011-12-01
Echocardiographic evaluation was performed in six healthy young adult non-sedated terrapins (Trachemys scripta elegans). The best imaging quality was obtained through the right cervical window. Base-apex inflow and outflow views were recorded; ventricular size, ventricular wall thickness and the ventricular outflow tract were measured, and fractional shortening was calculated. Pulsed-wave Doppler interrogation enabled the diastolic biphasic atrio-ventricular flow and the systolic ventricular outflow patterns to be recorded. The following Doppler-derived functional parameters were calculated: early diastolic (E) and late diastolic (A) wave peak velocities, E/A ratio, ventricular outflow systolic peak and mean velocities and gradients, Velocity-Time Integral, acceleration and deceleration times, and Ejection Time. For each parameter the mean, standard deviation and 95% confidence interval were calculated. Echocardiography proved a useful and easy-to-perform diagnostic tool in this poorly known species, which presents difficulties during evaluation.
Epstein, R H; Dexter, F
2012-09-01
Perioperative interruptions generated electronically from anaesthesia information management systems (AIMS) can provide useful feedback, but may adversely affect task performance if distractions occur at inopportune moments. Ideally such interruptions would occur only at times when their impact would be minimal. In this study of AIMS data, we evaluated the times of comments, drugs, fluids and periodic assessments (e.g. electrocardiogram diagnosis and train-of-four) to develop recommendations for the timing of interruptions during the intraoperative period. The 39,707 cases studied were divided into intervals delimited by: 1) entering the operating room; 2) induction; 3) intubation; 4) surgical incision; and 5) end of surgery. Five-minute intervals of no documentation were determined for each case. The offsets from the start of each interval when >50% of ongoing cases had completed initial documentation were calculated (MIN50). The primary endpoint for each interval was the percentage of all cases still ongoing at MIN50. The results showed that the intervals from entering the operating room to induction and from induction to intubation were unsuitable for interruptions, confirming prior observational studies of anaesthesia workload. At least 13 minutes after surgical incision was the most suitable time for interruptions, with 92% of cases still ongoing. Timing was minimally affected by the type of anaesthesia, surgical facility, surgical service, prone positioning or scheduled case duration. The implication of our results is that for mediated interruptions, waiting at least 13 minutes after the start of surgery is appropriate. Although we used AIMS data, operating room information system data is also suitable.
Nonparametric methods in actigraphy: An update
Gonçalves, Bruno S.B.; Cavalcanti, Paula R.A.; Tavares, Gracilene R.; Campos, Tania F.; Araujo, John F.
2014-01-01
Circadian rhythmicity in humans has been well studied using actigraphy, a method of measuring gross motor movement. As actigraphic technology continues to evolve, it is important for data analysis to keep pace with new variables and features. Our objective is to study the behavior of two variables, interdaily stability (IS) and intradaily variability (IV), which describe the rest-activity rhythm. Simulated data and actigraphy data from humans, rats, and marmosets were used in this study. We modified the method of calculation for IV and IS by varying the time interval of analysis, and for each variable we calculated the average value over the set of intervals (IVm and ISm). Simulated data showed that (1) synchronization analysis depends on sample size, and (2) fragmentation is independent of the amplitude of the generated noise. We were able to obtain a significant difference in the fragmentation patterns of stroke patients using the IVm variable, a difference the variable IV60 did not identify. Rhythmic synchronization of activity and rest was significantly higher in young subjects than in adults with Parkinson's when using the ISm variable; however, this difference was not seen using IS60. We propose an updated format to calculate rhythmic fragmentation, including two additional optional variables. These alternative methods of nonparametric analysis aim to more precisely detect sleep-wake cycle fragmentation and synchronization. PMID:26483921
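The two base variables can be computed from an activity series as follows; this sketch uses the conventional definitions on an hourly series (the IS60/IV60 case) with synthetic data, not the authors' modified multi-interval IVm/ISm averaging.

```python
import numpy as np

def interdaily_stability(x, period=24):
    """IS: variance of the average daily (period-bin) profile relative
    to the total variance; near 1 for a perfectly repeating rhythm."""
    x = np.asarray(x, float)
    n = len(x)
    hourly_means = np.array([x[h::period].mean() for h in range(period)])
    return (n * ((hourly_means - x.mean())**2).sum() / period) / \
        ((x - x.mean())**2).sum()

def intradaily_variability(x):
    """IV: first-difference variance relative to the total variance;
    higher values indicate a more fragmented rest-activity rhythm
    (near 2 for white noise)."""
    x = np.asarray(x, float)
    n = len(x)
    return (n * (np.diff(x)**2).sum()) / ((n - 1) * ((x - x.mean())**2).sum())

# Hypothetical week of hourly activity counts: a clean 24-h sinusoidal
# rhythm versus pure noise of the same mean and spread.
t = np.arange(24 * 7)
rhythm = 50 + 40 * np.sin(2 * np.pi * t / 24)
rng = np.random.default_rng(5)
noise = 50 + 40 * rng.standard_normal(len(t))
```

The authors' IVm/ISm variants repeat this calculation for several bin widths and average the results, which is what makes them more sensitive to fragmentation than the single-interval IV60/IS60.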
Wang, Qiuying; Guo, Zheng; Sun, Zhiguo; Cui, Xufei; Liu, Kaiyue
2018-01-01
Pedestrian-positioning technology based on the foot-mounted micro inertial measurement unit (MIMU) plays an important role in the field of indoor navigation and has received extensive attention in recent years. However, the positioning accuracy of the inertial-based pedestrian-positioning method degrades rapidly because of the relatively low measurement accuracy of the sensor. The zero-velocity update (ZUPT) is an error correction method proposed to address this cumulative error: because the foot is regularly stationary during ordinary gait, the velocity error can be reset at each stance phase, reducing the position error growth of the system. However, the traditional ZUPT performs poorly when pedestrians move faster, because the time of foot touchdown is then short, which decreases the positioning accuracy. Considering these problems, a forward and reverse calculation method based on adaptive zero-velocity interval adjustment for the foot-mounted MIMU location method is proposed in this paper. To address the inaccuracy of the zero-velocity interval detector during fast pedestrian movement, when the contact time of the foot on the ground is short, an adaptive zero-velocity interval detection algorithm based on fuzzy logic reasoning is presented. In addition, to improve the effectiveness of the ZUPT algorithm, forward and reverse multiple solutions are presented. Finally, following the basic principles and derivation of the method, the MTi-G710 produced by the XSENS company is used to complete the test. The experimental results verify the correctness and applicability of the proposed method. PMID:29883399
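As background, the fixed-threshold stance detection that the paper's fuzzy-logic detector improves on can be sketched as follows (our illustration; the gravity tolerance and minimum run length are hypothetical tuning values, not the paper's):

```python
import numpy as np

def zero_velocity_intervals(acc, g=9.81, tol=0.3, min_len=5):
    """Flag samples where the accelerometer magnitude stays near gravity
    (foot stationary), then keep only runs of at least min_len samples."""
    acc = np.asarray(acc, dtype=float)
    mag = np.linalg.norm(acc, axis=1)
    still = np.abs(mag - g) < tol
    intervals, start = [], None
    for i, s in enumerate(still):
        if s and start is None:
            start = i                      # a stance run begins
        elif not s and start is not None:
            if i - start >= min_len:
                intervals.append((start, i))
            start = None
    if start is not None and len(still) - start >= min_len:
        intervals.append((start, len(still)))
    return intervals

# synthetic gait: 10 stance samples, 10 swing samples, 10 stance samples
stance = np.tile([0.0, 0.0, 9.81], (10, 1))
swing = np.tile([3.0, 0.0, 12.0], (10, 1))
acc = np.vstack([stance, swing, stance])
ivs = zero_velocity_intervals(acc)
```

During fast walking the stance runs shorten, which is exactly where a fixed `min_len`/`tol` pair starts missing intervals; the paper's fuzzy-logic detector adapts these decisions instead.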
H. T. Schreuder; M. S. Williams
2000-01-01
In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with those based on the classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...
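For readers unfamiliar with the comparison, here is a hedged sketch (ours, on simulated skewed data, not the authors' forest populations) of a percentile-bootstrap interval next to the classical t interval for a mean:

```python
import numpy as np

rng = np.random.default_rng(42)
# skewed positive data, loosely mimicking per-plot volumes
plot_volumes = rng.lognormal(mean=3.0, sigma=0.5, size=40)

# classical t interval for the mean
n = len(plot_volumes)
m, s = plot_volumes.mean(), plot_volumes.std(ddof=1)
t975 = 2.023  # two-sided 95% t quantile for df = 39 (hard-coded for brevity)
t_ci = (m - t975 * s / np.sqrt(n), m + t975 * s / np.sqrt(n))

# percentile bootstrap interval: resample plots with replacement
boot_means = np.array([
    rng.choice(plot_volumes, size=n, replace=True).mean()
    for _ in range(4000)
])
boot_ci = (np.percentile(boot_means, 2.5), np.percentile(boot_means, 97.5))
```

On skewed data the bootstrap interval is typically asymmetric about the sample mean, which is one motivation for comparing it against the symmetric classical interval.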
NASA Astrophysics Data System (ADS)
Sukhanov, AY
2017-02-01
We present an approximation of the Voigt contour for certain parameter intervals. For example, for the interval with y less than 0.02 and absolute value of x less than 1.6, a simple formula gives a relative error of less than 0.1%; for some other intervals, we suggest using Hermite quadrature.
Are EUR and GBP different words for the same currency?
NASA Astrophysics Data System (ADS)
Ivanova, K.; Ausloos, M.
2002-05-01
The British Pound (GBP) is not part of the Euro (EUR) monetary system. In order to find arguments on whether the GBP should join the EUR or not, correlations are calculated between GBP exchange rates with respect to various currencies: USD, JPY, CHF, DKK, the currencies forming the EUR, and a reconstructed EUR, for the time interval from 1993 till June 30, 2000. The distribution of fluctuations of the exchange rates is Gaussian in the central part of the distribution, but has fat tails for the large-size fluctuations. Within the Detrended Fluctuation Analysis (DFA) statistical method, the power law behavior describing the root-mean-square deviation from a linear trend of the exchange rate fluctuations is obtained as a function of time for the time interval of interest. The time-dependent evolution of the exponent of the exchange rate fluctuations is given. Statistical considerations imply that the GBP is already behaving as a true EUR.
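The DFA step, computing the root-mean-square deviation from a local linear trend as a function of window size, can be sketched as follows (our illustration on white noise, for which the expected scaling exponent is about 0.5):

```python
import numpy as np

def dfa(x, scales):
    """Root-mean-square deviation from a per-box linear trend, per box size."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated series
    F = []
    for s in scales:
        nseg = len(profile) // s
        msq = []
        for k in range(nseg):
            seg = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)        # local linear detrending
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(msq)))
    return np.array(F)

rng = np.random.default_rng(0)
increments = rng.standard_normal(4096)          # i.i.d. "returns"
scales = np.array([8, 16, 32, 64, 128])
F = dfa(increments, scales)
# slope of log F vs log s: ~0.5 for uncorrelated increments
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Fitting the slope over a sliding date window, rather than once, yields the time-dependent exponent evolution discussed in the abstract.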
Graphing within-subjects confidence intervals using SPSS and S-Plus.
Wright, Daniel B
2007-02-01
Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias-corrected and accelerated bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.
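A sketch of the Loftus and Masson (1994) computation (ours, in Python rather than the article's SPSS/S-Plus code): the interval half-width is based on the subject-by-condition interaction mean square, so large between-subject offsets do not inflate it:

```python
import numpy as np

def loftus_masson_halfwidth(Y, t_crit):
    """Within-subjects CI half-width per Loftus & Masson (1994):
    mean_j +/- t * sqrt(MS_SxC / n), for Y of shape (subjects, conditions)."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    subj = Y.mean(axis=1, keepdims=True)
    cond = Y.mean(axis=0, keepdims=True)
    grand = Y.mean()
    resid = Y - subj - cond + grand            # subject x condition interaction
    ms_sxc = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return t_crit * np.sqrt(ms_sxc / n)

# toy data: huge subject offsets, tiny condition effect
Y = np.array([[10.0, 11.0],
              [20.0, 21.2],
              [30.0, 30.9]])
half = loftus_masson_halfwidth(Y, t_crit=4.303)   # t(.975) with df = 2
```

Despite between-subject standard deviations near 10, the within-subjects half-width here is well under 1, which is the whole point of these intervals for repeated-measures designs.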
Litwin, Ronald J.; Smoot, Joseph P.; Pavich, Milan J.; Markewich, Helaine W.; Oberg, Erik; Helwig, Ben; Steury, Brent; Santucci, Vincent L.; Durika, Nancy J.; Rybicki, Nancy B.; Engelhardt, Katharina M.; Sanders, Geoffrey; Verardo, Stacey; Elmore, Andrew J.; Gilmer, Joseph
2011-01-01
Photoanalysis of time-sequence aerial photographs of Dyke Marsh enabled us to calculate shoreline erosion estimates for this marsh over 19 years (1987-2006), as well as to quantify overall marsh acreage for 6 calendar years spanning an ~70 year interval (1937-2006). Photo overlay of a historic map enabled us to extend our whole-marsh acreage calculations back to 1883. Both sets of analyses were part of a geologic framework study in support of current efforts by the National Park Service (NPS) to restore this urban wetland. Two time intervals were selected for our shoreline erosion analyses, based on image quality and availability: 1987 to 2002, and 2002 to 2006. The more recent time interval shows a marked increase in erosion in the southern part of Dyke Marsh, following a wave-induced breach of a small peninsula that had protected its southern shoreline. Field observations and analyses of annual aerial imagery between 1987 and 2006 revealed a progressive increase in wave-induced erosion that presently is deconstructing Hog Island Gut, the last significant tidal creek network within the Dyke Marsh. These photo analyses documented an overall average westward shoreline loss of 6.0 to 7.8 linear feet per year along the Potomac River during this 19-year time interval. Additionally, photographic evidence documented that lateral erosion now is capturing existing higher order tributaries in the Hog Island Gut. Wave-driven stream piracy is fragmenting the remaining marsh habitat, and therefore its connectivity, relatively rapidly, causing the effective mouth of the Hog Island Gut tidal network to retreat headward visibly over the past several decades. Based on our estimates of total marsh area in the Dyke Marsh derived from 1987 aerial imagery, as much as 12 percent of the central part of the marsh has eroded in the 19 year period we studied (or ~7.5 percent of the original ~78.8 acres of 1987 marshland). 
Shoreline loss estimates for marsh parcels north and south of our study area have not yet been analyzed, although annual aerial photos from 1987 to 2002 confirm visible progressive shoreline loss in those areas over this same time interval.
Reference intervals for 24 laboratory parameters determined in 24-hour urine collections.
Curcio, Raffaele; Stettler, Helen; Suter, Paolo M; Aksözen, Jasmin Barman; Saleh, Lanja; Spanaus, Katharina; Bochud, Murielle; Minder, Elisabeth; von Eckardstein, Arnold
2016-01-01
Reference intervals for many laboratory parameters determined in 24-h urine collections are either not publicly available, or are based on small numbers, not sex-specific, or not derived from a representative sample. Osmolality and concentrations or enzymatic activities of sodium, potassium, chloride, glucose, creatinine, citrate, cortisol, pancreatic α-amylase, total protein, albumin, transferrin, immunoglobulin G, α1-microglobulin, α2-macroglobulin, as well as porphyrins and their precursors (δ-aminolevulinic acid and porphobilinogen), were determined in 241 24-h urine samples of a population-based cohort of asymptomatic adults (121 men and 120 women). For 16 of these 24 parameters, creatinine-normalized ratios were calculated based on 24-h urine creatinine. The reference intervals for these parameters were calculated according to the CLSI C28-A3 statistical guidelines. In contrast to most published reference intervals, which do not stratify for sex, the reference intervals of 12 of 24 laboratory parameters in 24-h urine collections, and of eight of 16 parameters as creatinine-normalized ratios, differed significantly between men and women. For six parameters calculated as 24-h urine excretion and four parameters calculated as creatinine-normalized ratios, no reference intervals had been published before. For some parameters we found significant and relevant deviations from previously reported reference intervals, most notably for 24-h urine cortisol in women. Ten 24-h urine parameters showed weak or moderate sex-specific correlations with age. By applying up-to-date analytical methods and clinical chemistry analyzers to 24-h urine collections from a large population-based cohort, we provide as yet the most comprehensive set of sex-specific reference intervals calculated according to CLSI guidelines for parameters determined in 24-h urine collections.
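The core of the nonparametric CLSI-style calculation is taking the central 95% of the reference sample, computed separately per sex partition. A minimal sketch (ours, on hypothetical simulated values, not the study's data):

```python
import numpy as np

def reference_interval(values):
    """Nonparametric central 95% reference interval (CLSI C28-A3 style):
    the 2.5th and 97.5th percentiles of the reference sample."""
    v = np.asarray(values, dtype=float)
    return np.percentile(v, 2.5), np.percentile(v, 97.5)

rng = np.random.default_rng(1)
# hypothetical 24-h urine sodium excretion (mmol/day) for 120 women
na_women = rng.normal(loc=130, scale=35, size=120)
low, high = reference_interval(na_women)
```

The guideline's recommended minimum of about 120 reference subjects per partition is one reason the study's 121 men and 120 women allow separate male and female intervals.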
Cardiotachometer displays heart rate on a beat-to-beat basis
NASA Technical Reports Server (NTRS)
Rasquin, J. R.; Smith, H. E.; Taylor, R. A.
1974-01-01
Electronics for this system may be chosen so that complete calculation and display can be accomplished in a few milliseconds, far less than the interval of even the fastest heartbeat. Accuracy may be increased, if desired, by using a higher-frequency timing oscillator, although this will require larger-capacity registers at increased cost.
Evaluation of inhomogeneities of repolarization in patients with psoriasis vulgaris
İnci, Sinan; Aksan, Gökhan; Nar, Gökay; Yüksel, Esra Pancar; Ocal, Hande Serra; Çapraz, Mustafa; Yüksel, Serkan; Şahin, Mahmut
2016-01-01
Introduction The arrhythmia potential has not been investigated adequately in psoriatic patients. In this study, we assessed the ventricular repolarization dispersion, using the Tp-e interval and the Tp-e/QT ratio, and investigated the association with inflammation. Material and methods Seventy-one psoriasis vulgaris patients and 70 age- and gender-matched healthy individuals were enrolled in the study. The severity of the disease was calculated using Psoriasis Area and Severity Index (PASI) scoring. QT dispersion (QTd) was defined as the difference between the maximum and minimum QT intervals. The Tp-e interval was defined as the interval from the peak of the T wave to the end of the T wave. The Tp-e interval was corrected for heart rate. The Tp-e/QT ratio was calculated using these measurements. Results There were no significant differences between the groups with respect to basal clinical and laboratory characteristics (p > 0.05). The Tp-e interval, the corrected Tp-e interval (cTp-e) and the Tp-e/QT ratio were significantly higher in psoriasis patients compared to the control group (78.5 ±8.0 ms vs. 71.4 ±7.6 ms, p < 0.001; 86.3 ±13.2 ms vs. 77.6 ±9.0 ms, p < 0.001; and 0.21 ±0.02 vs. 0.19 ±0.02, p < 0.001, respectively). A significant correlation was detected between the cTp-e time and the Tp-e/QT ratio and the PASI score in the group of psoriatic patients (r = 0.51, p < 0.001; r = 0.59, p < 0.001, respectively). Conclusions In our study, we detected a significant increase in the Tp-e interval and the Tp-e/QT ratio in patients with psoriasis vulgaris. The Tp-e interval and the Tp-e/QT ratio may be predictors for ventricular arrhythmias in patients with psoriasis vulgaris. PMID:27904512
Numerical determination of vertical water flux based on soil temperature profiles
NASA Astrophysics Data System (ADS)
Tabbagh, Alain; Cheviron, Bruno; Henine, Hocine; Guérin, Roger; Bechkit, Mohamed-Amine
2017-07-01
High sensitivity temperature sensors (0.001 K sensitivity Pt100 thermistors), positioned at intervals of a few centimetres along a vertical soil profile, allow temperature measurements to be made which are sensitive to water flux through the soil. The development of high data storage capabilities now makes it possible to carry out in situ temperature recordings over long periods of time. By directly applying numerical models of convective and conductive heat transfer to experimental data recorded as a function of depth and time, it is possible to calculate Darcy's velocity from the convection transfer term, thus allowing water infiltration/exfiltration through the soil to be determined as a function of time between fixed depths. In the present study we consider temperature data recorded at the Boissy-le-Châtel (Seine et Marne, France) experimental station between April 16th, 2009 and March 8th, 2010, at six different depths and 10-min time intervals. We make use of two numerical finite element models to solve the conduction/convection heat transfer equation and compare their merits. These two models allow us to calculate the corresponding convective flux rate every day using a group of three sensors. The comparison of the two series of calculated values centred at 24 cm shows reliable results for periods longer than 8 days. These results are transformed into infiltration/exfiltration values after determining the soil volumetric heat capacity. The comparison with the rainfall and evaporation data for periods of ten days shows close accordance with the behaviour of the system, which is governed by the rainfall and evaporation rates during winter and spring.
Incorporating temporal and clinical reasoning in a new measure of continuity of care.
Spooner, S. A.
1994-01-01
Previously described quantitative methods for measuring continuity of care have assumed that perfect continuity exists when a patient sees only one provider, regardless of the temporal pattern and clinical context of the visits. This paper describes an implementation of a new operational model of continuity, the Temporal Continuity Index (TCI), that takes into account time intervals between well visits in a pediatric residency continuity clinic. Ideal continuity in this model is achieved when intervals between visits are appropriate based on the age of the patient and the clinical context of the encounters. The fundamental concept in this model is the expectation interval, which is bounded by the maximum ideal follow-up interval for a visit and the maximum follow-up interval. This paper describes an initial implementation of the TCI model, compares TCI calculations with previous quantitative methods, and proposes its use as part of the assessment of resident education in outpatient settings. PMID:7950019
NASA Technical Reports Server (NTRS)
Reiff, P. H.; Spiro, R. W.; Wolf, R. A.; Kamide, Y.; King, J. H.
1985-01-01
It is pointed out that the maximum electrostatic potential difference across the polar cap, Phi, is a fundamental measure of the coupling between the solar wind and the earth's magnetosphere/ionosphere system. During the Coordinated Data Analysis Workshop (CDAW) 6 intervals, no suitably instrumented spacecraft was in an appropriate orbit to determine the polar-cap potential drop directly. However, two recently developed independent techniques make it possible to estimate the polar-cap potential drop for times when direct spacecraft data are not available. The present investigation is concerned with a comparison of cross-polar-cap potential drop estimates calculated for the two CDAW 6 intervals on the basis of these two techniques. In the case of one interval, the agreement between the potential drops and Joule heating rates is relatively good. In the second interval, however, the agreement is not very good. Explanations for this discrepancy are discussed.
Black, Andrew J.; Ross, Joshua V.
2013-01-01
The clinical serial interval of an infectious disease is the time between date of symptom onset in an index case and the date of symptom onset in one of its secondary cases. It is a quantity which is commonly collected during a pandemic and is of fundamental importance to public health policy and mathematical modelling. In this paper we present a novel method for calculating the serial interval distribution for a Markovian model of household transmission dynamics. This allows the use of Bayesian MCMC methods, with explicit evaluation of the likelihood, to fit to serial interval data and infer parameters of the underlying model. We use simulated and real data to verify the accuracy of our methodology and illustrate the importance of accounting for household size. The output of our approach can be used to produce posterior distributions of population level epidemic characteristics. PMID:24023679
Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.
Kis, Maria
2005-01-01
In this paper we demonstrate the application of time series models in medical research. Hungarian mortality rates were analysed by autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. Mortality data may be analysed by time series methods such as ARIMA modelling. This method is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, the estimation from White's theory, and the continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we conclude that the intervals obtained by the continuous-time estimation model were much smaller than those from the other estimations. We present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia. We decompose the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This supports the seasonal occurrence of childhood leukaemia in Hungary.
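The normal-approximation confidence interval for a first-order autoregressive parameter, one of the three methods compared, can be sketched as follows (our illustration on a simulated AR(1) series, not the Hungarian mortality data):

```python
import numpy as np

def ar1_confint(x, z=1.96):
    """Least-squares AR(1) fit with the asymptotic-normal 95% CI
    phi_hat +/- z * sqrt((1 - phi_hat^2) / n)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    se = np.sqrt((1 - phi ** 2) / len(x))
    return phi, (phi - z * se, phi + z * se)

# simulate an AR(1) series x_t = 0.6 * x_{t-1} + e_t
rng = np.random.default_rng(7)
n, phi_true = 2000, 0.6
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + e[t]
phi_hat, ci = ar1_confint(x)
```

The paper's finding is that the continuous-time estimation gives narrower intervals than this asymptotic-normal construction on the same data.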
van Oostrum, Jeroen M; Van Houdenhoven, Mark; Vrielink, Manon M J; Klein, Jan; Hans, Erwin W; Klimek, Markus; Wullink, Gerhard; Steyerberg, Ewout W; Kazemier, Geert
2008-11-01
Hospitals that perform emergency surgery during the night (e.g., from 11:00 pm to 7:30 am) face decisions on optimal operating room (OR) staffing. Emergency patients need to be operated on within a predefined safety window to decrease morbidity and improve their chances of full recovery. We developed a process to determine the optimal OR team composition during the night, such that staffing costs are minimized, while providing adequate resources to start surgery within the safety interval. A discrete event simulation in combination with modeling of safety intervals was applied. Emergency surgery was allowed to be postponed safely. The model was tested using data from the main OR of Erasmus University Medical Center (Erasmus MC). Two outcome measures were calculated: violation of safety intervals and frequency with which OR and anesthesia nurses were called in from home. We used the following input data from Erasmus MC to estimate distributions of all relevant parameters in our model: arrival times of emergency patients, durations of surgical cases, length of stay in the postanesthesia care unit, and transportation times. In addition, surgeons and OR staff of Erasmus MC specified safety intervals. Reducing in-house team members from 9 to 5 increased the fraction of patients treated too late by 2.5% as compared to the baseline scenario. Substantially more OR and anesthesia nurses were called in from home when needed. The use of safety intervals benefits OR management during nights. Modeling of safety intervals substantially influences the number of emergency patients treated on time. Our case study showed that by modeling safety intervals and applying computer simulation, an OR can reduce its staff on call without jeopardizing patient safety.
Program for Weibull Analysis of Fatigue Data
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2005-01-01
A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull distribution parameters; (2) Data for contour plots of relative likelihood for the two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this software is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
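Items (4) to (6) rest on profile likelihood. A minimal uncensored sketch (ours, in Python rather than the program's Fortran; the actual program also handles type-I censoring):

```python
import numpy as np

def weibull_profile_ll(k, x):
    """Weibull log-likelihood at shape k, profiled over the scale parameter.
    With lam = (mean(x^k))^(1/k), the term sum((x/lam)^k) reduces to n."""
    n = len(x)
    lam = np.mean(x ** k) ** (1.0 / k)
    return n * np.log(k) - n * k * np.log(lam) + (k - 1) * np.sum(np.log(x)) - n

rng = np.random.default_rng(3)
x = rng.weibull(2.0, size=200) * 1000.0     # simulated fatigue lives, shape = 2

ks = np.linspace(0.5, 5.0, 2000)
ll = np.array([weibull_profile_ll(k, x) for k in ks])
k_hat = ks[np.argmax(ll)]
# likelihood-ratio 95% CI: shapes whose profile LL is within chi2_1(0.95)/2 = 1.92
inside = ks[ll > ll.max() - 1.92]
ci = (inside.min(), inside.max())
```

The same drop-of-1.92 rule, applied to the profile likelihood of a percentile instead of the shape, gives the percentile intervals of item (6).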
Dynamic MRI for distinguishing high-flow from low-flow peripheral vascular malformations.
Ohgiya, Yoshimitsu; Hashimoto, Toshi; Gokan, Takehiko; Watanabe, Shouji; Kuroda, Masayoshi; Hirose, Masanori; Matsui, Seishi; Nobusawa, Hiroshi; Kitanosono, Takashi; Munechika, Hirotsugu
2005-11-01
The purpose of our study was to assess the usefulness of dynamic MRI in distinguishing high-flow vascular malformations from low-flow vascular malformations, which do not need angiography for treatment. Between September 2001 and January 2003, 16 patients with peripheral vascular malformations (six high-flow and 10 low-flow) underwent conventional and dynamic MRI. The temporal resolution of dynamic MRI was 5 sec. Time intervals between the beginning of enhancement of an arterial branch in the vicinity of a lesion in the same slice and the onset of enhancement in the lesion were calculated. We defined these time intervals as "artery-lesion enhancement time." Time intervals between the onset of enhancement in the lesion and the time of the maximal percentage of enhancement above baseline of the lesion within 120 sec were measured. We defined these time intervals as "contrast rise time" of the lesion. Diagnosis of the peripheral vascular malformations was based on angiographic or venographic findings. The mean artery-lesion enhancement time of the high-flow vascular malformations (3.3 sec [range, 0-5 sec]) was significantly shorter than that of the low-flow vascular malformations (8.8 sec [range, 0-20 sec]) (Mann-Whitney test, p < 0.05). The mean contrast rise time of the high-flow vascular malformations (5.8 sec [range, 5-10 sec]) was significantly shorter than that of the low-flow vascular malformations (88.4 sec [range, 50-100 sec]) (Mann-Whitney test, p < 0.01). Dynamic MRI is useful for distinguishing high-flow from low-flow vascular malformations, especially when the contrast rise time of the lesion is measured.
Atmospheric Fluctuation Measurements with the Palomar Testbed Interferometer
NASA Astrophysics Data System (ADS)
Linfield, R. P.; Lane, B. F.; Colavita, M. M.; PTI Collaboration
Observations of bright stars with the Palomar Testbed Interferometer, at a wavelength of 2.2 microns, have been used to measure atmospheric delay fluctuations. The delay structure function Dτ(Δt) was calculated for 66 scans (each >= 120 s in length) on seven nights in 1997 and one in 1998. For all except one scan, Dτ exhibited a clean power law shape over the time interval 50-500 msec. Over shorter time intervals, the effect of the delay line servo loop corrupts Dτ. Over longer time intervals (usually starting at > 1 s), the slope of Dτ decreases, presumably due to some combination of saturation (e.g. finite turbulent layer thickness) and the effect of the finite wind speed crossing time on our 110 m baseline. The mean power law slopes for the eight nights ranged from 1.16 to 1.36, substantially flatter than the value of 1.67 for three-dimensional Kolmogorov turbulence. Such sub-Kolmogorov slopes will result in atmospheric seeing (θ) that improves rapidly with increasing wavelength: θ ∝ λ^(1-2/β), where β is the observed power law slope of Dτ. The atmospheric errors in astrometric measurements with an interferometer will average down more quickly than in the Kolmogorov case.
Wei, Shaoceng; Kryscio, Richard J.
2015-01-01
Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient, cognitive states and death as a competing risk (Figure 1). Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript we apply a Semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. PMID:24821001
Realtime Multichannel System for Beat to Beat QT Interval Variability
NASA Technical Reports Server (NTRS)
Starc, Vito; Schlegel, Todd T.
2006-01-01
The measurement of beat-to-beat QT interval variability (QTV) shows clinical promise for identifying several types of cardiac pathology. However, until now, there has been no device capable of displaying, in real time on a beat-to-beat basis, changes in QTV in all 12 conventional leads in a continuously monitored patient. While several software programs have been designed to analyze QTV, heretofore, such programs have all involved only a few channels (at most) and/or have required laborious user interaction or offline calculations and postprocessing, limiting their clinical utility. This paper describes a PC-based ECG software program that, in real time, acquires, analyzes and displays QTV and also PQ interval variability (PQV) in each of the eight independent channels that constitute the 12-lead conventional ECG. The system also processes certain related signals that are derived from singular value decomposition and that help to reduce the overall effects of noise on the real-time QTV and PQV results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ojeda-Gonzalez, A.; Prestes, A.; Klausner, V.
Spatio-temporal entropy (STE) analysis is used as an alternative mathematical tool to identify possible magnetic cloud (MC) candidates. We analyze Interplanetary Magnetic Field (IMF) data using a time interval of only 10 days. We select a convenient data interval of 2500 records, moving forward in steps of 200 records until the end of the time series. For every data segment, the STE is calculated at each step. During an MC event, the STE reaches values close to zero. This extremely low value of STE is due to features of the MC structure. However, not all of the magnetic components in MCs have STE values close to zero at the same time. For this reason, we create a standardization index (the so-called Interplanetary Entropy, IE, index). This index is a worthwhile effort to develop new tools to help diagnose ICME structures. The IE was calculated using a time window of one year (1999), and it has a success rate of 70% compared with other identifiers of MCs. The unsuccessful cases (30%) are caused by small and weak MCs. The results show that the IE methodology identified 9 of 13 MCs, and emitted nine false alarm cases. In 1999, a total of 788 windows of 2500 values existed, meaning that the percentage of false alarms was 1.14%, which can be considered a good result. In addition, four time windows, each of 10 days, are studied, where the IE method was effective in finding MC candidates. As a novel result, two new MCs are identified in these time windows.
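The paper's STE is a specific spatio-temporal measure; as an illustration (ours) of the same sliding-window scheme, 2500-record windows advanced in 200-record steps with low entropy flagging coherent structure, here is a permutation-entropy stand-in on synthetic data:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3):
    """Normalized ordinal-pattern entropy: near 1 for noise, low for
    smooth, coherently varying signals."""
    counts = {}
    for i in range(len(x) - order + 1):
        key = tuple(np.argsort(x[i:i + order]))
        counts[key] = counts.get(key, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(order))

def sliding_entropy(series, window=2500, step=200):
    """The paper's windowing scheme: one entropy value per 2500-record
    window, advancing 200 records at a time."""
    return np.array([permutation_entropy(series[i:i + window])
                     for i in range(0, len(series) - window + 1, step)])

rng = np.random.default_rng(5)
noisy = rng.standard_normal(6000)                   # turbulence-like interval
smooth = np.sin(np.linspace(0, 4 * np.pi, 6000))    # slow coherent rotation, MC-like
e_noisy = sliding_entropy(noisy)
e_smooth = sliding_entropy(smooth)
```

An MC's smoothly rotating field components behave like the `smooth` series here: windows covering them drop to low entropy, which is the signature the IE index standardizes across components.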
Kaliszan, Michał
2013-09-01
This paper presents a verification of the thermodynamic model allowing an estimation of the time of death (TOD) by calculating the post mortem interval (PMI) based on a single eyeball temperature measurement at the death scene. The study was performed on 30 cases with known PMI, ranging from 1h 35min to 5h 15min, using pin probes connected to a high precision electronic thermometer (Dostmann-electronic). The measured eye temperatures ranged from 20.2 to 33.1°C. Rectal temperature was measured at the same time and ranged from 32.8 to 37.4°C. Ambient temperatures, which ranged from -1 to 24°C, environmental conditions (still air to light wind) and the amount of hair on the head were also recorded each time. PMI was calculated using a formula based on Newton's law of cooling, previously derived and successfully tested in comprehensive studies on pigs and a few human cases. Thanks to the significantly faster post mortem decrease of eye temperature, the residual or nonexistent plateau effect in the eye, and the practically negligible influence of body mass, TOD in the human death cases could be estimated with good accuracy. The three highest TOD estimation errors over post mortem intervals of up to around 5h were 1h 16min, 1h 14min and 1h 03min, while for the remaining 27 cases the error was not more than 47min. The mean error for all 30 cases was ±31min. All this indicates that the proposed method is quite precise in the early post mortem period, with an accuracy of ±1h for a 95% confidence interval. On the basis of the presented method, TOD can also be calculated at the death scene with the use of a proposed portable electronic device (TOD-meter). Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
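The underlying calculation inverts Newton's law of cooling for the elapsed time. A minimal sketch (ours; the cooling constant k below is a hypothetical placeholder, whereas the paper calibrates its own eyeball cooling constant from pig and human data):

```python
import numpy as np

def pmi_hours(T_eye, T_ambient, T0=37.0, k=0.28):
    """Invert Newton's law of cooling, T(t) = Ta + (T0 - Ta) * exp(-k * t),
    for the post mortem interval t in hours. k here is a HYPOTHETICAL
    eyeball cooling constant chosen only for illustration."""
    return -np.log((T_eye - T_ambient) / (T0 - T_ambient)) / k

# eye at 28.0 C, ambient 15 C: a PMI of roughly 1.9 h under these assumptions
t = pmi_hours(T_eye=28.0, T_ambient=15.0)
```

The single-measurement design works precisely because, as the abstract notes, the eye shows little or no plateau effect, so the exponential model applies from shortly after death.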
Onishi, Natsuko; Kataoka, Masako; Kanao, Shotaro; Sagawa, Hajime; Iima, Mami; Nickel, Marcel Dominik; Toi, Masakazu; Togashi, Kaori
2018-01-01
To evaluate the feasibility of ultrafast dynamic contrast-enhanced (UF-DCE) magnetic resonance imaging (MRI) with compressed sensing (CS) for the separate identification of breast arteries/veins and to perform temporal evaluations of breast arteries and veins with a focus on the association with ipsilateral cancers. Our Institutional Review Board approved this retrospective study. Twenty-five female patients who underwent UF-DCE MRI at 3T were included. UF-DCE MRI consisting of 20 continuous frames was acquired using a prototype 3D gradient-echo volumetric interpolated breath-hold sequence including a CS reconstruction: temporal resolution, 3.65 sec/frame; spatial resolution, 0.9 × 1.3 × 2.5 mm. Two readers analyzed 19 maximum intensity projection images reconstructed from subtracted images, separately identified breast arteries/veins and the earliest frame in which they were respectively visualized, and calculated the time interval between arterial and venous visualization (A-V interval) for each breast. In total, 49 breasts including 31 lesions (breast cancer, 16; benign lesion, 15) were identified. In 39 of the 49 breasts (breasts with cancers, 16; breasts with benign lesions, 10; breasts with no lesions, 13), both breast arteries and veins were separately identified. The A-V intervals for breasts with cancers were significantly shorter than those for breasts with benign lesions (P = 0.043) and no lesions (P = 0.007). UF-DCE MRI using CS enables the separate identification of breast arteries/veins. Temporal evaluations calculating the time interval between arterial and venous visualization might be helpful in the differentiation of ipsilateral breast cancers from benign lesions. Level of Evidence: 3. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018;47:97-104. © 2017 International Society for Magnetic Resonance in Medicine.
Wong, Ngai Sze; Wong, Ka Hing; Lee, Man Po; Tsang, Owen T Y; Chan, Denise P C; Lee, Shui Shan
2016-01-01
Undiagnosed infections account for the hidden proportion of HIV cases that have escaped public health surveillance. To assess the population risk of HIV transmission, we estimated the undiagnosed interval of each known infection in order to construct HIV incidence curves. We used modified back-calculation methods to estimate the seroconversion year for each diagnosed patient attending any one of the 3 HIV specialist clinics in Hong Kong. Three approaches were used, depending on the adequacy of CD4 data: (A) estimating each patient's pre-treatment CD4 depletion rate in a multilevel model; (B) projecting the seroconversion year by referencing seroconverters' CD4 depletion rates; or (C) projecting from the distribution of undiagnosed intervals estimated in (B). Factors associated with a long undiagnosed interval (>2 years) were examined in univariate analyses. Epidemic curves constructed from the estimated seroconversion data were evaluated by mode of transmission. Between 1991 and 2010, a total of 3695 adult HIV patients were diagnosed. The undiagnosed intervals were derived from methods (A) (28%), (B) (61%) and (C) (11%), respectively. The intervals ranged from 0 to 10 years, and shortened from 2001 onwards. Heterosexual infection, female sex, Chinese ethnicity and age >64 at diagnosis were associated with a long undiagnosed interval. Overall, the peaks of the new incidence curves were reached 4-6 years ahead of reported diagnoses, while their contours varied by mode of transmission. Characteristically, the epidemic growth among heterosexual males and females declined after 1998 with a slight rebound in 2004-2006, but that among MSM continued to rise after 1998. By determining the time of seroconversion, HIV epidemic curves can be reconstructed from clinical data to better illustrate trends in new infections. With the increasing coverage of antiretroviral therapy, the undiagnosed interval can add to the measures for assessing HIV transmission risk in the population.
A validation of ground ambulance pre-hospital times modeled using geographic information systems
2012-01-01
Background Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. Methods The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. Results There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were longer than the average previously reported by 7–8 minutes. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area. 
Conclusions The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a non-US context. The preference for researchers should be to use actual EMS trip records from the proposed research study area. In the absence of EMS trip data researchers should determine which modeling assumptions more accurately reflect the EMS protocols across their study area. PMID:23033894
Long-term persistence of solar activity
NASA Technical Reports Server (NTRS)
Ruzmaikin, Alexander; Feynman, Joan; Robinson, Paul
1994-01-01
We examine the question of whether or not the non-periodic variations in solar activity are caused by a white-noise, random process. The Hurst exponent, which characterizes the persistence of a time series, is evaluated for the series of C-14 data for the time interval from about 6000 BC to 1950 AD. We find a constant Hurst exponent, suggesting that solar activity in the frequency range from 100 to 3000 years includes an important continuum component in addition to the well-known periodic variations. The value we calculate, H approximately 0.8, is significantly larger than the value of 0.5 that would correspond to variations produced by a white-noise process. This value is in good agreement with the results for the monthly sunspot data reported elsewhere, indicating that the physics that produces the continuum is a correlated random process and that it is the same type of process over a wide range of time interval lengths.
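The Hurst exponent quoted here is conventionally estimated by rescaled-range (R/S) analysis: compute R/S over windows of increasing length and take the slope of log(R/S) against log(window size). A minimal sketch follows; the window sizes and the white-noise demonstration are illustrative, not the authors' procedure for the C-14 series.

```python
import math
import random

def rescaled_range(window):
    """R/S statistic of one window: range of cumulative mean-deviations
    divided by the window's standard deviation."""
    n = len(window)
    mean = math.fsum(window) / n
    cum, cums = 0.0, []
    for x in window:
        cum += x - mean
        cums.append(cum)
    r = max(cums) - min(cums)
    s = math.sqrt(math.fsum((x - mean) ** 2 for x in window) / n)
    return r / s if s > 0 else 0.0

def hurst_exponent(series, window_sizes=(16, 32, 64, 128)):
    """Slope of log(mean R/S) versus log(window size), via least squares."""
    xs, ys = [], []
    for w in window_sizes:
        rs = [rescaled_range(series[i:i + w])
              for i in range(0, len(series) - w + 1, w)]
        xs.append(math.log(w))
        ys.append(math.log(sum(rs) / len(rs)))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# White noise should yield H near 0.5; a persistent series yields H > 0.5.
random.seed(0)
h_white = hurst_exponent([random.gauss(0.0, 1.0) for _ in range(2048)])
```

A persistent record like the C-14 series would give a slope near 0.8, whereas white noise gives a slope near 0.5 (small-sample bias pushes short-window R/S estimates slightly above it).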
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-14
This package contains statistical routines for extracting features from multivariate time-series data, which can then be used in subsequent multivariate statistical analysis to identify patterns and anomalous behavior. It calculates local linear or quadratic regression model fits to moving windows for each series and then summarizes the model coefficients across user-defined time intervals for each series. These methods are domain-agnostic, but they have been successfully applied to a variety of domains, including commercial aviation and electric power grid data.
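The windowed-regression step can be sketched as follows, assuming (for illustration) local linear fits and a mean-of-slopes summary per interval; the package's actual coefficient summaries may differ.

```python
def linear_fit(y):
    """Least-squares slope and intercept of y against x = 0..len(y)-1."""
    n = len(y)
    xs = range(n)
    mx, my = (n - 1) / 2.0, sum(y) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (yi - my) for x, yi in zip(xs, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def window_slope_features(series, window, interval):
    """Slope of a local linear fit in each moving window, then the mean
    slope per user-defined interval: a sketch of 'summarize model
    coefficients across time intervals' for a single series."""
    slopes = [linear_fit(series[i:i + window])[0]
              for i in range(len(series) - window + 1)]
    return [sum(slopes[i:i + interval]) / len(slopes[i:i + interval])
            for i in range(0, len(slopes), interval)]
```

A perfectly linear series with slope 2 yields the feature value 2.0 in every interval, which makes the summary easy to sanity-check.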
Integrating Security in Real-Time Embedded Systems
2017-04-26
(b) detect any intrusions/attacks once they occur and (c) keep the overall system safe in the event of an attack. 4. Analysis and evaluation of... beyond), we expanded our work in both security integration and attack mechanisms, and worked on demonstrations and evaluations in hardware. Year I... scheduling for each busy interval with the calculated arrival time window. Step 1 focuses on the problem of finding the quantity of each task
Thermal control of low-pressure fractionation processes. [in basaltic magma solidification
NASA Technical Reports Server (NTRS)
Usselman, T. M.; Hodge, D. S.
1978-01-01
Thermal models detailing the solidification paths of shallow basaltic magma chambers (both open and closed systems) were calculated using finite-difference techniques. The total solidification times for closed chambers are comparable to previously published calculations; however, the temperature-time paths are not. These paths depend on the phase relations and the crystallinity of the system, because both affect the manner in which the latent heat of crystallization is distributed. In open systems, where a chamber is periodically replenished with additional parental liquid, the calculations indicate that a steady-state temperature interval is likely to be achieved near a major phase boundary. In these cases it is straightforward to analyze fractionation models of basaltic liquid evolution and the corresponding cumulate sequences. This steady thermal fractionating state can be invoked to explain large volumes of erupted basalts of similar composition over long time periods from the same volcanic center, as well as some rhythmically layered basic cumulate sequences.
NASA Astrophysics Data System (ADS)
Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro
Monitoring daily health condition is necessary for preventing stress syndrome. In this study, we propose a method for assessing mental and physiological condition, such as work stress or relaxation, using heart rate variability in real time and continuously. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated to assess mental and physiological condition. In this method, 20 heart beats were used to calculate these indexes, which were updated at every beat interval. Three conditions (sitting at rest, performing mental arithmetic and watching a relaxation movie) were assessed using the proposed algorithm. The assessment accuracies were 71.9% and 55.8% for performing mental arithmetic and watching the relaxation movie, respectively. Because the mental and physiological condition is assessed from only the 20 most recent heart beats, the method is suitable for real-time assessment.
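The NEP index counts turning points in the beat series within the most recent 20 beats. A hedged sketch follows; defining an extreme point as a sign change between consecutive beat-to-beat differences is an assumption, since the abstract does not give the authors' exact criterion.

```python
def extreme_point_ratio(rr_intervals):
    """Ratio of the number of extreme points (local maxima/minima) to the
    number of beats in the window. The sign-change definition used here
    is an assumption, not necessarily the authors' exact criterion."""
    n = len(rr_intervals)
    nep = sum(1 for i in range(1, n - 1)
              if (rr_intervals[i] - rr_intervals[i - 1]) *
                 (rr_intervals[i + 1] - rr_intervals[i]) < 0)
    return nep / n

def sliding_nep(rr_intervals, window=20):
    """Recompute the index at every beat over the most recent `window`
    beats, mirroring the 20-beat, one-beat-update scheme described."""
    return [extreme_point_ratio(rr_intervals[i - window:i])
            for i in range(window, len(rr_intervals) + 1)]
```

A strongly alternating beat series (high variability) drives the ratio toward 1, while a monotone series (low variability) drives it toward 0.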
Temporal properties of the myopic response to defocus in the guinea pig.
Leotta, Amelia J; Bowrey, Hannah E; Zeng, Guang; McFadden, Sally A
2013-05-01
Hyperopic defocus induces myopia in all species tested and is believed to underlie the progression of human myopia. We determined the temporal properties of the effects of hyperopic defocus in a mammalian eye. In Experiment 1, the rise and decay time of the responses elicited by hyperopic defocus were calculated in 111 guinea pigs by giving repeated episodes of monocular -4 D lens wear (from 5 to 6 days of age for 12 days) interspersed with various dark intervals. In Experiment 2, the decay time constant was calculated in 152 guinea pigs when repeated periods of monocular -5 D lens-wear (from 4 days of age for 7 days) were interrupted with free viewing periods of different lengths. At the end of the lens-wear period, ocular parameters were measured and time constants were calculated relative to the maximum response induced by continuous lens wear. When hyperopic defocus was experienced with dark intervals between episodes, the time required to induce 50% of the maximum achievable myopia and ocular elongation was at most 30 min. Saturated 1 h episodes took at least 22 h for refractive error and 31 h for ocular length, to decay to 50% of the maximum response. However, the decay was an order of magnitude faster when hyperopic defocus episodes were interrupted with a daily free viewing period, with only 36 min required to reduce relative myopia and ocular elongation by 50%. Hyperopic defocus causes myopia with brief exposures and is very long lasting in the absence of competing signals. However, this myopic response rapidly decays if interrupted by periods of 'normal viewing' at least 30 min in length, wherein ocular growth appears to be guided preferentially by the least amount of hyperopic defocus experienced. Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
INFLUENCES OF RESPONSE RATE AND DISTRIBUTION ON THE CALCULATION OF INTEROBSERVER RELIABILITY SCORES
Rolider, Natalie U.; Iwata, Brian A.; Bullock, Christopher E.
2012-01-01
We examined the effects of several variations in response rate on the calculation of total, interval, exact-agreement, and proportional reliability indices. Trained observers recorded computer-generated data that appeared on a computer screen. In Study 1, target responses occurred at low, moderate, and high rates during separate sessions so that reliability results based on the four calculations could be compared across a range of values. Total reliability was uniformly high, interval reliability was spuriously high for high-rate responding, proportional reliability was somewhat lower for high-rate responding, and exact-agreement reliability was the lowest of the measures, especially for high-rate responding. In Study 2, we examined the separate effects of response rate per se, bursting, and end-of-interval responding. Response rate and bursting had little effect on reliability scores; however, the distribution of some responses at the end of intervals decreased interval reliability somewhat, proportional reliability noticeably, and exact-agreement reliability markedly. PMID:23322930
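The four indices compared in the study are standard calculations on per-interval response counts from two observers. The sketch below uses common textbook definitions; the study's software may implement details differently (e.g., block sizes for proportional agreement).

```python
def reliability_indices(obs1, obs2):
    """Four common interobserver agreement indices from two observers'
    per-interval response counts (textbook definitions, assumed here)."""
    n = len(obs1)
    # total: smaller session total divided by larger session total
    total = min(sum(obs1), sum(obs2)) / max(sum(obs1), sum(obs2))
    # interval: agreement on occurrence vs nonoccurrence per interval
    interval = sum((a > 0) == (b > 0) for a, b in zip(obs1, obs2)) / n
    # exact agreement: identical counts per interval
    exact = sum(a == b for a, b in zip(obs1, obs2)) / n
    # proportional: smaller count over larger count, averaged
    proportional = sum(1.0 if a == b == 0 else min(a, b) / max(a, b)
                       for a, b in zip(obs1, obs2)) / n
    return {"total": total, "interval": interval,
            "exact": exact, "proportional": proportional}
```

Note how equal session totals can mask interval-level disagreement: total reliability is 1.0 even when exact agreement is far lower, the pattern the study reports for high-rate responding.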
NASA Astrophysics Data System (ADS)
Al-Hallaq, H. A.; Reft, C. S.; Roeske, J. C.
2006-03-01
The dosimetric effects of bone and air heterogeneities in head and neck IMRT treatments were quantified. An anthropomorphic RANDO phantom was CT-scanned with 16 thermoluminescent dosimeter (TLD) chips placed in and around the target volume. A standard IMRT plan generated with CORVUS was used to irradiate the phantom five times. On average, measured dose was 5.1% higher than calculated dose. Measurements were higher by 7.1% near the heterogeneities and by 2.6% in tissue. The dose difference between measurement and calculation was outside the 95% measurement confidence interval for six TLDs. Using CORVUS' heterogeneity correction algorithm, the average difference between measured and calculated doses decreased by 1.8% near the heterogeneities and by 0.7% in tissue. Furthermore, dose differences lying outside the 95% confidence interval were eliminated for five of the six TLDs. TLD doses recalculated by Pinnacle3's convolution/superposition algorithm were consistently higher than CORVUS doses, a trend that matched our measured results. These results indicate that the dosimetric effects of air cavities are larger than those of bone heterogeneities, thereby leading to a higher delivered dose compared to CORVUS calculations. More sophisticated algorithms such as convolution/superposition or Monte Carlo should be used for accurate tailoring of IMRT dose in head and neck tumours.
Dexter, Franklin; Marcon, Eric; Aker, John; Epstein, Richard H
2009-09-01
More personnel are needed to turn over operating rooms (ORs) promptly when there are more simultaneous turnovers. Anesthesia and/or OR information management system data can be analyzed statistically to quantify simultaneous turnovers to evaluate whether to add an additional turnover team. Data collected for each case at a six-OR facility were room, date of surgery, time of patient entry into the OR, and time of patient exit from the OR. The number of simultaneous turnovers was calculated for each 1 min of 122 4-wk periods. Our end point was the reduction in the daily minutes of simultaneous turnovers exceeding the number of teams caused by the addition of a team. Increasing from two turnover teams to three teams reduced the mean daily minutes of simultaneous turnovers exceeding the numbers of teams by 19 min. The ratio of 19 min to 8 h valued the time of extra personnel as 4.0% of the time of OR staff, surgeons, and anesthesia providers. Validity was supported by other methods of analysis that also suggested staffing for three simultaneous turnovers. Discrete-event simulation showed that the reduction in daily minutes of turnover times from the addition of a team would likely match or exceed the reduction in the daily minutes of simultaneous turnovers exceeding the numbers of teams. Confidence intervals for daily minutes of turnover times achieved by increasing from two to three teams were calculated using successive 4-wk periods. The distribution was sufficiently close to normal that accurate confidence intervals could be calculated using Student's t distribution (Lilliefors' test P = 0.58). Analyses generally should use 13 4-wk periods: increasing the number of periods from 6 to 13 significantly reduced the coefficient of variation of the averages, whereas increasing from 6 to 9 or from 9 to 13 did not. The number of simultaneous turnovers can be calculated for each 1 min over 1 yr.
The reduction in the daily minutes of simultaneous turnovers exceeding the number of teams achieved by the addition of a turnover team can be averaged over the year's 13 4-wk periods to provide insight as to the value (or not) of adding an additional team.
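The minute-by-minute tally described above can be sketched as follows, with each turnover represented as a hypothetical (start, end) minute pair derived from one patient's OR exit and the next patient's entry.

```python
def minutes_exceeding_teams(turnovers, teams):
    """Daily minutes during which the number of simultaneous turnovers
    exceeds the number of turnover teams. `turnovers` is a list of
    (start_minute, end_minute) pairs, one per room turnover; this is an
    illustrative sketch of the paper's minute-by-minute tally."""
    if not turnovers:
        return 0
    horizon = max(end for _, end in turnovers)
    excess = 0
    for minute in range(horizon):
        simultaneous = sum(start <= minute < end for start, end in turnovers)
        if simultaneous > teams:
            excess += 1
    return excess
```

Comparing the tally at two vs three teams over each day, and averaging across 4-wk periods, gives the end point the study uses to value an additional team.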
Studenic, Paul; Stamm, Tanja; Smolen, Josef S; Aletaha, Daniel
2016-01-01
Patient-reported outcomes (PROs) such as pain, patient global assessment (PGA) and fatigue are regularly assessed in RA patients. In the present study, we aimed to explore the reliability and smallest detectable differences (SDDs) of these PROs, and whether the time between assessments has an impact on reliability. Forty RA patients on stable treatment reported the three PROs daily over two subsequent months. We assessed the reliability of these measures by calculating intraclass correlation coefficients (ICCs) and the SDDs for 1-, 7-, 14- and 28-day test-retest intervals. Overall, SDD and ICC were 25 mm and 0.67 for pain, 25 mm and 0.71 for PGA and 30 mm and 0.66 for fatigue, respectively. SDD was higher with longer time period between assessments, ranging from 19 mm (1-day intervals) to 30 mm (28-day intervals) for pain, 19 to 33 mm for PGA, and 26 to 34 mm for fatigue; correspondingly, ICC was smaller with longer intervals, and ranged between the 1- and the 28-day interval from 0.80 to 0.50 for pain, 0.83 to 0.57 for PGA and 0.76 to 0.58 for fatigue. The baseline simplified disease activity index did not have any influence on reliability. Lower baseline PRO scores led to smaller SDDs. Reliability of pain, PGA and fatigue measurements is dependent on the tested time interval and the baseline levels. The relatively high SDDs, even for patients in the lowest tertiles of their PROs, indicate potential issues for assessment of the presence of remission. © The Author 2015. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
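The SDD values reported here follow from the usual test-retest relationship SDD = 1.96 * sqrt(2) * SEM with SEM = SD * sqrt(1 - ICC). The sketch below assumes this standard formula and an illustrative SD; it is not derived from the study's raw data.

```python
import math

def smallest_detectable_difference(sd, icc):
    """SDD from the between-subject standard deviation and the
    test-retest ICC, via SEM = SD * sqrt(1 - ICC) and
    SDD = 1.96 * sqrt(2) * SEM (standard formula, assumed here)."""
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem
```

With an illustrative SD of 16 mm and ICC = 0.67, the formula gives an SDD near the 25 mm reported for pain, showing how a modest ICC on a 100 mm scale translates into a large detectable difference.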
Morton, Michael J; Laffoon, Susan W
2008-06-01
This study extends the market mapping concept introduced by Counts et al. (Counts, M.E., Hsu, F.S., Tewes, F.J., 2006. Development of a commercial cigarette "market map" comparison methodology for evaluating new or non-conventional cigarettes. Regul. Toxicol. Pharmacol. 46, 225-242) to include both temporal cigarette and testing variation and also machine smoking with more intense puffing parameters, as defined by the Massachusetts Department of Public Health (MDPH). The study was conducted over a two year period and involved a total of 23 different commercial cigarette brands from the U.S. marketplace. Market mapping prediction intervals were developed for 40 mainstream cigarette smoke constituents and the potential utility of the market map as a comparison tool for new brands was demonstrated. The over-time character of the data allowed for the variance structure of the smoke constituents to be more completely characterized than is possible with one-time sample data. The variance was partitioned among brand-to-brand differences, temporal differences, and the remaining residual variation using a mixed random and fixed effects model. It was shown that a conventional weighted least squares model typically gave similar prediction intervals to those of the more complicated mixed model. For most constituents there was less difference in the prediction intervals calculated from over-time samples and those calculated from one-time samples than had been anticipated. One-time sample maps may be adequate for many purposes if the user is aware of their limitations. Cigarette tobacco fillers were analyzed for nitrate, nicotine, tobacco-specific nitrosamines, ammonia, chlorogenic acid, and reducing sugars. The filler information was used to improve predicting relationships for several of the smoke constituents, and it was concluded that the effects of filler chemistry on smoke chemistry were partial explanations of the observed brand-to-brand variation.
EIVAN - AN INTERACTIVE ORBITAL TRAJECTORY PLANNING TOOL
NASA Technical Reports Server (NTRS)
Brody, A. R.
1994-01-01
The Interactive Orbital Trajectory Planning Tool, EIVAN, is a forward-looking interactive orbit trajectory plotting tool for use with proximity operations (operations occurring within a one kilometer sphere of the space station) and other maneuvers. The result of vehicle burns on-orbit is very difficult to anticipate because of non-linearities in the equations of motion governing orbiting bodies. EIVAN was developed to plot resulting trajectories, to provide a better comprehension of orbital mechanics effects, and to help the user develop heuristics for on-orbit mission planning. EIVAN comprises a worksheet and a chart from Microsoft Excel on a Macintosh computer. The orbital path for a user-specified time interval is plotted given operator burn inputs, and fuel use is also calculated. After the thrust parameters (magnitude, direction, and time) are input, EIVAN plots the resulting trajectory. Up to five burns may be inserted at any time in the mission. Twenty data points are plotted for each burn, and the time interval can be varied to accommodate any desired time frame or degree of resolution. Since the number of data points for each burn is constant, the mission duration can be increased or decreased by changing the time interval. EIVAN runs under Microsoft Excel on a Macintosh running the Macintosh OS. A working knowledge of Excel is helpful, but not imperative, for interacting with EIVAN. The program was developed in 1989.
ETARA PC version 3.3 user's guide: Reliability, availability, maintainability simulation model
NASA Technical Reports Server (NTRS)
Hoffman, David J.; Viterna, Larry A.
1991-01-01
A user's manual describing an interactive, menu-driven, personal-computer-based Monte Carlo reliability, availability, and maintainability simulation program called Event Time Availability Reliability (ETARA) is discussed. Given a reliability block diagram representation of a system, ETARA simulates the behavior of the system over a specified period of time, using Monte Carlo methods to generate block failure and repair intervals as a function of exponential and/or Weibull distributions. Availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration, and number of state occurrences can be calculated. Initial spares allotment and spares replenishment on a resupply cycle can be simulated. The number of block failures is tabulated both individually and by block type, as well as total downtime, repair time, and time waiting for spares. Also, maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can be calculated over a cumulative period of time or at specific points in time.
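ETARA's core loop, alternating randomly drawn failure and repair intervals over the mission time, can be sketched for a single exponential block. The real program handles full block diagrams, Weibull distributions, and spares logistics, all omitted in this toy version.

```python
import random

def simulate_availability(mtbf, mttr, mission_time, runs=2000, seed=1):
    """Monte Carlo availability of one repairable block with exponential
    failure and repair intervals: a toy sketch of ETARA's simulation loop
    (single block only; block diagrams and Weibull laws omitted)."""
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(runs):
        t, up = 0.0, 0.0
        while t < mission_time:
            ttf = rng.expovariate(1.0 / mtbf)   # draw time-to-failure
            up += min(ttf, mission_time - t)    # clamp uptime at mission end
            t += ttf
            if t >= mission_time:
                break
            t += rng.expovariate(1.0 / mttr)    # draw repair interval
        up_total += up
    return up_total / (runs * mission_time)
```

For MTBF = 100 and MTTR = 10, the simulated value should sit near the steady-state availability MTBF/(MTBF + MTTR) = 0.909.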
WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.
Grech, Victor
2018-03-01
The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
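The same standard-error and confidence-interval arithmetic the paper walks through in Excel looks like this in Python; a z-based interval with z = 1.96 is assumed here.

```python
import math

def se_and_ci(data, z=1.96):
    """Mean, standard error of the mean, and the z-based 95% confidence
    interval (normal approximation assumed), mirroring the Excel steps."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample SD
    se = sd / math.sqrt(n)
    return mean, se, (mean - z * se, mean + z * se)
```

In Excel terms, this mirrors AVERAGE, STDEV.S, and STDEV.S(range)/SQRT(COUNT(range)) for the standard error.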
d'plus: A program to calculate accuracy and bias measures from detection and discrimination data.
Macmillan, N A; Creelman, C D
1997-01-01
The program d'plus calculates accuracy (sensitivity) and response-bias parameters using Signal Detection Theory, Choice Theory, and 'nonparametric' models. It is appropriate for data from one-interval, two- and three-interval forced-choice, same-different, ABX, and oddity experimental paradigms.
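For the one-interval (yes/no) paradigm, the Signal Detection Theory computation reduces to z-transforming the hit and false-alarm rates. A sketch follows; the log-linear correction for extreme rates is a common convention, not necessarily the one d'plus itself applies.

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c for a one-interval (yes/no) design,
    from raw counts via the standard z-transform. The log-linear
    correction is an assumed convention to avoid infinite z at rates
    of 0 or 1."""
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d = z(h) - z(f)       # sensitivity
    c = -0.5 * (z(h) + z(f))  # response criterion (bias)
    return d, c
```

Symmetric performance (equal hit and correct-rejection counts) gives c = 0, i.e., no response bias, which is a convenient sanity check.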
An IDS Alerts Aggregation Algorithm Based on Rough Set Theory
NASA Astrophysics Data System (ADS)
Zhang, Ru; Guo, Tao; Liu, Jianyi
2018-03-01
Within a system in which several IDSs have been deployed, a great number of alerts can be triggered by a single security event, making real alerts harder to find. To deal with redundant alerts, we propose a scheme based on rough set theory. Using basic concepts of rough set theory, the importance of each attribute in an alert is calculated first. From the attribute importance we compute the similarity of two alerts, which is compared with a pre-defined threshold to determine whether the two alerts can be aggregated. The time interval must also be taken into consideration: the allowed time interval is computed individually for each alert type, since different types of alerts may have different time gaps between alerts. Finally, we apply the proposed scheme to the DARPA98 dataset; the experimental results show that our scheme efficiently reduces alert redundancy so that security administrators can avoid wasting time on spurious alerts.
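The aggregation decision can be sketched as follows, with hand-picked attribute weights standing in for the rough-set importance calculation; the attribute names, weights, and threshold are all illustrative.

```python
def alert_similarity(a, b, weights):
    """Weighted similarity of two alerts over shared attributes; the
    weights stand in for rough-set attribute importance (illustrative
    values, not the paper's computed ones)."""
    total = sum(weights.values())
    same = sum(w for attr, w in weights.items() if a.get(attr) == b.get(attr))
    return same / total

def can_aggregate(a, b, weights, sim_threshold, max_gap):
    """Aggregate when similarity clears the threshold AND the alerts fall
    within the allowed time interval for their type."""
    close_in_time = abs(a["time"] - b["time"]) <= max_gap
    return close_in_time and alert_similarity(a, b, weights) >= sim_threshold
```

Per the scheme, `max_gap` would be computed per alert type rather than fixed globally, so a slow scan and a burst of exploit alerts get different allowed intervals.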
Evolution of the orbit of asteroid 4179 Toutatis over 11,550 years.
NASA Astrophysics Data System (ADS)
Zausaev, A. F.; Pushkarev, A. N.
1994-05-01
The Everhart method is used to study evolution of the orbit of the asteroid 4179 Toutatis, a member of the Apollo group, over the time period 9300 B.C. to 2250 A.D. Minimum asteroid-Earth distances during the evolution process are calculated. It is shown that the asteroid presents no danger to the Earth over the interval studied.
NASA Astrophysics Data System (ADS)
Kucherov, A. N.; Makashev, N. K.; Ustinov, E. V.
1994-02-01
A procedure is proposed for numerical modeling of instantaneous and averaged (over various time intervals) distant-point-source images perturbed by a turbulent atmosphere that moves relative to the radiation receiver. Examples of image calculations under conditions of the significant effect of atmospheric turbulence in an approximation of geometrical optics are presented and analyzed.
Gravitational microlensing - The effect of random motion of individual stars in the lensing galaxy
NASA Technical Reports Server (NTRS)
Kundic, Tomislav; Wambsganss, Joachim
1993-01-01
We investigate the influence of the random motion of individual stars in the lensing galaxy on the light curve of a gravitationally lensed background quasar, and compare it with the effects of the transverse motion of the galaxy. We find that three-dimensional random motion of stars with a velocity dispersion sigma in each dimension is more effective in producing 'peaks' in a microlensed light curve, by a factor a of about 1.3, than motion of the galaxy with a transverse velocity v(t) = sigma. This effectiveness parameter a seems to depend only weakly on the surface mass density. With an assumed transverse velocity of v(t) = 600 km/s for the galaxy lensing QSO 2237+0305 and a measured velocity dispersion of sigma = 215 km/s, the expected rate of maxima in the light curves calculated for bulk motion alone has to be increased by about 10 percent due to the random motion of stars. As a consequence, the average time interval Delta-t between two high-magnification events is smaller than the interval Delta-t(bulk) calculated for bulk motion alone: Delta-t is about 0.9 Delta-t(bulk).
Does controlling for biological maturity improve physical activity tracking?
Erlandson, Marta C; Sherar, Lauren B; Mosewich, Amber D; Kowalski, Kent C; Bailey, Donald A; Baxter-Jones, Adam D G
2011-05-01
Tracking of physical activity through childhood and adolescence tends to be low. Variation in the timing of biological maturation within youth of the same chronological age (CA) might affect participation in physical activity and may partially explain the low tracking. Our aim was to examine the stability of physical activity over time from childhood to late adolescence when aligned on CA and biological age (BA). A total of 91 males and 96 females aged 8-15 yr from the Saskatchewan Pediatric Bone Mineral Accrual Study (PBMAS) were assessed annually for 8 yr. BA was calculated as years from age at peak height velocity. Physical activity was assessed using the Physical Activity Questionnaire for Children/Adolescents. Tracking was analyzed using intraclass correlations for both CA and BA (2-yr groupings). To be included in the analysis, an individual required a measure at both time points within an interval; however, not all individuals were present at all tracking intervals. Tracking of physical activity across 2-yr CA intervals was, in general, moderate in males (r=0.42-0.59) and females (r=0.43-0.44); however, the 9- to 11-yr CA interval was low and nonsignificant (r=0.23-0.30). Likewise, tracking of physical activity across 2-yr BA intervals was moderate to high in males (r=0.44-0.60) and females (r=0.39-0.62). Accounting for differences in the timing of biological maturity had little effect on the tracking of physical activity. However, point estimates for tracking were higher in early adolescence in males, and to a greater extent in females, when aligned by BA versus CA. This suggests that maturity may be more important to physical activity participation in females than in males. © 2011 by the American College of Sports Medicine
Detection of abnormal item based on time intervals for recommender systems.
Gao, Min; Yuan, Quan; Ling, Bin; Xiong, Qingyu
2014-01-01
With the rapid development of e-business, personalized recommendation has become a core competence for enterprises seeking to gain profits and improve customer satisfaction. Although collaborative filtering is the most successful approach for building a recommender system, it suffers from "shilling" attacks. In recent years, research on shilling attacks has greatly improved; however, existing approaches suffer from serious problems of attack-model dependency and high computational cost. To solve these problems, an approach for the detection of abnormal items is proposed in this paper. First, two features common to all attack models are analyzed. A revised bottom-up discretized approach based on time intervals and these features is then proposed for the detection. The distributions of ratings in different time intervals are compared to detect anomalies, based on the calculation of the chi-square statistic (χ2). We evaluated our approach on four types of items, defined according to their life cycles. The experimental results show that the proposed approach achieves a high detection rate with low computational cost when the number of attack profiles is more than 15, improving the efficiency of shilling-attack detection by narrowing down the suspicious users.
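The interval-wise chi-square comparison can be sketched as follows, where each row holds an item's rating counts in one time interval and the expected counts come from the item's pooled distribution. The fixed threshold is illustrative; in practice it would be a critical value of the χ2 distribution.

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic between observed rating counts in one
    time interval and counts expected from the pooled distribution (a
    sketch of the anomaly test, not the paper's exact procedure)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

def interval_anomalies(interval_counts, threshold):
    """Flag intervals whose rating distribution deviates from the item's
    pooled distribution by more than `threshold`."""
    k = len(interval_counts[0])
    pooled = [sum(c[i] for c in interval_counts) for i in range(k)]
    grand = sum(pooled)
    flagged = []
    for idx, counts in enumerate(interval_counts):
        n = sum(counts)
        expected = [p * n / grand for p in pooled]
        if chi_square_stat(counts, expected) > threshold:
            flagged.append(idx)
    return flagged
```

An interval flooded with identical ratings, the signature shared by push and nuke attack models, stands out sharply against intervals with the item's normal mix.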
Orbital Evolution of Jupiter-Family Comets
NASA Technical Reports Server (NTRS)
Ipatov, S. I.; Mather, J. S.; Oegerle, William R. (Technical Monitor)
2002-01-01
We investigated the evolution, over periods of at least 5-10 Myr, of 2500 Jupiter-crossing objects (JCOs) under the gravitational influence of all planets except Mercury and Pluto (without dissipative factors). In the first series we considered N = 2000 orbits near the orbits of 30 real Jupiter-family comets with periods less than 10 yr, and in the second series we took 500 orbits close to the orbit of Comet 10P/Tempel 2. We calculated the probabilities of collisions of the objects with the terrestrial planets, using orbital elements obtained with a step of 500 yr, and then summed the results over all time intervals and all bodies, obtaining the total probability P_sigma of collisions with a planet and the total time interval T_sigma during which the perihelion distance of a body was less than the semimajor axis of the planet. The values of P = 10^6 P_sigma / N and T = T_sigma / 1000 yr are presented in a table, together with the ratio r of the total time interval during which orbits were of Apollo type (at e < 0.999) to that of Amor type.
Geographic Access to US Neurocritical Care Units Registered with the Neurocritical Care Society
Shutter, Lori A.; Branas, Charles C.; Adeoye, Opeolu; Albright, Karen C.; Carr, Brendan G.
2018-01-01
Background Neurocritical care provides multidisciplinary, specialized care to critically ill neurological patients, yet the proportion of the population able to rapidly access specialized neurocritical care units (NCUs) in the United States is currently unknown. We sought to quantify geographic access to NCUs by state, division, region, and for the US as a whole. In addition, we examined how the mode of transportation (ground or air ambulance) and prehospital transport times affected population access to NCUs. Methods Data were obtained from the Neurocritical Care Society (NCS), the US Census Bureau and the Atlas and Database of Air Medical Services. Empirically derived prehospital time intervals and validated models estimating prehospital ground and air travel times were used to calculate total prehospital times. A discrete total prehospital time interval was calculated for each small unit of geographic analysis (block group), and block group populations were summed to determine the proportion of Americans able to reach an NCU within discrete time intervals (45, 60, 75, and 90 min). Results are presented for different geographies and for different modes of prehospital transport (ground or air ambulance). Results There are 73 NCUs in the US. Using ground transportation alone, 12.8, 20.5, 27.4, and 32.6% of the US population are within 45, 60, 75, and 90 min of an NCU, respectively. Use of air ambulances increases access to 36.8, 50.4, 60.0, and 67.3% within 45, 60, 75, and 90 min, respectively. The Northeast has the highest access rates in the US using ground ambulances, and for 45, 60, and 75 min transport times with the addition of air ambulances; at 90 min, the West has the highest access rate. The Southern region has the lowest ground and air access rates to NCUs for all transport times. Conclusions Using NCUs registered with the NCS, current geographic access to NCUs is limited in the US, and geographic disparities in access to care exist.
While additional NCUs may exist beyond those identified by the NCS database, we identify geographies with limited access to NCUs and offer a population-based planning perspective on the further development of the US neurocritical care system. PMID:22045246
Osteosarcoma following tibial plateau leveling osteotomy in dogs: 29 cases (1997-2011).
Selmic, Laura E; Ryan, Stewart D; Boston, Sarah E; Liptak, Julius M; Culp, William T N; Sartor, Angela J; Prpich, Cassandra Y; Withrow, Stephen J
2014-05-01
To determine the signalment, tibial plateau leveling osteotomy (TPLO) plate type, clinical staging information, treatment, and oncological outcome in dogs that developed osteosarcoma at the proximal aspect of the tibia following TPLO and to calculate the interval between TPLO and osteosarcoma diagnosis. Multi-institutional retrospective case series. 29 dogs. Medical records from 8 participating institutions were searched for dogs that developed osteosarcoma (confirmed through cytologic or histologic evaluation) at previous TPLO sites. Signalment, TPLO details, staging tests, treatment data, and outcome information were recorded. Descriptive statistics were calculated, and disease-free intervals and survival times were evaluated by means of Kaplan-Meier analysis. 29 dogs met the inclusion criteria. The mean age was 9.2 years and mean weight was 45.1 kg (99.2 lb) at the time of osteosarcoma diagnosis. Most dogs had swelling over the proximal aspect of the tibia (17/21) and lameness of the affected limb (28/29). The mean interval between TPLO and osteosarcoma diagnosis was 5.3 years. One type of cast stainless steel TPLO plate was used in most (18) dogs; the remaining dogs had received plates of wrought stainless steel (n = 4) or unrecorded type (7). Twenty-three of 29 dogs underwent treatment for osteosarcoma. Median survival time for 10 dogs that underwent amputation of the affected limb and received ≥ 1 chemotherapeutic treatment was 313 days. Results supported that osteosarcoma should be a differential diagnosis for dogs with a history of TPLO that later develop lameness and swelling at the previous surgical site. Oncological outcome following amputation and chemotherapy appeared to be similar to outcomes previously reported for dogs with appendicular osteosarcoma.
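The disease-free intervals and median survival times above come from Kaplan-Meier analysis. As an illustration only, a minimal product-limit sketch in Python (not the authors' statistical software; tied censoring times and confidence intervals are omitted):

```python
def kaplan_meier_median(times, events):
    """
    Minimal Kaplan-Meier product-limit sketch: walk through follow-up
    times in order, multiply survival by (1 - 1/n_at_risk) at each
    observed death, and report the first time survival reaches 50%.
    times: follow-up (e.g., days); events: 1 = death observed, 0 = censored.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    for t, died in data:
        if died:
            survival *= 1.0 - 1.0 / n_at_risk
            if survival <= 0.5:
                return t        # median survival time
        n_at_risk -= 1
    return None                 # median not reached (heavy censoring)
```

With follow-up times [100, 200, 300, 400] days and all deaths observed, survival first reaches 50% at the second death, so the sketch reports 200 days.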
Koerbin, G; Cavanaugh, J A; Potter, J M; Abhayaratna, W P; West, N P; Glasgow, N; Hawkins, C; Armbruster, D; Oakman, C; Hickman, P E
2015-02-01
Development of reference intervals is difficult, time consuming, expensive and beyond the scope of most laboratories. The Aussie Normals study is a direct a priori study to determine reference intervals in healthy Australian adults. All volunteers completed a health and lifestyle questionnaire, and exclusion was based on conditions such as pregnancy, diabetes, renal or cardiovascular disease. Up to 91 biochemical analyses were undertaken on a variety of analytical platforms using serum samples collected from 1856 volunteers. We report on our findings for 40 of these analytes and two calculated parameters performed on the Abbott ARCHITECT ci8200/ci16200 analysers. Not all samples were analysed for all assays due to volume requirements or assay/instrument availability. Results with elevated interference indices and those deemed unsuitable after clinical evaluation were removed from the database. Reference intervals were partitioned based on the method of Harris and Boyd into three scenarios: combined gender; males and females; and age and gender. We have performed a detailed reference interval study on a healthy Australian population considering the effects of sex, age and body mass. These reference intervals may be adapted to other manufacturers' analytical methods using method transference.
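For context, a central 95% reference interval is commonly estimated nonparametrically from the 2.5th and 97.5th percentiles of the screened reference population. A sketch under that convention (the interpolation rule below is one common choice; the study's actual partitioning statistics follow Harris and Boyd and are not reproduced here):

```python
import math

def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
    """Nonparametric central 95% reference interval from sorted data."""
    xs = sorted(values)
    n = len(xs)

    def percentile(p):
        # rank-based estimate with linear interpolation between order statistics
        r = p / 100.0 * (n - 1)
        lo = int(math.floor(r))
        hi = min(lo + 1, n - 1)
        return xs[lo] + (r - lo) * (xs[hi] - xs[lo])

    return percentile(lower_pct), percentile(upper_pct)
```

For 100 evenly spaced results 1..100, this convention yields an interval of roughly (3.5, 97.5).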
Six Sessions of Sprint Interval Training Improves Running Performance in Trained Athletes.
Koral, Jerome; Oranchuk, Dustin J; Herrera, Roberto; Millet, Guillaume Y
2018-03-01
Koral, J, Oranchuk, DJ, Herrera, R, and Millet, GY. Six sessions of sprint interval training improves running performance in trained athletes. J Strength Cond Res 32(3): 617-623, 2018-Sprint interval training (SIT) is gaining popularity with endurance athletes. Various studies have shown that SIT allows for similar or greater endurance, strength, and power performance improvements than traditional endurance training but demands less time and volume. One of the main limitations in SIT research is that most studies were performed in a laboratory using expensive treadmills or ergometers. The aim of this study was to assess the performance effects of a novel short-term and highly accessible training protocol based on maximal shuttle runs in the field (SIT-F). Sixteen (12 male, 4 female) trained trail runners completed a 2-week procedure consisting of 4-7 bouts of 30 seconds at maximal intensity interspersed by 4 minutes of recovery, 3 times a week. Maximal aerobic speed (MAS), time to exhaustion at 90% of MAS before test (Tmax at 90% MAS), and 3,000-m time trial (TT3000m) were evaluated before and after training. Data were analyzed using a paired samples t-test, and Cohen's (d) effect sizes were calculated. Maximal aerobic speed improved by 2.3% (p = 0.01, d = 0.22), whereas peak power (PP) and mean power (MP) increased by 2.4% (p = 0.009, d = 0.33) and 2.8% (p = 0.002, d = 0.41), respectively. TT3000m was 6% shorter (p < 0.001, d = 0.35), whereas Tmax at 90% MAS was 42% longer (p < 0.001, d = 0.74). Sprint interval training in the field significantly improved the 3,000-m run, time to exhaustion, PP, and MP in trained trail runners. Sprint interval training in the field is a time-efficient and cost-free means of improving both endurance and power performance in trained athletes.
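The paired-samples effect sizes above can be reproduced in principle as follows. This sketch standardizes by the SD of the paired differences, which is one common convention; the authors may instead have used a pooled pre/post SD, so the exact values need not match:

```python
import math

def paired_cohens_d(pre, post):
    """Cohen's d for paired samples: mean difference / SD of differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # sample SD of the differences (n - 1 denominator)
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d / sd_d
```

For example, pre-test speeds [10, 12, 11, 13] and post-test speeds [12, 13, 11, 15] give d ≈ 1.31.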
Ozawa, Koya; Funabashi, Nobusada; Nishi, Takeshi; Takahara, Masayuki; Fujimoto, Yoshihide; Kamata, Tomoko; Kobayashi, Yoshio
2016-08-15
This study evaluated the post-systolic strain index (PSI), and the time interval between aortic valve closure (AVC) and regional peak longitudinal strain (PLS), measured by transthoracic echocardiography (TTE), for detection of left ventricular (LV) myocardial ischemic segments confirmed by invasive fractional flow reserve (FFR). 39 stable patients (32 males; 65.8±11.9years) with 46 coronary arteries at ≥50% stenosis on invasive coronary angiography underwent 2D speckle tracking TTE (Vivid E9, GE Healthcare) and invasive FFR measurements. PSI, AVC and regional PLS in each LV segment were calculated. FFR ≤0.80 was detected in 27 LV segments. There were no significant differences between segments supplied by FFR ≤0.80 and FFR >0.80 vessels in either PSI or the time interval between AVC and regional PLS. To identify LV segments±FFR ≤0.80, the receiver operator characteristic (ROC) curves for PSI, and the time interval between AVC and regional PLS had areas under the curve (AUC) values of 0.58 and 0.57, respectively, with best cut-off points of 12% (sensitivity 70.4%, specificity 57.9%) and 88ms (sensitivity 70.4%, specificity 52.6%), respectively, but the AUCs were not statistically significant. In stable coronary artery disease patients with ≥50% coronary artery stenosis, measurement of PSI, and the time interval between AVC and regional PLS, on resting TTE, enabled the identification of LV segments with FFR ≤0.80 using each appropriate threshold for PSI, and the time interval between AVC and regional PLS, with reasonable diagnostic accuracy. However, the AUC values were not statistically significant. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chu, F.; Haines, P.; Hudson, M.; Kress, B.; Freidel, R.; Kanekal, S.
2007-12-01
Work is underway by several groups to quantify diffusive radial transport of radiation belt electrons, including a model for pitch angle scattering losses to the atmosphere. The radial diffusion model conserves the first and second adiabatic invariants and breaks the third invariant. We have developed a radial diffusion code which uses the Crank-Nicolson method with a variable outer boundary condition. For the radial diffusion coefficient, DLL, we have several choices, including the Brautigam and Albert (JGR, 2000) diffusion coefficient parameterized by Kp, which provides an ad hoc measure of the power level at ULF wave frequencies in the range of electron drift (mHz), breaking the third invariant. Other diffusion coefficient models are Kp-independent, fixed in time but explicitly dependent on the first invariant, or energy at a fixed L, such as calculated by Elkington et al. (JGR, 2003) and Perry et al. (JGR, 2006) based on ULF wave model fields. We analyzed three periods of electron flux and phase space density (PSD) enhancements inside of geosynchronous orbit: March 31 - May 31, 1991, and the July 2004 and Nov 2004 storm intervals. The radial diffusion calculation is initialized with a computed phase space density profile for the 1991 interval using differential flux values from the CRRES High Energy Electron Fluxmeter instrument, covering 0.65 - 7.5 MeV. To calculate the initial phase space density, we convert Roederer L* to McIlwain's L-parameter using the ONERA-DESP program. A time-averaged model developed by Vampola [1] from the entire 14-month CRRES data set is applied to the July 2004 and Nov 2004 storms. The online CRRES data for specific orbits and the Vampola-model flux are both expressed in McIlwain L-shell, while conversion to L* conserves phase space density in a distorted non-dipolar magnetic field model. A Tsyganenko (T04) magnetic field model is used for conversion between L* and L. The outer boundary PSD is updated using LANL GEO satellite fluxes. 
After calculating the phase space density time evolution for the two storms and the post-injection interval (March 31 - May 31, 1991), we compare results with SAMPEX measurements. A better match with SAMPEX measurements is obtained with a variable outer boundary, also with a Kp-dependent diffusion coefficient, and finally with an energy- and L-dependent loss term (Summers et al., JGR, 2004), than with a time-independent diffusion coefficient and a simple Kp-parametrized loss rate and location of the plasmapause. Addition of a varying outer boundary which incorporates measured fluxes at geosynchronous orbit using L* has the biggest effect of the three parametrized variations studied. [1] Vampola, A.L., 1996, The ESA Outer Zone Electron Model Update, Environment Modelling for Space-based Applications, ESA SP-392, ESTEC, Noordwijk, NL, pp. 151-158, W. Burke and T.-D. Guyenne, eds.
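The Crank-Nicolson scheme mentioned above can be illustrated with a toy 1-D diffusion solver with a prescribed (Dirichlet) outer-boundary value. This is a hypothetical simplification for illustration only, not the authors' radial diffusion code, which additionally uses an L- and Kp-dependent DLL and loss terms:

```python
import numpy as np

def crank_nicolson_diffusion(f0, D, dx, dt, steps, f_left, f_right):
    """
    Crank-Nicolson update for 1-D diffusion df/dt = D d2f/dx2 with fixed
    (Dirichlet) boundary values, a toy stand-in for radial diffusion with
    a prescribed outer-boundary phase space density.
    """
    f = np.asarray(f0, dtype=float).copy()
    n = f.size
    r = D * dt / (2.0 * dx ** 2)
    # Interior tridiagonal systems: (I - r T) f_new = (I + r T) f_old
    A = np.eye(n - 2) * (1.0 + 2.0 * r)
    B = np.eye(n - 2) * (1.0 - 2.0 * r)
    for i in range(n - 3):
        A[i, i + 1] = A[i + 1, i] = -r
        B[i, i + 1] = B[i + 1, i] = r
    for _ in range(steps):
        rhs = B @ f[1:-1]
        rhs[0] += 2.0 * r * f_left     # fixed-boundary contributions
        rhs[-1] += 2.0 * r * f_right   # (old + new boundary values)
        f[1:-1] = np.linalg.solve(A, rhs)
        f[0], f[-1] = f_left, f_right
    return f
```

Run long enough, the profile relaxes to the steady-state linear ramp between the two boundary values, a quick sanity check on the scheme.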
SPA- STATISTICAL PACKAGE FOR TIME AND FREQUENCY DOMAIN ANALYSIS
NASA Technical Reports Server (NTRS)
Brownlow, J. D.
1994-01-01
The need for statistical analysis often arises when data is in the form of a time series. This type of data is usually a collection of numerical observations made at specified time intervals. Two kinds of analysis may be performed on the data. First, the time series may be treated as a set of independent observations using a time domain analysis to derive the usual statistical properties including the mean, variance, and distribution form. Secondly, the order and time intervals of the observations may be used in a frequency domain analysis to examine the time series for periodicities. In almost all practical applications, the collected data is actually a mixture of the desired signal and a noise signal which is collected over a finite time period with a finite precision. Therefore, any statistical calculations and analyses are actually estimates. The Spectrum Analysis (SPA) program was developed to perform a wide range of statistical estimation functions. SPA can provide the data analyst with a rigorous tool for performing time and frequency domain studies. In a time domain statistical analysis the SPA program will compute the mean, variance, standard deviation, mean square, and root mean square. It also lists the data maximum, data minimum, and the number of observations included in the sample. In addition, a histogram of the time domain data is generated, a normal curve is fit to the histogram, and a goodness-of-fit test is performed. These time domain calculations may be performed on both raw and filtered data. For a frequency domain statistical analysis the SPA program computes the power spectrum, cross spectrum, coherence, phase angle, amplitude ratio, and transfer function. The estimates of the frequency domain parameters may be smoothed with the use of Hann-Tukey, Hamming, Bartlett, or moving average windows. Various digital filters are available to isolate data frequency components. 
Frequency components with periods longer than the data collection interval are removed by least-squares detrending. As many as ten channels of data may be analyzed at one time. Both tabular and plotted output may be generated by the SPA program. This program is written in FORTRAN IV and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 142K (octal) of 60 bit words. This core requirement can be reduced by segmentation of the program. The SPA program was developed in 1978.
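SPA itself is FORTRAN IV, but the kinds of time- and frequency-domain estimates it produces can be sketched in modern terms. The normalization and the Hamming window choice below are illustrative assumptions, not SPA's exact conventions:

```python
import numpy as np

def time_domain_stats(x):
    """Basic time-domain estimates: mean, variance, RMS, extremes, count."""
    x = np.asarray(x, dtype=float)
    return {
        "mean": x.mean(),
        "variance": x.var(ddof=1),
        "rms": np.sqrt(np.mean(x ** 2)),
        "min": x.min(),
        "max": x.max(),
        "n": x.size,
    }

def windowed_psd(x, fs):
    """Periodogram of a mean-removed, Hamming-windowed record."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                    # remove the DC component
    w = np.hamming(x.size)
    X = np.fft.rfft(x * w)
    # Normalize by the window power so white noise gives a flat PSD.
    psd = (np.abs(X) ** 2) / (fs * np.sum(w ** 2))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs, psd
```

Feeding in a pure 10 Hz sine sampled at 100 Hz, the periodogram peaks at the 10 Hz bin, the basic periodicity check the abstract describes.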
Velocity Models of the Sedimentary Cover and Acoustic Basement, Central Arctic
NASA Astrophysics Data System (ADS)
Bezumov, D. V.; Butsenko, V.
2017-12-01
As part of the Russian Federation's application on the extension of the outer limit of the continental shelf in the Arctic Ocean to the Commission on the Limits of the Continental Shelf, regional 2D seismic reflection and sonobuoy data were acquired in 2011, 2012 and 2014. These data allow the structure and thickness of the sedimentary cover and acoustic basement of the central Arctic Ocean to be refined. "VNIIOkeangeologia" created a methodology for matching a 2D velocity model of the sedimentary cover based on the vertical velocity spectrum calculated from wide-angle reflection sonobuoy data and on the results of ray tracing of reflected and refracted waves. Matched 2D velocity models of the sedimentary cover in the Russian part of the Arctic Ocean were computed along several seismic profiles (see Figure). Figure comments: a) vertical velocity spectrum calculated from wide-angle reflection sonobuoy data; the RMS velocity curve was picked in accordance with the interpreted MCS section, and interval velocities within sedimentary units are shown (interval velocities from the Seiswide model in brackets); b) interpreted sonobuoy record overlain with time-distance curves calculated by ray-tracing modelling; c) final depth velocity model specified by means of the Seiswide software.
Humphreys, Michael K; Panacek, Edward; Green, William; Albers, Elizabeth
2013-03-01
Protocols for determining postmortem submersion interval (PMSI) have long been problematic for forensic investigators due to the wide variety of factors affecting the rate of decomposition of submerged carrion. Likewise, it has been equally problematic for researchers to develop standardized experimental protocols to monitor underwater decomposition without artificially affecting the decomposition rate. This study compares two experimental protocols: (i) underwater in situ evaluation with photographic documentation utilizing the Heaton et al. total aquatic decomposition (TAD) score and (ii) weighing the carrion before and after submersion. Complete forensic necropsies were performed as a control. Perinatal piglets were used as human analogs. The results of this study indicate that in order to objectively measure decomposition over time, the human analog should be examined at depth using the TAD scoring system rather than utilizing a carrion weight evaluation. The acquired TAD score can be used to calculate an approximate PMSI. © 2012 American Academy of Forensic Sciences.
Delayed neutron spectral data for Hansen-Roach energy group structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, J.M.; Spriggs, G.D.
A detailed knowledge of delayed neutron spectra is important in reactor physics. It not only allows for an accurate estimate of the effective delayed neutron fraction {beta}{sub eff} but also is essential to calculating important reactor kinetic parameters, such as effective group abundances and the ratio of {beta}{sub eff} to the prompt neutron generation time. Numerous measurements of delayed neutron spectra for various delayed neutron precursors have been performed and reported in the literature. However, for application in reactor physics calculations, these spectra are usually lumped into one of the traditional six groups of delayed neutrons in accordance with their half-lives. Subsequently, these six-group spectra are binned into energy intervals corresponding to the energy intervals of a chosen nuclear cross-section set. In this work, the authors present a set of delayed neutron spectra that were formulated specifically to match Keepin's six-group parameters and the 16-energy-group Hansen-Roach cross sections.
Lott, B.; Escande, L.; Larsson, S.; ...
2012-07-19
Here, we present a method enabling the creation of constant-uncertainty/constant-significance light curves with the data of the Fermi-Large Area Telescope (LAT). The adaptive-binning method enables more information to be encapsulated within the light curve than with the fixed-binning method. Although primarily developed for blazar studies, it can be applied to any sources. Furthermore, this method allows the starting and ending times of each interval to be calculated in a simple and quick way during a first step. The reported mean flux and spectral index (assuming the spectrum is a power-law distribution) in the interval are calculated via the standard LAT analysis during a second step. The absence of major caveats associated with this method has been established with Monte-Carlo simulations. We present the performance of this method in determining duty cycles as well as power-density spectra relative to the traditional fixed-binning method.
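As a hypothetical illustration of the constant-uncertainty idea (not the authors' likelihood-based LAT implementation), a greedy binner over a pre-binned Poisson count series could extend each time bin until its relative counting uncertainty falls below a target:

```python
def adaptive_bins(event_counts, target_rel_err=0.2):
    """
    Greedy adaptive binning for a Poisson counting experiment: extend
    each time bin until the relative uncertainty of its accumulated
    count, sqrt(N)/N, falls below the target, then start a new bin.
    Returns (start_index, end_index, count) tuples; a trailing
    incomplete bin is discarded.
    """
    bins = []
    start, current = 0, 0
    for i, c in enumerate(event_counts):
        current += c
        if current > 0 and current ** -0.5 <= target_rel_err:
            bins.append((start, i + 1, current))
            start, current = i + 1, 0
    return bins
```

With a 20% target, each bin needs at least 25 counts, so bright intervals produce short bins and faint intervals produce long ones, which is the qualitative behavior the abstract describes.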
Continuous-time interval model identification of blood glucose dynamics for type 1 diabetes
NASA Astrophysics Data System (ADS)
Kirchsteiger, Harald; Johansson, Rolf; Renard, Eric; del Re, Luigi
2014-07-01
While good physiological models of the glucose metabolism in type 1 diabetic patients are well known, their parameterisation is difficult. The high intra-patient variability observed is a further major obstacle. This holds for data-based models too, so that no good patient-specific models are available. Against this background, this paper proposes the use of interval models to cover the different metabolic conditions. The control-oriented models contain a carbohydrate and insulin sensitivity factor to be used for insulin bolus calculators directly. Available clinical measurements were sampled on an irregular schedule which prompts the use of continuous-time identification, also for the direct estimation of the clinically interpretable factors mentioned above. An identification method is derived and applied to real data from 28 diabetic patients. Model estimation was done on a clinical data-set, whereas validation results shown were done on an out-of-clinic, everyday life data-set. The results show that the interval model approach allows a much more regular estimation of the parameters and avoids physiologically incompatible parameter estimates.
An Element of Determinism in a Stochastic Flagellar Motor Switch
Xie, Li; Altindal, Tuba; Wu, Xiao-Lun
2015-01-01
Marine bacterium Vibrio alginolyticus uses a single polar flagellum to navigate in an aqueous environment. Similar to Escherichia coli cells, the polar flagellar motor has two states; when the motor is counter-clockwise, the cell swims forward and when the motor is clockwise, the cell swims backward. V. alginolyticus also incorporates a direction randomization step at the start of the forward swimming interval by flicking its flagellum. To gain an understanding on how the polar flagellar motor switch is regulated, distributions of the forward Δf and backward Δb intervals are investigated herein. We found that the steady-state probability density functions, P(Δf) and P(Δb), of freely swimming bacteria are strongly peaked at a finite time, suggesting that the motor switch is not Poissonian. The short-time inhibition is sufficiently strong and long lasting, i.e., several hundred milliseconds for both intervals, which is readily observed and characterized. Treating motor reversal dynamics as a first-passage problem, which results from conformation fluctuations of the motor switch, we calculated P(Δf) and P(Δb) and found good agreement with the measurements. PMID:26554590
Do physiotherapy staff record treatment time accurately? An observational study.
Bagley, Pam; Hudson, Mary; Green, John; Forster, Anne; Young, John
2009-09-01
To assess the reliability of duration of treatment time measured by physiotherapy staff in early-stage stroke patients. Comparison of physiotherapy staff's recording of treatment sessions and video recording. Rehabilitation stroke unit in a general hospital. Thirty-nine stroke patients without trunk control or who were unable to stand with an erect trunk without the support of two therapists recruited to a randomized trial evaluating the Oswestry Standing Frame. Twenty-six physiotherapy staff who were involved in patient treatment. Contemporaneous recording by physiotherapy staff of treatment time (in minutes) compared with video recording. Intraclass correlation with 95% confidence interval and the Bland and Altman method for assessing agreement by calculating the mean difference (standard deviation; 95% confidence interval), reliability coefficient and 95% limits of agreement for the differences between the measurements. The mean duration (standard deviation, SD) of treatment time recorded by physiotherapy staff was 32 (11) minutes compared with 25 (9) minutes as evidenced in the video recording. The mean difference (SD) was -6 (9) minutes (95% confidence interval (CI) -9 to -3). The reliability coefficient was 18 minutes and the 95% limits of agreement were -24 to 12 minutes. Intraclass correlation coefficient for agreement between the two methods was 0.50 (95% CI 0.12 to 0.73). Physiotherapy staff's recording of duration of treatment time was not reliable and was systematically greater than the video recording.
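The agreement statistics quoted above (mean difference, SD of differences, 95% limits of agreement) follow directly from the paired differences in the Bland and Altman method; a sketch (the sign convention, video minus staff, is an assumption):

```python
import math

def bland_altman(method_a, method_b):
    """Mean difference, SD of differences, and 95% limits of agreement."""
    diffs = [b - a for a, b in zip(method_a, method_b)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    return {
        "mean_diff": mean_diff,
        "sd": sd,
        "loa_lower": mean_diff - 1.96 * sd,   # 95% limits of agreement
        "loa_upper": mean_diff + 1.96 * sd,
    }
```

For staff-recorded minutes [30, 32, 28, 35] against video minutes [25, 26, 24, 27], the mean difference is -5.75 min, i.e., staff systematically record more time than the video shows, the direction of bias the study found.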
Calculation of new snow densities from sub-daily automated snow measurements
NASA Astrophysics Data System (ADS)
Helfricht, Kay; Hartl, Lea; Koch, Roland; Marty, Christoph; Lehning, Michael; Olefs, Marc
2017-04-01
In mountain regions there is an increasing demand for high-quality analysis, nowcasting and short-range forecasts of the spatial distribution of snowfall. Operational services, such as for avalanche warning, road maintenance and hydrology, as well as hydropower companies and ski resorts need reliable information on the depth of new snow (HN) and the corresponding water equivalent (HNW). However, the ratio of HNW to HN can vary from 1:3 to 1:30 because of the high variability of new snow density with respect to meteorological conditions. In the past, attempts were made to calculate new snow densities from meteorological parameters mainly using daily values of temperature and wind. Further complex statistical relationships have been used to calculate new snow densities on hourly to sub-hourly time intervals to drive multi-layer snow cover models. However, only a few long-term in-situ measurements of new snow density exist for sub-daily time intervals. Settling processes within the new snow due to loading and metamorphism need to be considered when computing new snow density. As the effect of these processes is more pronounced for long time intervals, a high temporal resolution of measurements is desirable. Within the pluSnow project, data of several automatic weather stations with simultaneous measurements of precipitation (pluviometers), snow water equivalent (SWE) using snow pillows and snow depth (HS) measurements using ultrasonic rangers were analysed. New snow densities were calculated for a set of data filtered on the basis of meteorological thresholds. The calculated new snow densities were compared to results from existing new snow density parameterizations. To account for effects of settling of the snow cover, a case study based on a multi-year data set using the snow cover model SNOWPACK at Weissfluhjoch was performed. Measured median values of hourly new snow densities at the different stations range from 54 to 83 kg m-3. 
This is considerably lower than a 1:10 approximation (i.e. 100 kg m-3), which is mainly based on daily values in the Alps. Variations in new snow density could not be explained in a satisfactory manner using meteorological data measured at the same location. Likewise, some of the tested parametrizations of new snow density, which primarily use air temperature as a proxy, result in median new snow densities close to the ones from automated measurements, but show only a low correlation between calculated and measured new snow densities. The case study on the influence of snow settling on HN resulted on average in an underestimation of HN by 17%, which corresponds to 2-3% of the cumulated HN from the previous 24 hours. Therefore, the mean hourly new snow densities may be overestimated by 14%. The analysis in this study is especially limited with respect to the meteorological influence on the HS measurement using ultrasonic rangers. Nevertheless, the reasonable mean values encourage calculating new snow densities from standard hydro-meteorological measurements using more precise observation devices such as optical snow depth sensors and more sensitive scales for SWE measurements also on sub-daily time-scales.
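The ratio at the heart of the analysis, new snow density as the increase in SWE divided by the increase in snow depth over an interval, can be sketched as follows. The 5 mm depth-change threshold is an assumed noise filter for illustration, not the project's actual meteorological filtering criteria:

```python
def new_snow_density(delta_swe_mm, delta_hs_m):
    """
    Density of new snow over one interval: increase in snow water
    equivalent (mm w.e. == kg m^-2) divided by the increase in snow
    depth (m). Returns kg m^-3, or None when the depth change is too
    small to be distinguished from sensor noise.
    """
    if delta_hs_m < 0.005:          # assumed 5 mm sensor-noise threshold
        return None
    return delta_swe_mm / delta_hs_m
```

For example, 1.2 mm w.e. accumulated in an hour over a 2 cm depth increase gives 60 kg m-3, within the 54-83 kg m-3 range of station medians reported above.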
Mo, Shiwei; Chow, Daniel H K
2018-05-19
Motor control, related to running performance and running-related injuries, is affected by the progression of fatigue during a prolonged run. Distance runners are usually recommended to train at or slightly above anaerobic threshold (AT) speed for improving performance. However, running at AT speed may result in accelerated fatigue. It is not clear how one adapts the running gait pattern during a prolonged run at AT speed and whether there are differences between runners with different training experience. This study compared characteristics of stride-to-stride variability and complexity during a prolonged run at AT speed between novice runners (NR) and experienced runners (ER). Both NR (n = 17) and ER (n = 17) performed a treadmill run for 31 min at their AT speed. Stride interval dynamics was obtained throughout the run with the middle 30 min equally divided into six time intervals (denoted as T1, T2, T3, T4, T5 and T6). Mean, coefficient of variation (CV) and scaling exponent alpha of stride intervals were calculated for each interval of each group. This study revealed that the mean stride interval significantly increased with running time in a non-linear trend (p < 0.001). The stride interval variability (CV) remained relatively constant for NR (p = 0.22) and changed nonlinearly for ER (p = 0.023) throughout the run. Alpha was significantly different between groups at T2, T5 and T6, and changed nonlinearly with running time for both groups with slight differences. These findings provide insights into how the motor control system adapts to the progression of fatigue and evidence that long-term training enhances motor control. Although both ER and NR could regulate gait complexity to maintain AT speed throughout the prolonged run, ER also regulated stride interval variability to achieve the goal. Copyright © 2018. Published by Elsevier B.V.
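The scaling exponent alpha of a stride-interval series is typically obtained by detrended fluctuation analysis (DFA); a minimal first-order DFA sketch (the box sizes and fit range below are illustrative choices, not necessarily the study's):

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    """
    First-order detrended fluctuation analysis: integrate the
    mean-removed series, linearly detrend within boxes of size n,
    and fit the slope of log F(n) vs log n (the exponent alpha).
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())          # integrated profile
    flucts = []
    for n in scales:
        n_boxes = y.size // n
        f2 = []
        t = np.arange(n)
        for i in range(n_boxes):
            seg = y[i * n:(i + 1) * n]
            coef = np.polyfit(t, seg, 1)             # linear detrend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope
```

As a sanity check, uncorrelated white noise yields alpha near 0.5, while healthy stride-interval series are typically reported with alpha well above 0.5 (long-range correlation).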
Time Analysis of Building Dynamic Response Under Seismic Action. Part 2: Example of Calculation
NASA Astrophysics Data System (ADS)
Ufimtcev, E. M.
2017-11-01
The second part of the article illustrates the use of the time analysis method (TAM) by the example of the calculation of a 3-storey building, the design dynamic model (DDM) of which is adopted in the form of a flat vertical cantilever rod with 3 horizontal degrees of freedom associated with floor and coverage levels. The parameters of natural oscillations (frequencies and modes) are determined, and the results of the calculation of the elastic forced oscillations of the building's DDM are presented as oscillograms of the response parameters over the time interval t ∈ [0; 131.25] s. The obtained results are analyzed on the basis of the computed values of the residual of the DDM motion equation and a comparison of the results calculated with the numerical approach (FEM) against the normative method set out in SP 14.13330.2014 "Construction in Seismic Regions". The analysis testifies to the correctness of the computational model as well as the high accuracy of the results obtained. In conclusion, it is noted that the use of the TAM in design will improve the strength of buildings and structures subject to seismic influences.
Statistical properties of several models of fractional random point processes
NASA Astrophysics Data System (ADS)
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the point of view of counting and time-interval statistics. Based on the criterion of the reduced variance, such processes are seen to exhibit nonclassical properties. The conditions under which these processes can be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
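The reduced-variance criterion mentioned above is commonly expressed through the Fano factor Var(N)/E[N] of the counts N in fixed windows: a Poisson process gives exactly 1, and sub-Poissonian ("nonclassical") statistics give values below 1. A minimal sketch (the simulated homogeneous Poisson process is illustrative only, not one of the fractional models analyzed in the paper):

```python
import random

def window_counts(event_times, window, t_max):
    """Count events in consecutive windows [0, w), [w, 2w), ..."""
    n_windows = int(t_max // window)
    counts = [0] * n_windows
    for t in event_times:
        k = int(t // window)
        if k < n_windows:
            counts[k] += 1
    return counts

def fano_factor(counts):
    """Reduced variance of the counting distribution: Var(N) / E[N]."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean

# Homogeneous Poisson process built from exponential interarrival times.
random.seed(1)
rate, t_max = 50.0, 2000.0
t, events = 0.0, []
while t < t_max:
    t += random.expovariate(rate)
    events.append(t)

f = fano_factor(window_counts(events, window=1.0, t_max=t_max))
# For a Poisson process the Fano factor is close to 1; sub-Poissonian
# (nonclassical) processes would fall below 1.
```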
Relative-Error-Covariance Algorithms
NASA Technical Reports Server (NTRS)
Bierman, Gerald J.; Wolff, Peter J.
1991-01-01
Two algorithms compute the error covariance of the difference between optimal estimates of the state of a discrete linear system, where the estimates are based on data acquired during overlapping or disjoint intervals. This provides a quantitative measure of the mutual consistency or inconsistency of the state estimates. The relative-error-covariance concept is applied to determine the degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct a real-time test of the consistency of state estimates based upon recently acquired data.
The orbital evolution of the Aten asteroids over 11,550 years (9300 BC to 2250 AD)
NASA Astrophysics Data System (ADS)
Zausaev, A. F.; Pushkarev, A. N.
1991-04-01
The orbital evolution of five Aten asteroids was followed by the Everhart method over the time interval from 9300 BC to 2250 AD. The closest encounters with the large planets during the evolution are calculated. Four out of five asteroids exhibit stable resonances with Earth and Venus over the period from 9300 BC to 2250 AD.
Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle transform (LST)-based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to estimation performance comparable to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
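The transform being updated is the Fourier sum evaluated at the actual (nonuniform) sample times. The recursive update is the paper's contribution and is not reproduced here, but a direct, non-recursive evaluation of the same sum (cost N per frequency, with no interpolation to a uniform grid) can be sketched as follows; the signal and frequencies are illustrative:

```python
import cmath
import math
import random

def nonuniform_dft(times, values, freqs):
    """Direct Fourier sum X(f) = sum_n x_n * exp(-2*pi*i*f*t_n).
    Works for arbitrarily (nonuniformly) spaced sample times t_n."""
    return [sum(x * cmath.exp(-2j * math.pi * f * t)
                for t, x in zip(times, values))
            for f in freqs]

# Irregularly spaced samples of a 1.5 Hz cosine (jittered "beat" times).
random.seed(0)
times = [0.05 * n + random.uniform(-0.01, 0.01) for n in range(400)]
values = [math.cos(2 * math.pi * 1.5 * t) for t in times]

spectrum = nonuniform_dft(times, values, freqs=[0.5, 1.5, 3.0])
mags = [abs(z) for z in spectrum]
# The magnitude at the true frequency (1.5 Hz) dominates the off-peak bins.
```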
Orbital time scale and new C-isotope record for Cenomanian-Turonian boundary stratotype
NASA Astrophysics Data System (ADS)
Sageman, Bradley B.; Meyers, Stephen R.; Arthur, Michael A.
2006-02-01
Previous time scales for the Cenomanian-Turonian boundary (CTB) interval containing Oceanic Anoxic Event II (OAE II) vary by a factor of three. In this paper we present a new orbital time scale for the CTB stratotype established independently of radiometric, biostratigraphic, or geochemical data sets, update revisions of CTB biostratigraphic zonation, and provide a new detailed carbon isotopic record for the CTB study interval. The orbital time scale allows an independent assessment of basal biozone ages relative to the new CTB date of 93.55 Ma (GTS04). The δ13Corg data document the abrupt onset of OAE II, significant variability in δ13Corg values, and values enriched to almost -22‰. These new data underscore the difficulty in defining OAE II termination. Using the new isotope curve and time scale, estimates of OAE II duration can be determined and exported to other sites based on integration of well-established chemostratigraphic and biostratigraphic datums. The new data will allow more accurate calculations of biogeochemical and paleobiologic rates across the CTB.
NASA Astrophysics Data System (ADS)
Shchinnikov, P. A.; Safronov, A. V.
2014-12-01
General principles of a procedure for matching the energy balances of thermal power plants (TPPs) are stated; its use enhances the accuracy of information-measuring systems (IMSs) in calculations of performance characteristics (PCs). To do this, the values of measured and calculated variables may be varied within intervals determined by measurement errors and regulations. An example of matching the energy balances of a thermal power plant with a T-180 turbine is given. The proposed procedure reduces the divergence of the balance equations by a factor of 3-4. It is also shown that the equipment operation mode affects the profit deficiency. Dependences of the divergence of the energy balances on the deviation of input parameters, together with calculated data for the fuel economy before and after matching the energy balances, are presented.
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
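The Monte Carlo step described above can be sketched with a toy function standing in for the ground-water flow model; the parameter ranges, the uniform sampling of them, and the Gaussian error term are all illustrative assumptions, not the paper's specification:

```python
import random

def prediction_interval(model, param_ranges, noise_sd, n_draws, level=0.95):
    """Sample parameters from their extreme ranges, add random error in the
    dependent variable, and read the interval off the empirical quantiles."""
    draws = []
    for _ in range(n_draws):
        params = [random.uniform(lo, hi) for lo, hi in param_ranges]
        draws.append(model(params) + random.gauss(0.0, noise_sd))
    draws.sort()
    lo_idx = int((1 - level) / 2 * n_draws)
    hi_idx = int((1 + level) / 2 * n_draws) - 1
    return draws[lo_idx], draws[hi_idx]

def toy_model(p):
    """Hypothetical model output (e.g. a predicted head) from two parameters."""
    return 10.0 * p[0] + 2.0 * p[1]

random.seed(42)
lo, hi = prediction_interval(toy_model, [(0.5, 1.5), (1.0, 3.0)],
                             noise_sd=0.5, n_draws=20000)
# The 95% prediction interval brackets the central output of the toy model.
```

Dropping the noise term (noise_sd = 0) turns the same routine into a confidence interval on the model output due to parameter uncertainty alone, mirroring the distinction drawn in the abstract.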
Daluwatte, Chathuri; Vicente, Jose; Galeotti, Loriano; Johannesen, Lars; Strauss, David G; Scully, Christopher G
Performance of ECG beat detectors is traditionally assessed over long intervals (e.g., 30 min), but only incorrect detections within a short interval (e.g., 10 s) can cause incorrect (i.e., missed + false) heart rate limit alarms (tachycardia and bradycardia). We propose a novel performance metric based on the distribution of incorrect beat detections over a short interval and assess its relationship with incorrect heart rate limit alarm rates. Six ECG beat detectors were assessed using performance metrics over a long interval (sensitivity and positive predictive value over 30 min) and a short interval (area under the empirical cumulative distribution function (AUecdf) for short-interval (i.e., 10 s) sensitivity and positive predictive value) on two ECG databases. False heart rate limit and asystole alarm rates calculated using a third ECG database were then correlated (Spearman's rank correlation) with each calculated performance metric. False alarm rates correlated with sensitivity calculated over the long interval (i.e., 30 min) (ρ = -0.8, p < 0.05) and with AUecdf for sensitivity (ρ = 0.9, p < 0.05) in all assessed ECG databases. Sensitivity over 30 min grouped the two detectors with the lowest false alarm rates, while AUecdf for sensitivity provided further information that also identified the two beat detectors with the highest false alarm rates, which could not be separated using sensitivity over 30 min. Short-interval performance metrics can provide insight into the potential of a beat detector to generate incorrect heart rate limit alarms. Published by Elsevier Inc.
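The exact construction of AUecdf is specified in the paper; one plausible reading (an illustrative sketch, not the authors' code) is the area under the empirical CDF of the per-10 s sensitivities, so that windows with bursts of missed beats inflate the metric. For values confined to [0, 1] this area collapses to one minus the mean:

```python
def auecdf(values):
    """Area under the empirical CDF of values assumed to lie in [0, 1].
    For X in [0, 1]: integral_0^1 F(x) dx = E[1 - X] = 1 - mean(X)."""
    assert all(0.0 <= v <= 1.0 for v in values)
    return 1.0 - sum(values) / len(values)

# Per-10-s-window sensitivities for two hypothetical beat detectors.
good = [1.0, 1.0, 0.95, 1.0, 0.9, 1.0]   # occasional single missed beats
bad = [1.0, 0.2, 1.0, 0.3, 1.0, 0.25]    # bursts of missed beats
# The detector with bursts of misses gets the larger AUecdf, consistent
# with the reported positive correlation with false alarm rates.
```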
Mannion, Melissa L; Xie, Fenglong; Baddley, John; Chen, Lang; Curtis, Jeffrey R; Saag, Kenneth; Zhang, Jie; Beukelman, Timothy
2016-09-05
To investigate the utilization of health care services before and after transfer from pediatric to adult rheumatology care in clinical practice. Using US commercial claims data from January 2005 through August 2012, we identified individuals with a JIA diagnosis code from a pediatric rheumatologist followed by any diagnosis code from an adult rheumatologist. Individuals had 6 months of observable time before the last pediatric visit and 6 months after the first adult visit. Medication, emergency room, and physical therapy use and diagnosis codes were compared between the pediatric and adult intervals using McNemar's test. The proportion of days covered (PDC) by TNFi for the time between the last pediatric and first adult visit was calculated. We identified 58 individuals with JIA who transferred from pediatric to adult rheumatology care after the age of 14. The median age at the last pediatric rheumatology visit was 18.1 years and the median transfer interval was 195 days. 29% of patients received NSAIDs in the adult interval compared to 43% in the pediatric interval (p = 0.06). In the pediatric interval, 71% received a JRA and 0% received an RA physician diagnosis code, compared to 28% and 45%, respectively, in the adult interval. The median PDC for patients receiving a TNFi was 0.75 during the transfer interval. Individuals with JIA who transferred to adult care were more likely to receive a diagnosis of RA instead of JRA and were less likely to receive NSAIDs, but had no significant immediate changes to other medication use.
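The proportion of days covered is the fraction of days in an observation window on which dispensed drug supply was available. A minimal sketch with illustrative fill data (the day numbering and fills are hypothetical, not from the study):

```python
def proportion_of_days_covered(fills, window_start, window_end):
    """fills: list of (first_covered_day, days_supply) tuples, days as ints.
    PDC = (# distinct days in the window covered by any fill) / window length."""
    covered = set()
    for start, supply in fills:
        for day in range(start, start + supply):
            if window_start <= day < window_end:
                covered.add(day)
    return len(covered) / (window_end - window_start)

# Hypothetical 200-day transfer interval with two 75-day TNFi fills:
# 150 of 200 days covered, so PDC = 0.75.
pdc = proportion_of_days_covered([(0, 75), (100, 75)], 0, 200)
```

Using a set of covered days (rather than summing days supplied) means overlapping fills are not double-counted, which is the usual convention for PDC as opposed to the medication possession ratio.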
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
ERIC Educational Resources Information Center
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Practicing urologist learning laparoscopy: no short cut to short cuts!
Mahmud, Syed Mamun; Mishra, Shashikant; Desai, Mahesh Ramanlal
2011-05-01
To emphasize the importance of regular exercise in a dry lab during the initial phase of learning laparoscopic surgery by a practicing urologist. The study was performed at the dry lab of the Jayaramdas Patel Academic Centre (JPAC), attached to Muljibhai Patel Urological Hospital, Nadiad, India. The study is based on 30 sets of exercises of four standard tasks used to learn hand-eye coordination for laparoscopic surgery. All sets were performed by a single participant over a period of 19 days, and the exercise record was retrospectively analyzed. The participant had limited exposure of one year in a low-volume laparoscopy center. The correlation between exercise number and task completion time (TCT) was calculated by Pearson's correlation coefficient, and its significance was assessed by Student's paired t test. The current study describes 30 exercises of 4 standard tasks for hand-eye coordination. Although the study was completed in 19 days, there were two rest intervals which point to the objective of this study. The first interval was of 3 days and occurred after the 4th exercise. At the 5th exercise the task completion time rose above that of the 2nd exercise. This regression worsened further at the 6th exercise, which followed an interval of 2 days: here the TCT went up to almost that of the 1st exercise (1050 vs 1054 seconds). Mean times for IT1, IT2, IT3, IT4 and the TCT over all exercises were 24.2 +/- 3.7, 121.9 +/- 54.9, 233 +/- 73.5, 199 +/- 55.1 and 582.5 +/- 174.8 seconds, respectively. A significant correlation was noticed between the number of exercises performed and the improvement in time taken for the individual tasks (IT2 to IT4) and the TCT; however, there was no significant impact on Task 1. Regular dry-lab exercise improves hand-eye coordination and psychomotor skills; dedicated continuous exercising has a significant impact in reducing TCT.
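The correlation reported above (exercise number vs. task completion time) is a plain Pearson coefficient; a self-contained sketch with made-up practice-curve numbers (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation: covariance normalized by both standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical practice curve: TCT generally falling with exercise number,
# with one regression after a rest interval (as in the study's narrative).
exercise = list(range(1, 11))
tct = [1050, 980, 900, 870, 920, 1040, 820, 760, 700, 650]
r = pearson_r(exercise, tct)
# r is strongly negative: more practice, shorter completion time.
```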
NASA Technical Reports Server (NTRS)
Baxley, Brian; Swieringa, Kurt; Berckefeldt, Rick; Boyle, Dan
2017-01-01
NASA's first Air Traffic Management Technology Demonstration (ATD-1) subproject successfully completed a 19-day flight test of an Interval Management (IM) avionics prototype. The prototype was built to IM standards, integrated into two test aircraft, and then flown in real-world conditions to determine whether the goals of improving aircraft efficiency and airport throughput during high-density arrival operations could be met. The ATD-1 concept of operations integrates advanced arrival scheduling, controller decision support tools, and the IM avionics to enable multiple time-based arrival streams into high-density terminal airspace. IM contributes by calculating airspeeds that enable an aircraft to achieve a spacing interval behind the preceding aircraft. The IM avionics uses its own data (route of flight, position, etc.) and Automatic Dependent Surveillance-Broadcast (ADS-B) state data from the Target aircraft to calculate this airspeed. The flight test demonstrated that the IM avionics prototype met the spacing accuracy design goal for three of the four IM operation types tested. The primary issue requiring attention in future IM work is the high rate of IM speed commands and speed reversals. Overall, the IM avionics prototype showed significant promise in contributing to the goals of improving aircraft efficiency and airport throughput.
Correlation of lithologic and sonic logs from the COST No. B-2 well with seismic reflection data
King, K.C.
1979-01-01
The purpose of this study was to correlate events recorded on seismic records with changes in lithology recorded from sample descriptions from the Continental Offshore Stratigraphic Test (COST) No. B-2 well. The well is located on the U.S. mid-Atlantic Outer Continental Shelf about 146 km east of Atlantic City, N.J. (see location map). Lithologic data are summarized from the sample descriptions of Smith and others (1976). Sonic travel times were read at 0.15 m intervals in the well using a long-space sonic logging tool. Interval velocities, reflection coefficients and a synthetic seismogram were calculated from the sonic log.
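The quantities computed from the sonic log follow directly from the transit times; a sketch under a constant-density approximation for the reflection coefficients (the actual study would also incorporate density, and the slowness readings below are illustrative):

```python
def interval_velocities(transit_times_us_per_m):
    """Sonic log records slowness; velocity (m/s) = 1e6 / (microseconds per metre)."""
    return [1e6 / dt for dt in transit_times_us_per_m]

def reflection_coefficients(velocities):
    """Constant-density approximation of the normal-incidence reflection
    coefficient at each interface: R = (v2 - v1) / (v2 + v1)."""
    return [(v2 - v1) / (v2 + v1)
            for v1, v2 in zip(velocities, velocities[1:])]

# Illustrative slowness readings (microseconds per metre) down the well.
slowness = [500.0, 400.0, 250.0, 200.0]
vels = interval_velocities(slowness)    # 2000, 2500, 4000, 5000 m/s
rcs = reflection_coefficients(vels)
# A synthetic seismogram would be this reflectivity series convolved
# with a source wavelet.
```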
Kalim, Shahid; Nazir, Shaista; Khan, Zia Ullah
2013-01-01
Protocols based on newer high-sensitivity troponin T (hsTropT) assays can rule in a suspected acute myocardial infarction (AMI) as early as 3 hours. We conducted this study to audit adherence to our Trust's newly introduced AMI diagnostic protocol based on paired hsTropT testing at 0 and 3 hours. We retrospectively reviewed the data of all patients who had an hsTropT test done between 1st and 7th May 2012. Patients' demographics, the use of single or paired samples, the time interval between paired samples, presenting symptoms and ECG findings were noted, and their means, medians, standard deviations and proportions were calculated. A total of 66 patients had an hsTropT test done during this period. Mean age was 63.30 +/- 17.46 years and 38 (57.57%) were males. Twenty-four (36.36%) patients had only a single hsTropT sample taken, rather than the protocol-recommended paired samples. Among the 42 (63.63%) patients with paired samples, the mean time interval was 4.41 +/- 5.7 hours. Contrary to the recommendations, 15 (22.73%) had a very long, whereas 2 (3.03%) had a very short, time interval between the two samples. A subgroup analysis of patients with single samples found only 2 (3.03%) patients with ST-segment elevation, for whom single testing was appropriate. Our study confirmed that in a large number of patients the protocol for paired sampling, or the recommended time interval of 3 hours between the 2 samples, was not being followed.
Saber, Ali; Tafazzoli, Milad; Mortazavian, Soroosh; James, David E
2018-02-01
Two common wetland plants, pampas grass (Cortaderia selloana) and lucky bamboo (Dracaena sanderiana), were used in hydroponic cultivation systems for the treatment of simulated high-sulfate wastewaters. In initial experiments, plants at pH 7.0 removed sulfate more efficiently than under the same experimental conditions at pH 6.0. Results at sulfate concentrations of 50, 200, 300, 600, 900, 1200, 1500 and 3000 mg/L during three consecutive 7-day treatment periods with 1-day rest intervals showed decreasing trends of both removal efficiencies and uptake rates with increasing sulfate concentration from the first to the second to the third 7-day treatment period. Removed sulfate mass per unit dry plant mass, calculated after 23 days, showed the highest removal capacity at 600 mg/L sulfate for both plants. A Langmuir-type isotherm best described the sulfate uptake capacity of both plants. Kinetic studies showed that, compared to pseudo-first-order kinetics, pseudo-second-order kinetic models slightly better described sulfate uptake rates by both plants. The Elovich kinetic model showed faster rates of attaining equilibrium at low sulfate concentrations for both plants. The dimensionless Elovich model showed that about 80% of sulfate uptake occurred during the first four days' contact time. Application of three 4-day contact times with 2-day rest intervals at high sulfate concentrations resulted in slightly higher uptakes compared to three 7-day contact times with 1-day rest intervals, indicating that pilot-plant-scale treatment systems could be sized with shorter contact times and longer rest intervals. Copyright © 2017 Elsevier Ltd. All rights reserved.
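The pseudo-second-order model referenced above has the closed form q(t) = q_e^2 k t / (1 + q_e k t) and is usually fitted through the linearization t/q = 1/(k q_e^2) + t/q_e. A sketch with synthetic data (the parameter values are illustrative, not those of the study):

```python
def fit_pseudo_second_order(times, uptakes):
    """Least-squares line through (t, t/q); slope = 1/q_e, intercept = 1/(k*q_e^2)."""
    xs = times
    ys = [t / q for t, q in zip(times, uptakes)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    q_e = 1.0 / slope
    k = slope ** 2 / intercept
    return q_e, k

# Synthetic uptake curve generated from assumed q_e = 12 mg/g, k = 0.05 g/(mg*day).
q_e_true, k_true = 12.0, 0.05
times = [1.0, 2.0, 4.0, 7.0, 10.0, 14.0]
uptakes = [q_e_true ** 2 * k_true * t / (1 + q_e_true * k_true * t) for t in times]
q_e_fit, k_fit = fit_pseudo_second_order(times, uptakes)
# On noiseless synthetic data the fit recovers q_e and k exactly.
```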
Feasibility of noninvasive fetal electrocardiographic monitoring in a clinical setting.
Arya, Bhawna; Govindan, Rathinaswamy; Krishnan, Anita; Duplessis, Adre; Donofrio, Mary T
2015-06-01
Cardiac rhythm is an essential component of fetal cardiac evaluation. The Monica AN24 is a fetal heart rate monitor that may provide a quick, inexpensive modality for obtaining a noninvasive fetal electrocardiogram (fECG) in a clinical setting. The fECG device has the ability to acquire fECG signals and allow calculation of fetal cardiac time intervals between 16- and 42-week gestational age (GA). We aimed to demonstrate the feasibility of fECG acquisition in a busy fetal cardiology clinic using the Monica fetal heart rate monitor. This is a prospective observational pilot study of fECG acquired from fetuses referred for fetal echocardiography. Recordings were performed for 5-15 min. Maternal signals were attenuated and fECG averaged. fECG and fetal cardiac time intervals (PR, QRS, RR, and QT) were evaluated by two cardiologists independently and inter-observer reliability was assessed using intraclass coefficient (ICC). Sixty fECGs were collected from 50 mothers (mean GA 28.1 ± 6.1). Adequate signal-averaged waveforms were obtained in 20 studies with 259 cardiac cycles. Waveforms could not be obtained between 26 and 30 weeks. Fetal cardiac time intervals were measured and were reproducible for PR (ICC = 0.89; CI 0.77-0.94), QRS (ICC = 0.79; CI 0.51-0.91), and RR (ICC = 0.77; CI 0.53-0.88). QT ICC was poor due to suboptimal T-wave tracings. Acquisition of fECG and measurement of fetal cardiac time intervals is feasible in a clinical setting between 19- and 42-week GA, though tracings are difficult to obtain, especially between 26 and 30 weeks. There was high reliability in fetal cardiac time intervals measurements, except for QT. The device may be useful for assessing atrioventricular/intraventricular conduction in fetuses from 20 to 26 and >30 weeks. Techniques to improve signal acquisition, namely T-wave amplification, are ongoing.
Tillett, William; Charlton, Rachel; Nightingale, Alison; Snowball, Julia; Green, Amelia; Smith, Catherine; Shaddick, Gavin; McHugh, Neil
2017-12-01
To describe the time interval between the onset of psoriasis and PsA in the UK primary care setting and compare with a large, well-classified secondary care cohort. Patients with PsA and/or psoriasis were identified in the UK Clinical Practice Research Datalink (CPRD). The secondary care cohort comprised patients from the Bath PsA longitudinal observational cohort study. For incident PsA patients in the CPRD who also had a record of psoriasis, the time interval between PsA diagnosis and first psoriasis record was calculated. Comparisons were made with the time interval between diagnoses in the Bath cohort. There were 5272 eligible PsA patients in the CPRD and 815 in the Bath cohort. In both cohorts, the majority of patients (82.3 and 61.3%, respectively) had psoriasis before their PsA diagnosis or within the same calendar year (10.5 and 23.8%), with only a minority receiving their PsA diagnosis first (7.1 and 14.8%). Excluding those who presented with arthritis before psoriasis, the median time between diagnoses was 8 years [interquartile range (IQR) 2-15] in the CPRD and 7 years (IQR 0-20) in the Bath cohort. In the CPRD, 60.1 and 75.1% received their PsA diagnosis within 10 and 15 years of their psoriasis diagnosis, respectively; this was comparable with 57.2 and 67.7% in the Bath cohort. A similar distribution for the time interval between psoriasis and arthritis was observed in the CPRD and secondary care cohort. These data can inform screening strategies and support the validity of data from each cohort. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com
The 1983 tail-era series. Volume 1: ISEE 3 plasma
NASA Technical Reports Server (NTRS)
Fairfield, D. H.; Phillips, J. L.
1991-01-01
Observations from the ISEE 3 electron analyzer are presented in plots. Electrons were measured in 15 continuous energy levels between 8.5 and 1140 eV during individual 3-sec spacecraft spins. Times associated with each data point are the beginning time of the 3 sec data collection interval. Moments calculated from the measured distribution function are shown as density, temperature, velocity, and velocity azimuthal angle. Spacecraft ephemeris is shown at the bottom in GSE and GSM coordinates in units of Earth radii, with vertical ticks on the time axis corresponding to the printed positions.
Parameter Transient Behavior Analysis on Fault Tolerant Control System
NASA Technical Reports Server (NTRS)
Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob
2003-01-01
In a fault tolerant control (FTC) system, a parameter-varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates the analysis of an FTC system based on estimated fault parameter transient behavior, which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of the fault detection time and the exponential decay rate of the Lyapunov function.
Depletion of mesospheric sodium during extended period of pulsating aurora
NASA Astrophysics Data System (ADS)
Takahashi, T.; Hosokawa, K.; Nozawa, S.; Tsuda, T. T.; Ogawa, Y.; Tsutsumi, M.; Hiraki, Y.; Fujiwara, H.; Kawahara, T. D.; Saito, N.; Wada, S.; Kawabata, T.; Hall, C.
2017-01-01
We quantitatively evaluated the Na density depletion due to charge transfer reactions between Na atoms and molecular ions produced by high-energy electron precipitation during a pulsating aurora (PsA). An extended period of PsA was captured by an all-sky camera at the European Incoherent Scatter (EISCAT) radar Tromsø site (69.6°N, 19.2°E) during a 2 h interval from 00:00 to 02:00 UT on 25 January 2012. During this period, using the EISCAT very high frequency (VHF) radar, we detected three intervals of intense ionization below 100 km that were probably caused by precipitation of high-energy electrons during the PsA. In these intervals, the sodium lidar at Tromsø observed characteristic depletions of Na density at altitudes between 97 and 100 km. These Na density depletions lasted for 8 min and represented 5-8% of the background Na layer. To examine the cause of this depletion, we modeled the depletion rate based on charge transfer reactions with NO+ and O2+, while varying the R value, defined as the ratio of NO+ to O2+ densities, from 1 to 10. The correlation coefficients between the observed and modeled Na density depletions, calculated with the typical value R = 3 for time intervals T1, T2, and T3, were 0.66, 0.80, and 0.67, respectively. The observed Na density depletion rates fall within the range of modeled depletion rates calculated with R from 1 to 10. This suggests that charge transfer reactions triggered by auroral impact ionization at low altitudes are the predominant process responsible for Na density depletion during PsA intervals.
NASA Astrophysics Data System (ADS)
Nakamura, Yuki; Ashi, Juichiro; Morita, Sumito
2016-04-01
Clarifying the timing and scale of past submarine landslides is important for understanding their formation processes. The study area is part of the continental slope of the Japan Trench, where a number of large-scale submarine landslide (slump) deposits have been identified in Pliocene and Quaternary formations by analysing METI's 3D seismic data "Sanrikuoki 3D" off Shimokita Peninsula (Morita et al., 2011). Among the structural features are swarms of parallel dikes, which are likely dewatering paths formed during the slumping deformation; slip directions are basically perpendicular to the parallel dikes, so the dikes are a good indicator for estimating slip directions. The slip direction of each slide was determined on a one-kilometre grid in the 40 km x 20 km survey area, and the dominant slip direction varies from the Pliocene to the Quaternary. The parallel dike structure also allows slump deposits to be distinguished from normal deposits on time-slice images. By tracing the outline of the slump deposits at each depth, we identified the general morphology of the overall slump deposits and calculated the volume of the extracted slump deposits so as to estimate the scale of each event. We investigated the temporal and spatial variation of the depositional pattern of the slump deposits. Calculating the recurrence intervals of the slumps, some periodicity is recognized; in particular, large slumps do not occur in succession. Additionally, examining the relationship between cumulative volume and recurrence interval, a correlation is observed in both the Pliocene and the Quaternary. Key words: submarine landslides, 3D seismic data, Shimokita Peninsula
Application of point kinetics equations to the design of a reactivity meter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binney, S.E.; Bakir, A.J.M.
1988-01-01
The time-dependent reactivity of a nuclear reactor is one of the most important parameters describing the state of the reactor. Although several different techniques exist to measure reactivity, only the kinetic method is described here. The paper illustrates the measured reactor power and calculated reactivity for a 70-cent step change in reactivity. These data were taken at 1-s time intervals. It is seen that the reactivity, initially at zero, rises rapidly to a predetermined value (determined by the reactivity change induced in the system) and then returns to zero as the reactor is reestablished in a critical state by insertion of another control rod. It is concluded that the method of Tuttle has been adapted to produce a reliable, on-line calculation of reactivity from a time-dependent reactor power signal.
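The kinetic (inverse point-kinetics) method infers reactivity from the measured power history by reconstructing the delayed-neutron precursor concentration. A one-delayed-group sketch follows; a real reactivity meter uses six delayed groups and the Tuttle delayed-neutron data, and all parameter values here are illustrative:

```python
beta, lam, Lam = 0.0065, 0.08, 1e-4  # delayed fraction, precursor decay (1/s), generation time (s)
dt, steps = 1e-4, 10000
rho_true = 0.3 * beta                # 30-cent positive step inserted at t = 0

# Forward point-kinetics simulation (stands in for the measured power signal).
n = [1.0]
C = beta * n[0] / (Lam * lam)        # critical-equilibrium precursor level
for _ in range(steps):
    dn = ((rho_true - beta) / Lam) * n[-1] + lam * C
    dC = (beta / Lam) * n[-1] - lam * C
    n.append(n[-1] + dt * dn)
    C += dt * dC

# Inverse kinetics: rebuild the precursors from the power history alone,
# then solve the point-kinetics equation for reactivity at each step.
C_est = beta * n[0] / (Lam * lam)
rho_est = []
for k in range(steps):
    rho = beta + Lam * (n[k + 1] - n[k]) / (dt * n[k]) - Lam * lam * C_est / n[k]
    rho_est.append(rho)
    C_est += dt * ((beta / Lam) * n[k] - lam * C_est)

# rho_est recovers the inserted step (30 cents) throughout the transient.
```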
Dead time corrections for in-beam γ-spectroscopy measurements
NASA Astrophysics Data System (ADS)
Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.
2017-08-01
Relatively high counting rates were registered in a proton inelastic scattering experiment on 16O and 28Si using HPGe detectors, performed at the Tandem facility of IFIN-HH, Bucharest. Consequently, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the true counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms: the arrival time interval between incoming events (Δt) obeys an exponential distribution with a single parameter, the rate of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction that completely avoids the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and those obtained using standard (phenomenological) dead time models, and we show how these results were used to correct our experimental cross sections.
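The statistical reformulation used in the paper is not reproduced here, but the standard nonparalyzable model it is compared against is a one-liner: with a fixed dead time τ per recorded event, a measured rate m implies a true rate m/(1 - mτ):

```python
def true_rate_nonparalyzable(measured_rate, dead_time):
    """Correct a measured counting rate for a fixed (nonparalyzable) dead time.
    measured_rate in counts/s, dead_time in s; requires measured_rate*dead_time < 1."""
    loss = measured_rate * dead_time
    if loss >= 1.0:
        raise ValueError("measured rate saturates the detector")
    return measured_rate / (1.0 - loss)

# 100 kcps measured with a 1 microsecond dead time: 10% of real time is dead,
# so the true rate is about 111 kcps (illustrative numbers).
n_true = true_rate_nonparalyzable(1.0e5, 1.0e-6)
```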
Detrended fluctuation analysis of non-stationary cardiac beat-to-beat interval of sick infants
NASA Astrophysics Data System (ADS)
Govindan, Rathinaswamy B.; Massaro, An N.; Al-Shargabi, Tareq; Niforatos Andescavage, Nickie; Chang, Taeun; Glass, Penny; du Plessis, Adre J.
2014-11-01
We performed detrended fluctuation analysis (DFA) of cardiac beat-to-beat intervals (RRis) collected from sick newborn infants over 1-4 day periods. We calculated four different metrics from the DFA fluctuation function: the DFA exponents αL (>40 beats, up to one-fourth of the record length) and αS (15-30 beats), the root-mean-square (RMS) fluctuation on a short time scale (20-50 beats), and the RMS fluctuation on a long time scale (110-150 beats). Except for αL, all metrics clearly distinguished two groups of newborn infants (favourable vs. adverse outcome) with well-characterized outcomes. However, the RMS fluctuations distinguished the two groups more consistently over time than αS, and distinguished the RRi of the two groups earlier than the DFA exponent. In all three measures, the favourable outcome group displayed higher values, indicating a higher magnitude of (auto-)correlation and variability, and thus normal physiology, compared to the adverse outcome group.
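A compact version of the DFA fluctuation function and exponent used above can be sketched as follows (linear detrending in non-overlapping boxes; an illustrative implementation, not the authors' code):

```python
import math
import random

def dfa_alpha(series, scales):
    """Detrended fluctuation analysis: integrate the mean-removed series,
    detrend linearly in non-overlapping boxes of each size s, and return
    the slope of log F(s) vs log s."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:
        acc += x - mean
        profile.append(acc)

    def rms_fluctuation(s):
        sq_sum, n_boxes = 0.0, len(profile) // s
        for b in range(n_boxes):
            seg = profile[b * s:(b + 1) * s]
            xs = range(s)
            mx, my = (s - 1) / 2.0, sum(seg) / s
            slope = sum((x - mx) * (y - my) for x, y in zip(xs, seg)) \
                / sum((x - mx) ** 2 for x in xs)
            icpt = my - slope * mx
            sq_sum += sum((y - (icpt + slope * x)) ** 2 for x, y in zip(xs, seg))
        return math.sqrt(sq_sum / (n_boxes * s))

    logs = [(math.log(s), math.log(rms_fluctuation(s))) for s in scales]
    mx = sum(l for l, _ in logs) / len(logs)
    my = sum(f for _, f in logs) / len(logs)
    return sum((l - mx) * (f - my) for l, f in logs) \
        / sum((l - mx) ** 2 for l, _ in logs)

# Uncorrelated noise gives alpha near 0.5; long-range correlated signals
# (such as healthy RR intervals) give larger exponents.
random.seed(7)
noise = [random.gauss(0.0, 1.0) for _ in range(4096)]
alpha = dfa_alpha(noise, scales=[8, 16, 32, 64, 128])
```

Restricting the fit to scales of 15-30 beats versus >40 beats yields the paper's αS and αL; the RMS metrics read F(s) directly at the stated scale bands instead of fitting a slope.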
A long time span relativistic precession model of the Earth
NASA Astrophysics Data System (ADS)
Tang, Kai; Soffel, Michael H.; Tao, Jin-He; Han, Wen-Biao; Tang, Zheng-Hong
2015-04-01
A numerical solution to the Earth's precession in a relativistic framework for a long time span is presented here. We obtain the motion of the solar system in the Barycentric Celestial Reference System by numerical integration with a symplectic integrator. Special Newtonian corrections accounting for tidal dissipation are included in the force model. The part representing Earth's rotation is calculated in the Geocentric Celestial Reference System by integrating the post-Newtonian equations of motion published by Klioner et al. All the main relativistic effects are included following Klioner et al. In particular, we consider several relativistic reference systems with corresponding time scales, scaled constants and parameters. Approximate expressions for Earth's precession in the interval ±1 Myr around J2000.0 are provided. In the interval ±2000 years around J2000.0, the difference compared to the P03 precession theory is only several arcseconds and the results are consistent with other long-term precession theories. Supported by the National Natural Science Foundation of China.
Two Billion Years of Magmatism in One Place on Mars
NASA Astrophysics Data System (ADS)
Taylor, G. J.
2017-05-01
Thomas Lapen and Minako Righter (University of Houston), and colleagues at Aarhus University (Denmark), the Universities of Washington (Seattle), Wisconsin (Madison), California (Berkeley), and Arizona (Tucson), and Purdue University (Indiana) show that a geochemically-related group of Martian meteorites formed over a much longer time span than thought previously. So-called depleted shergottites formed during the time interval 325 to 600 million years ago, but now age dating on a recently discovered Martian meteorite, Northwest Africa (NWA) 7635, extends that interval by 1800 million years to 2400 million years. NWA 7635 and almost all other depleted shergottites were ejected from Mars in the same impact event, as defined by their same cosmic-ray exposure age of 1 million years, so all resided in one small area on Mars. This long time span of volcanic activity in the same place on the planet indicates that magma production was continuous, consistent with geophysical calculations of magma generation in plumes of hot mantle rising from the core-mantle boundary deep inside Mars.
Trumm, C G; Glaser, C; Paasche, V; Crispin, A; Popp, P; Küttner, B; Francke, M; Nissen-Meyer, S; Reiser, M
2006-04-01
Quantification of the impact of a PACS/RIS-integrated speech recognition system (SRS) on the time expenditure for radiology reporting and on hospital-wide report availability (RA) in a university institution. In a prospective pilot study, the following parameters were assessed for 669 radiographic examinations (CR): 1. the time requirement per report dictation (TED: dictation time (s)/number of images [examination] × number of words [report]) with either a combination of PACS/tape-based dictation (TD: analog dictation device/mini-cassette/transcription) or PACS/RIS/speech recognition (RR: remote recognition/transcription and OR: online recognition/self-correction by radiologist), and 2. the Report Turnaround Time (RTT) as the time interval from the entry of the first image into the PACS to the available RIS/HIS report. Two equal time periods were chosen retrospectively from the RIS database: 11/2002 - 2/2003 (TD only) and 11/2003 - 2/2004 (RR or OR with the SRS only). The mid-term (≥ 24 h, 24-h intervals) and short-term (< 24 h, 1-h intervals) RA after examination completion were calculated for all modalities together and for CR, CT, MR and XA/DS separately. The relative increase in the mid-term RA (RIMRA: related to the total number of examinations in each time period) and the increase in the short-term RA (ISRA: ratio of available reports during the 1st to 24th hour) were calculated. Prospectively, there was a significant difference between TD/RR/OR (n = 151/257/261) in mean TED (0.44/0.54/0.62 s [per word and image]) and mean RTT (10.47/6.65/1.27 h), respectively. Retrospectively, 37,898/39,680 reports were retrieved from the RIS database for the periods 11/2002 - 2/2003 and 11/2003 - 2/2004.
For CR/CT there was a shift of the short-term RA to the first 6 hours after examination completion (mean cumulative RA 20 % higher) with a more than three-fold increase in the total number of available reports within 24 hours (all modalities). The RIMRA for CR/CT/MR was 3.1/5.8/4.0 in the first 24 hours, and 2.0 for XA/DS in the second 24-hour interval. In comparison to tape-based dictation, an SRS results in a significantly higher primary time expenditure and a modified report dictation workflow. In a university institution, a PACS/RIS-integrated SRS achieves a marked improvement in both short- and mid-term RA which eventually results in an improvement in patient care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vujovic, Olga, E-mail: olga.vujovic@lhsc.on.ca; Yu, Edward; Cherian, Anil
Purpose: A retrospective chart review was conducted to determine whether the time interval from breast-conserving surgery to breast irradiation (surgery-radiation therapy interval) in early stage node-negative breast cancer had any detrimental effect on recurrence rates. Methods and Materials: There were 566 patients with T1 to T3, N0 breast cancer treated with breast-conserving surgery and breast irradiation, without adjuvant systemic treatment, between 1985 and 1992. The surgery-to-radiation therapy intervals used for analysis were 0 to 8 weeks (201 patients), >8 to 12 weeks (233 patients), >12 to 16 weeks (91 patients), and >16 weeks (41 patients). Kaplan-Meier estimates of time to local recurrence, disease-free survival, distant disease-free survival, cause-specific survival, and overall survival rates were calculated. Results: Median follow-up was 17.4 years. Patients in all 4 time intervals were similar in terms of characteristics and pathologic features. There were no statistically significant differences among the 4 time groups in local recurrence (P=.67) or disease-free survival (P=.82). The local recurrence rates at 5, 10, and 15 years were 4.9%, 11.5%, and 15.0%, respectively. The distant disease relapse rates at 5, 10, and 15 years were 10.6%, 15.4%, and 18.5%, respectively. The disease-free failure rates at 5, 10, and 15 years were 20%, 32.3%, and 39.8%, respectively. Cause-specific survival rates at 5, 10, and 15 years were 92%, 84.6%, and 79.8%, respectively. The overall survival rates at 5, 10, and 15 years were 89.3%, 79.2%, and 66.9%, respectively. Conclusions: Surgery-radiation therapy intervals of up to 16 weeks from breast-conserving surgery are not associated with any increased risk of recurrence in early stage node-negative breast cancer. There is a steady local recurrence rate of 1% per year with adjuvant radiation alone.
Keogh, Pauraic; Ray, Noel J; Lynch, Christopher D; Burke, Francis M; Hannigan, Ailish
2004-12-01
This investigation determined the minimum exposure times consistent with optimised surface microhardness parameters for a commercial resin composite cured using a "first-generation" light-emitting diode activation lamp. Disk specimens were exposed, and surface microhardness numbers were measured at the top and bottom surfaces after elapsed times of 1 hour and 24 hours. Bottom/top microhardness ratios were also calculated. Most microhardness data increased significantly over the elapsed time interval, but the microhardness ratios (bottom/top) depended on exposure time only. A minimum exposure of 40 seconds is appropriate to optimise the microhardness parameters for the combination of resin composite and lamp investigated.
Variance Analysis of Unevenly Spaced Time Series Data
NASA Technical Reports Server (NTRS)
Hackman, Christine; Parker, Thomas E.
1996-01-01
We have investigated the effect of uneven data spacing on the computation of σ_x(τ). Evenly spaced simulated data sets were generated for noise processes ranging from white phase modulation (PM) to random walk frequency modulation (FM). σ_x(τ) was then calculated for each noise type. Data were subsequently removed from each simulated data set using typical two-way satellite time and frequency transfer (TWSTFT) data patterns to create two unevenly spaced sets with average intervals of 2.8 and 3.6 days. σ_x(τ) was then calculated for each sparse data set using two different approaches. First, the missing data points were replaced by linear interpolation and σ_x(τ) was calculated from this now-full data set. The second approach ignored the fact that the data were unevenly spaced and calculated σ_x(τ) as if the data were equally spaced with an average spacing of 2.8 or 3.6 days. Both approaches have advantages and disadvantages, and techniques are presented for correcting errors caused by uneven data spacing in typical TWSTFT data sets.
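The first recovery approach described above (fill the gaps by linear interpolation, then treat the data as evenly spaced) can be sketched in Python. This is an illustrative reconstruction, not the authors' code: the statistic shown is a generic second-difference variance of the kind underlying time-deviation analysis, and the function names are hypothetical.

```python
import numpy as np

def interpolate_to_uniform(t, x, tau):
    """Resample unevenly spaced phase data x(t) onto a uniform grid of
    spacing tau by linear interpolation (the first approach in the text)."""
    t_uniform = np.arange(t[0], t[-1] + 0.5 * tau, tau)
    return t_uniform, np.interp(t_uniform, t, x)

def second_difference_variance(x):
    """Mean-square second difference of equally spaced phase data, the
    core quantity behind time-deviation-type stability statistics."""
    d2 = x[2:] - 2.0 * x[1:-1] + x[:-2]
    return np.mean(d2 ** 2)
```

Interpolation suppresses short-term noise in the filled gaps, which is one source of the bias the abstract's correction techniques address.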
Alternative Treatment for Bleeding Peristomal Varices: Percutaneous Parastomal Embolization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pabon-Ramos, Waleska M., E-mail: waly.pr@duke.edu; Niemeyer, Matthew M.; Dasika, Narasimham L., E-mail: narasimh@med.umich.edu
2013-10-15
Purpose: To describe how peristomal varices can be successfully embolized via a percutaneous parastomal approach. Methods: The medical records of patients who underwent this procedure between December 1, 2000, and May 31, 2008, were retrospectively reviewed. Procedural details were recorded. Median fluoroscopy time and bleeding-free interval were calculated. Results: Seven patients underwent eight parastomal embolizations. The technical success rate was 88% (one failure). All embolizations were performed with coils combined with a sclerosant, another embolizing agent, or both. Of the seven successful parastomal embolizations, there were three cases of recurrent bleeding; the median time to rebleeding was 45 days (range 26-313 days). The remaining four patients did not develop recurrent bleeding during the follow-up period; their median bleeding-free interval was 131 days (range 40-659 days). Conclusion: This case review demonstrated that percutaneous parastomal embolization is a feasible technique to treat bleeding peristomal varices.
Guimaraes, Carolina V; Grzeszczuk, Robert; Bisset, George S; Donnelly, Lane F
2018-03-01
When implementing or monitoring department-sanctioned standardized radiology reports, feedback about individual faculty performance has been shown to be a useful driver of faculty compliance. Most commonly, these data are derived from manual audit, which can be both time-consuming and subject to sampling error. The purpose of this study was to evaluate whether a software program using natural language processing and machine learning could audit radiologist compliance with the use of standardized reports as accurately as manual audits. Radiology reports from a 1-month period were loaded into such a software program, and faculty compliance with use of standardized reports was calculated. For that same period, manual audits were performed (25 reports audited for each of 42 faculty members). The mean compliance rate calculated by automated auditing was then compared with the confidence interval of the mean rate by manual audit. The mean compliance rate for use of standardized reports as determined by manual audit was 91.2%, with a confidence interval between 89.3% and 92.8%. The mean compliance rate calculated by automated auditing was 92.0%, within that confidence interval. This study shows that, by use of natural language processing and machine learning algorithms, an automated analysis can define whether reports are compliant with standardized report templates and language as accurately as manual audits. This may avoid significant labor costs related to the manual auditing process. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Carlsen, Katrine; Houen, Gunnar; Jakobsen, Christian; Kallemose, Thomas; Paerregaard, Anders; Riis, Lene B; Munkholm, Pia; Wewer, Vibeke
2017-09-01
To individualize timing of infliximab (IFX) treatment in children and adolescents with inflammatory bowel disease (IBD) using a patient-managed eHealth program. Patients with IBD, 10 to 17 years old, treated with IFX were prospectively included. Starting 4 weeks after their last infusion, patients reported a weekly symptom score and provided a stool sample for fecal calprotectin analysis. Based on symptom scores and fecal calprotectin results, the eHealth program calculated a total inflammation burden score that determined the timing of the next IFX infusion (4-12 wk after the previous infusion). Quality of Life was scored by IMPACT III. A control group was included to compare trough levels of IFX antibodies and concentrations and treatment intervals. Patients and their parents evaluated the eHealth program. There were 29 patients with IBD in the eHealth group and 21 patients with IBD in the control group. During the control period, 94 infusions were provided in the eHealth group (mean interval 9.5 wk; SD 2.3) versus 105 infusions in the control group (mean interval 6.9 wk; SD 1.4). Treatment intervals were longer in the eHealth group (P < 0.001). Quality of Life did not change during the study. Appearance of IFX antibodies did not differ between the 2 groups. Eighty percent of patients reported increased disease control and 63% (86% of parents) reported an improved knowledge of the disease. Self-managed, eHealth-individualized timing of IFX treatments, with treatment intervals of 4 to 12 weeks, was accompanied by no significant development of IFX antibodies. Patients reported better control and improved knowledge of their IBD.
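The interval-setting logic can be illustrated with a toy Python function. The weighting of the symptom score and fecal calprotectin, and every threshold below, are invented for illustration; the study's actual total-inflammation-burden algorithm is not specified in the abstract.

```python
def next_infusion_interval_weeks(symptom_score, fecal_calprotectin):
    """Hypothetical total-inflammation-burden logic: combine a weekly
    symptom score with fecal calprotectin (ug/g) and map the resulting
    burden to an infusion interval clamped to the study's 4-12 week range.
    Weights and thresholds here are illustrative only."""
    burden = symptom_score + fecal_calprotectin / 100.0  # illustrative weighting
    # Higher burden -> shorter interval; low burden stretches toward 12 weeks.
    interval = 12.0 - burden
    return max(4.0, min(12.0, interval))
```

The clamp reflects the protocol's fixed bounds: no patient is dosed earlier than 4 weeks or later than 12 weeks after the previous infusion, regardless of the computed burden.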
Non-homogeneous Behaviour of the Spatial Distribution of Macrospicules
NASA Astrophysics Data System (ADS)
Gyenge, N.; Bennett, S.; Erdélyi, R.
2015-03-01
In this paper, the longitudinal and latitudinal spatial distributions of macrospicules are examined. We found a statistical relationship between the active longitude (determined by sunspot groups) and the longitudinal distribution of macrospicules. This distribution of macrospicules shows inhomogeneous and non-axisymmetric behaviour in the time interval between June 2010 and December 2012, covered by observations of the Solar Dynamics Observatory (SDO) satellite. The positions of enhanced activity and their time variation have been calculated. The migration of the longitudinal distribution of macrospicules shows a behaviour similar to that of the sunspot groups.
Anderson, D L
1975-03-21
The concept of a stressed elastic lithospheric plate riding on a viscous asthenosphere is used to calculate the recurrence interval of great earthquakes at convergent plate boundaries, the separation of decoupling and lithospheric earthquakes, and the migration pattern of large earthquakes along an arc. It is proposed that plate motions accelerate after great decoupling earthquakes and that most of the observed plate motions occur during short periods of time, separated by periods of relative quiescence.
IN VITRO MEASUREMENT OF TOTAL ANTIOXIDANT CAPACITY OF CRATAEGUS MACRACANTHA LODD LEAVES.
Miftode, Alina Monica; Stefanache, Alina; Spac, A F; Miftode, R F; Miron, Anca; Dorneanu, V
2016-01-01
Crataegus macracantha Lodd, family Rosaceae, is a very rare species in Europe and, unlike Crataegus monogyna, is less investigated for pharmacologic activity. The aim was to analyze the ability of the lyophilisate of the extract obtained from leaves of Crataegus macracantha Lodd (a single plant at the Iaşi Botanical Garden) to capture free radicals in vitro. The lyophilisate was obtained in the Department of Pharmacognosy, Faculty of Pharmacy, "Grigore T. Popa" University of Medicine and Pharmacy Iaşi. The decrease in absorbance of the chromophore chlorpromazine radical cation in the presence of the lyophilisate solutions was studied spectrophotometrically. The indicator radical cation, obtained by oxidation of chlorpromazine with potassium persulfate, has its absorbance maximum at 525 nm. Ascorbic acid was used as the standard antioxidant. The absorbance of the radical solution was determined after the addition of a given amount of lyophilisate at different time intervals. The antioxidant activity was calculated using the calibration curve obtained by plotting the variation in radical solution absorbance against ascorbic acid concentration. For each ascorbic acid concentration, the area under the curve was calculated from the percentage inhibition of the absorbance at two pre-established time intervals. The results confirm the antioxidant activity of the leaves of Crataegus macracantha Lodd; by optimizing the proposed analytical methods, the antiradical activity can be evaluated quickly with minimal reagent consumption.
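The arithmetic behind this assay (percentage inhibition of the 525-nm absorbance, trapezoidal area under the inhibition-time curve, then conversion via a linear calibration curve) can be sketched as follows. Function names and the linear calibration form are illustrative assumptions, not the authors' code.

```python
def percent_inhibition(a_blank, a_sample):
    """Percentage decrease of the radical-cation absorbance at 525 nm
    relative to the blank (no-antioxidant) reading."""
    return 100.0 * (a_blank - a_sample) / a_blank

def auc_trapezoid(times, values):
    """Area under the inhibition-vs-time curve between the
    pre-established time points (trapezoidal rule)."""
    return sum((t2 - t1) * (v1 + v2) / 2.0
               for t1, t2, v1, v2 in zip(times, times[1:], values, values[1:]))

def ascorbic_equivalent(auc, slope, intercept):
    """Convert an AUC into an ascorbic acid equivalent, assuming a linear
    calibration curve auc = slope * concentration + intercept."""
    return (auc - intercept) / slope
```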
Balquet, L; Noury-Desvaux, B; Jaquinandi, V; Mahé, G
2015-02-01
Diet is a modifiable risk factor of atherosclerosis. A 14-item food frequency questionnaire (FFQ) has been developed. The reproducibility of this FFQ is unknown in a student population, although its use there could be of interest. This FFQ allows different scores to be calculated for the food groups involved in cardiovascular disease, including the vascular dietary score (VDS). The VDS ranges from -17 to +19; the higher the VDS, the better the diet. Reproducibility was assessed in sports faculty students using tests comparing the means of measurements 1 and 2 (minimum time interval ≥ 7 days) and intra-class correlation (ICC) tests. Thirty students (50% men) were included in a French Sports Faculty. Time between the two FFQ assessments was 19 ± 9 days. Mean VDS was 0.50 ± 3.70 for the first assessment and 0.30 ± 3.14 for the second (difference not significant). No score for any food group differed significantly between the first and second measurements. The ICC of the VDS was 0.68 [95% confidence interval: 0.43-0.83]. This FFQ, which assesses a vascular-risk diet, has good reproducibility. This tool could be useful for large studies involving students. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
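A test-retest ICC of the kind reported above can be computed from an ANOVA decomposition. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement, one common choice for test-retest reproducibility; the abstract does not state which ICC form was used):

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1) from an (n_subjects, k_measurements) table of scores."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-occasion means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-occasions MS
    sse = (np.sum((scores - grand) ** 2)
           - k * np.sum((row_means - grand) ** 2)
           - n * np.sum((col_means - grand) ** 2))
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```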
PyGlobal: A toolkit for automated compilation of DFT-based descriptors.
Nath, Shilpa R; Kurup, Sudheer S; Joshi, Kaustubh A
2016-06-15
Density Functional Theory (DFT)-based global reactivity descriptor calculations have emerged as powerful tools for studying the reactivity, selectivity, and stability of chemical and biological systems. A Python-based module, PyGlobal, has been developed for systematically parsing a typical Gaussian output file and extracting the relevant energies of the HOMO and LUMO. The corresponding global reactivity descriptors are then calculated and the data are saved into a spreadsheet compatible with applications like Microsoft Excel and LibreOffice. The efficiency of the module was assessed by measuring the processing time for randomly selected Gaussian output files for 1000 molecules. © 2016 Wiley Periodicals, Inc.
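The descriptors PyGlobal tabulates follow from the HOMO/LUMO energies via the standard Koopmans-theorem formulas. A minimal sketch (not PyGlobal's actual code; the softness convention 1/(2η) is one of several in the literature):

```python
def global_descriptors(e_homo, e_lumo):
    """Koopmans-theorem global reactivity descriptors from frontier
    orbital energies (both in the same unit, e.g. eV)."""
    ionization = -e_homo                      # I, Koopmans approximation
    affinity = -e_lumo                        # A, Koopmans approximation
    chi = (ionization + affinity) / 2.0       # electronegativity
    mu = -chi                                 # chemical potential
    eta = (ionization - affinity) / 2.0       # chemical hardness
    softness = 1.0 / (2.0 * eta)              # one common convention
    omega = mu ** 2 / (2.0 * eta)             # electrophilicity index
    return {"I": ionization, "A": affinity, "chi": chi,
            "mu": mu, "eta": eta, "S": softness, "omega": omega}
```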
[Waiting time for the first colposcopic examination in women with abnormal Papanicolaou test].
Nascimento, Maria Isabel do; Rabelo, Irene Machado Moraes Alvarenga; Cardoso, Fabrício Seabra Polidoro; Musse, Ricardo Neif Vieira
2015-08-01
To evaluate the waiting times before the first colposcopic examination for women with abnormal Papanicolaou smears. Retrospective cohort study conducted on patients who required a colposcopic examination to clarify an abnormal Pap test, between January 2002 and August 2008, in a metropolitan region of Brazil. The waiting times were defined as: Total Waiting Time (interval between the date of the Pap test result and the date of the first colposcopic examination); Partial A Waiting Time (interval between the date of the Pap test result and the date of referral); Partial B Waiting Time (interval between the date of referral and the date of the first colposcopic examination). Means, medians, and relative and absolute frequencies were calculated. The Kruskal-Wallis test and Pearson's chi-square test were used to determine statistical significance. A total of 1,544 women with a mean age of 34 years (SD=12.6 years) were analyzed. Most of them had access to the colposcopic examination within 30 days (65.8%) or 60 days (92.8%) from referral. Mean Total Waiting Time, Partial A Waiting Time, and Partial B Waiting Time were 94.5 days (SD=96.8 days), 67.8 days (SD=95.3 days), and 29.2 days (SD=35.1 days), respectively. A large part of the women studied had access to the colposcopic examination within 60 days after referral, but the Total Waiting Time was long. Measures to reduce the waiting time for the first colposcopic examination can help to improve the quality of care in the context of cervical cancer control in the region and should focus on the phase between the date of the Pap test result and the date of referral to the teaching hospital.
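The three interval definitions above are simple date arithmetic; a sketch (hypothetical function name) makes the decomposition Total = Partial A + Partial B explicit:

```python
from datetime import date

def waiting_times(pap_result, referral, colposcopy):
    """Waiting-time intervals (in days) as defined in the abstract:
    Total = Pap result -> colposcopy, Partial A = Pap result -> referral,
    Partial B = referral -> colposcopy."""
    total = (colposcopy - pap_result).days
    partial_a = (referral - pap_result).days
    partial_b = (colposcopy - referral).days
    return total, partial_a, partial_b
```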
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Hyun-Ju; Chung, Chin-Wook, E-mail: joykang@hanyang.ac.kr; Choi, Hyeok
A modified central difference method (MCDM) is proposed to obtain the electron energy distribution functions (EEDFs) from single Langmuir probes. Numerical calculation of the EEDF with MCDM is simple and less noisy. This method provides the second derivative at a given point as the weighted average of second-order central difference derivatives calculated at different voltage intervals, weighting each by the square of the interval. In this paper, the EEDFs obtained from MCDM are compared to those calculated via the averaged central difference method. It is found that MCDM effectively suppresses the noise in the EEDF, while the same number of points is used to calculate the second derivative.
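The weighting scheme described above can be written down directly: at each point, take central second differences at intervals h, 2h, ..., and average them with weights h², (2h)², .... This is an illustrative sketch of that averaging (the function name and the choice of three intervals are assumptions, not the authors' code), omitting the Druyvesteyn conversion from d²I/dV² to the EEDF:

```python
def mcdm_second_derivative(V, I, i, max_m=3):
    """Second derivative of I(V) at index i on a uniform voltage grid,
    as the weighted average of central differences taken at intervals
    h, 2h, ..., max_m*h, each weighted by the square of its interval."""
    h = V[1] - V[0]
    num, den = 0.0, 0.0
    for m in range(1, max_m + 1):
        if i - m < 0 or i + m >= len(V):
            break  # stay inside the sampled voltage range
        hm = m * h
        d2 = (I[i + m] - 2.0 * I[i] + I[i - m]) / hm ** 2
        num += hm ** 2 * d2
        den += hm ** 2
    return num / den
```

Weighting by hm² favors the wider intervals, which is what damps point-to-point noise relative to the plain central difference.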
Wiese, Steffen; Teutenberg, Thorsten; Schmidt, Torsten C
2012-01-27
In the present work it is shown that the linear elution strength (LES) model, adapted from temperature-programmed gas chromatography (GC), can also be employed for systematic method development in high-temperature liquid chromatography (HT-HPLC). The ability to predict isothermal retention times based on temperature-gradient as well as isothermal input data was investigated. For a small temperature interval of ΔT=40°C, both approaches give very similar predictions. Average relative errors of predicted retention times of 2.7% and 1.9% were observed for simulations based on isothermal and temperature-gradient measurements, respectively. It was also investigated whether the accuracy of retention time predictions for segmented temperature gradients can be further improved by temperature-dependent calculation of the parameter S(T) of the LES relationship. It was found that the accuracy of retention time predictions for multi-step temperature gradients can be improved to around 1.5% if S(T) is calculated temperature-dependently. The adjusted experimental design, making use of four temperature-gradient measurements, was applied to systematic method development for selected food additives by high-temperature liquid chromatography. Method development was performed within a temperature interval from 40°C to 180°C using water as the mobile phase. Two separation methods were established in which the selected food additives were baseline separated. In addition, good agreement between simulation and experiment was observed, with an average relative error of predicted retention times for complex segmented temperature gradients of less than 5%. Finally, a set of recommendations to assist the practitioner during systematic method development in high-temperature liquid chromatography was established. Copyright © 2011 Elsevier B.V. All rights reserved.
A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies
2014-01-01
Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity has a comparable coverage probability as the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686
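The pocket-calculator simplicity claimed above is easy to see in code. The sketch below is the textbook Wald interval for a proportion with an optional 1/(2n) continuity correction, applied to an AUC estimate; the paper's specific modification may differ in detail, and the endpoint clamping to [0, 1] is an added guard.

```python
import math

def wald_interval(p_hat, n, z=1.96, continuity=False):
    """Wald interval for a proportion, here an AUC estimate treated as a
    proportion following the abstract's idea. `continuity=True` adds the
    usual 1/(2n) continuity correction recommended for small samples."""
    half = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    if continuity:
        half += 1.0 / (2.0 * n)
    return max(0.0, p_hat - half), min(1.0, p_hat + half)
```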
Kottas, Martina; Kuss, Oliver; Zapf, Antonia
2014-02-19
Gruner, S. V.; Slone, D.H.; Capinera, J.L.; Turco, M. P.
2017-01-01
Chrysomya megacephala (Fabricius) is a forensically important fly that is found throughout the tropics and subtropics. We calculated the accumulated development time and transition points for each life stage from eclosion to adult emergence at five constant temperatures: 15, 20, 25, 30, and 35 °C. For each transition, the 10th, 50th, and 90th percentiles were calculated with a logistic linear model. The mean transition times and % survivorship were determined directly from the raw laboratory data. Development times of C. megacephala were compared with that of two other closely related species, Chrysomya rufifacies (Macquart) and Phormia regina (Meigen). Ambient and larval mass temperatures were collected from field studies conducted from 2001–2004. Field study data indicated that adult fly activity was reduced at lower ambient temperatures, but once a larval mass was established, heat generation occurred. These development times and durations can be used for estimation of a postmortem interval (PMI).
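Development data like these feed postmortem interval estimates through thermal accumulation bookkeeping. A sketch of the standard accumulated-degree-hours quantity (the base temperature below is illustrative, and this is the generic ADH approach rather than the authors' logistic transition models):

```python
def accumulated_degree_hours(hourly_temps_c, base_temp=10.0):
    """Accumulated degree hours: sum of hourly temperatures above a
    developmental base threshold. Hours at or below the base contribute
    nothing. The base temperature here is illustrative only."""
    return sum(max(0.0, t - base_temp) for t in hourly_temps_c)
```

Comparing the ADH accumulated at the scene against the laboratory requirement for the observed life stage bounds the minimum PMI; the abstract's note on larval-mass heat generation is one reason scene temperature records alone can underestimate the thermal input.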
NASA Technical Reports Server (NTRS)
El-Alaoui, M.; Ashour-Abdalla, M.; Raeder, J.; Frank, L. A.; Paterson, W. R.
1998-01-01
In this study we investigate the transport of H+ ions that made up the complex ion distribution function observed by the Geotail spacecraft at 0740 UT on November 24, 1996. This ion distribution function, observed by Geotail at approximately 20 R(sub E) downtail, was used to initialize a time-dependent large-scale kinetic (LSK) calculation of the trajectories of 75,000 ions forward in time. Time-dependent magnetic and electric fields were obtained from a global magnetohydrodynamic (MHD) simulation of the magnetosphere and its interaction with the solar wind and the interplanetary magnetic field (IMF) as observed during the interval of the observation of the distribution function. Our calculations indicate that the particles observed by Geotail were scattered across the equatorial plane by the multiple interactions with the current sheet and then convected sunward. They were energized by the dawn-dusk electric field during their transport from Geotail location and ultimately were lost at the ionospheric boundary or into the magnetopause.
When Will It Be ...?: U.S. Naval Observatory Sidereal Time and Julian Date Calculators
NASA Astrophysics Data System (ADS)
Chizek Frouard, Malynda R.; Lesniak, Michael V.; Bartlett, Jennifer L.
2017-01-01
Sidereal time and Julian date are two values often used in observational astronomy that can be tedious to calculate. Fortunately, the U.S. Naval Observatory (USNO) has redesigned its on-line Sidereal Time and Julian Date (JD) calculators to provide data through an Application Programming Interface (API). This flexible interface returns dates and times in JavaScript Object Notation (JSON) that can be incorporated into third-party websites or applications. Via the API, sidereal time can be obtained for any location on Earth for any date occurring in the current, previous, or subsequent year. Up to 9999 iterations of sidereal time data with intervals from 1 second to 1095 days can be generated, as long as the request does not extend past the date limits. The API provides the Gregorian calendar date and time (in UT1), Greenwich Mean Sidereal Time, Greenwich Apparent Sidereal Time, Local Mean Sidereal Time, Local Apparent Sidereal Time, and the Equation of the Equinoxes. Julian Date can be converted to calendar date, either Julian or Gregorian as appropriate, for any date between JD 0 (January 1, 4713 BCE proleptic Julian) and JD 5373484 (December 31, 9999 CE Gregorian); the reverse calendar date to Julian Date conversion is also available. The calendar date and Julian Date are returned for all API requests; the day of the week is also returned for Julian Date to calendar date conversions. On-line documentation for using all USNO API-enabled calculators, including sample calls, is available (http://aa.usno.navy.mil/data/docs/api.php). For those who prefer traditional data input forms, Sidereal Time can still be accessed at http://aa.usno.navy.mil/data/docs/siderealtime.php, and the Julian Date Converter at http://aa.usno.navy.mil/data/docs/JulianDate.php.
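When the API is unreachable, both conversions can be done locally. The sketch below uses the well-known Fliegel-Van Flandern integer algorithm for Julian Day Number to Gregorian date (valid for the Gregorian branch only, not the proleptic Julian dates the USNO converter also handles) and a standard approximate formula for Greenwich Mean Sidereal Time; neither is the USNO's own implementation.

```python
def jd_to_gregorian(jdn):
    """Fliegel-Van Flandern conversion of an integral (noon-based)
    Julian Day Number to a Gregorian (year, month, day)."""
    l = jdn + 68569
    n = 4 * l // 146097
    l = l - (146097 * n + 3) // 4
    i = 4000 * (l + 1) // 1461001
    l = l - 1461 * i // 4 + 31
    j = 80 * l // 2447
    day = l - 2447 * j // 80
    l = j // 11
    month = j + 2 - 12 * l
    year = 100 * (n - 49) + i + l
    return year, month, day

def gmst_hours(jd_ut1):
    """Approximate Greenwich Mean Sidereal Time in hours for a UT1
    Julian date, from the linear formula relative to J2000.0."""
    return (18.697374558 + 24.06570982441908 * (jd_ut1 - 2451545.0)) % 24.0
```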
Identifying causes of laboratory turnaround time delay in the emergency department.
Jalili, Mohammad; Shalileh, Keivan; Mojtahed, Ali; Mojtahed, Mohammad; Moradi-Lakeh, Maziar
2012-12-01
Laboratory turnaround time (TAT) is an important determinant of patient stay and quality of care. Our objective is to evaluate laboratory TAT in our emergency department (ED) and to generate a simple model for identifying the primary causes for delay. We measured TATs of hemoglobin, potassium, and prothrombin time tests requested in the ED of a tertiary-care, metropolitan hospital during a consecutive one-week period. The time of different steps (physician order, nurse registration, blood-draw, specimen dispatch from the ED, specimen arrival at the laboratory, and result availability) in the test turnaround process were recorded and the intervals between these steps (order processing, specimen collection, ED waiting, transit, and within-laboratory time) and total TAT were calculated. Median TATs for hemoglobin and potassium were compared with those of the 1990 Q-Probes Study (25 min for hemoglobin and 36 min for potassium) and its recommended goals (45 min for 90% of tests). Intervals were compared according to the proportion of TAT they comprised. Median TATs (170 min for 132 hemoglobin tests, 225 min for 172 potassium tests, and 195.5 min for 128 prothrombin tests) were drastically longer than Q-Probes reported and recommended TATs. The longest intervals were ED waiting time and order processing. Laboratory TAT varies among institutions, and data are sparse in developing countries. In our ED, actions to reduce ED waiting time and order processing are top priorities. We recommend utilization of this model by other institutions in settings with limited resources to identify their own priorities for reducing laboratory TAT.
Love, Jeffrey J.
2012-01-01
Statistical analysis is made of rare, extreme geophysical events recorded in historical data -- counting the number of events $k$ with sizes that exceed chosen thresholds during specific durations of time $\tau$. Under transformations that stabilize data and model-parameter variances, the most likely Poisson-event occurrence rate, $k/\tau$, applies for frequentist inference and, also, for Bayesian inference with a Jeffreys prior that ensures posterior invariance under changes of variables. Frequentist confidence intervals and Bayesian (Jeffreys) credibility intervals are approximately the same and easy to calculate: $(1/\tau)[(\sqrt{k} - z/2)^{2},(\sqrt{k} + z/2)^{2}]$, where $z$ is a parameter that specifies the width, $z=1$ ($z=2$) corresponding to $1\sigma$, $68.3\%$ ($2\sigma$, $95.4\%$). If only a few events have been observed, as is usually the case for extreme events, then these "error-bar" intervals might be considered to be relatively wide. From historical records, we estimate most likely long-term occurrence rates, 10-yr occurrence probabilities, and intervals of frequentist confidence and Bayesian credibility for large earthquakes, explosive volcanic eruptions, and magnetic storms.
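The quoted interval is a one-liner to implement. In the sketch below, the clamp of the lower endpoint at zero when sqrt(k) < z/2 is an added guard, not part of the quoted formula:

```python
def poisson_rate_interval(k, tau, z=1.0):
    """Interval for a Poisson occurrence rate from k events observed in
    time tau, using the variance-stabilized form from the abstract:
    (1/tau) * [(sqrt(k) - z/2)^2, (sqrt(k) + z/2)^2].
    z=1 gives roughly 68.3% (1 sigma); z=2 gives roughly 95.4% (2 sigma)."""
    root_k = k ** 0.5
    lo = max(0.0, root_k - z / 2.0) ** 2 / tau  # clamped at zero (added guard)
    hi = (root_k + z / 2.0) ** 2 / tau
    return lo, hi
```

For example, four large storms in a 10-year record with z=2 give a rate interval of 0.1 to 0.9 events per year, wide in just the way the abstract warns about for small k.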
NASA Technical Reports Server (NTRS)
Susko, M.; Kaufman, J. W.
1973-01-01
Percentage levels of wind speed differences, computed from sequential FPS-16 radar/Jimsphere wind profiles, are presented. The results are based on monthly profiles obtained from December 1964 to July 1970 at Cape Kennedy, Florida. The profile sequences contain a series of three to ten Jimspheres released at approximately 1.5-hour intervals. The results given are a persistence analysis of wind speed differences at 1.5-hour intervals up to a maximum time interval of 12 hours. The monthly and annual percentages of wind speed differences are tabulated. The percentage levels are based on scalar wind speed changes calculated over an altitude interval of approximately 50 meters and printed out every 25 meters as a function of initial wind speed within each five-kilometer layer from near sea level to 20 km. In addition, analyses were made of the wind speed difference for the 0.2 to 1 km layer as an aid for studies associated with take-off and landing of the space shuttle.
Photoinduced diffusion molecular transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozenbaum, Viktor M., E-mail: vik-roz@mail.ru, E-mail: litrakh@gmail.com; Dekhtyar, Marina L.; Lin, Sheng Hsien
2016-08-14
We consider a Brownian photomotor, namely, the directed motion of a nanoparticle in an asymmetric periodic potential under the action of periodic rectangular resonant laser pulses which cause charge redistribution in the particle. Based on the kinetics for the photoinduced electron redistribution between two or three energy levels of the particle, the time dependence of its potential energy is derived and the average directed velocity is calculated in the high-temperature approximation (when the spatial amplitude of potential energy fluctuations is small relative to the thermal energy). The theory of photoinduced molecular transport thus developed appears applicable not only to conventional dichotomous Brownian motors (with only two possible potential profiles) but also to a much wider variety of molecular nanomachines. The distinction between the realistic time dependence of the potential energy and that for a dichotomous process (a step function) is represented in terms of relaxation times (they can differ on the time intervals of the dichotomous process). As shown, a Brownian photomotor has the maximum average directed velocity at (i) large laser pulse intensities (resulting in short relaxation times on laser-on intervals) and (ii) excited state lifetimes long enough to permit efficient photoexcitation but still much shorter than laser-off intervals. A Brownian photomotor with optimized parameters is exemplified by a cylindrically shaped semiconductor nanocluster which moves directly along a polar substrate due to a periodically photoinduced dipole moment (caused by the repetitive excited electron transitions to a non-resonant level of the nanocylinder surface impurity).
Zhang, Gao-Ming; Guo, Xu-Xiao; Ma, Xiao-Bo; Zhang, Guo-Ming
2016-12-12
BACKGROUND The aim of this study was to calculate 95% reference intervals and double-sided limits of serum alpha-fetoprotein (AFP) and carcinoembryonic antigen (CEA) according to the CLSI EP28-A3 guideline. MATERIAL AND METHODS Serum AFP and CEA values were measured in samples from 26 000 healthy subjects in the Shuyang area receiving general health checkups. The 95% reference intervals and upper limits were calculated using MedCalc. RESULTS We provide continuous reference intervals from 20 to 90 years of age for AFP and CEA. The reference intervals were: AFP, 1.31-7.89 ng/ml (males) and 1.01-7.10 ng/ml (females); CEA, 0.51-4.86 ng/ml (males) and 0.35-3.45 ng/ml (females). AFP and CEA were significantly positively correlated with age in both males (r=0.196 and r=0.198) and females (r=0.121 and r=0.197). CONCLUSIONS Different races or populations and different detection systems may result in different reference intervals for AFP and CEA. Continuous, age-dependent reference intervals are more accurate than intervals based on age groups.
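The central computation behind such a 95% reference interval can be sketched nonparametrically (a simple illustration only; the study used MedCalc and the CLSI EP28-A3 procedures, which also cover parametric and robust estimators, partitioning, and outlier handling):

```python
import statistics

def reference_interval_95(values):
    """Nonparametric 95% reference interval: the 2.5th and 97.5th
    percentiles of a healthy-population sample."""
    # n=40 yields cut points at 2.5%, 5%, ..., 97.5%; the first and last
    # are the interval limits.
    qs = statistics.quantiles(values, n=40, method="inclusive")
    return qs[0], qs[-1]
```

CLSI guidance recommends at least 120 reference subjects per partition for the nonparametric approach; the 26 000-subject sample here comfortably exceeds that.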
Restricted Collision List method for faster Direct Simulation Monte-Carlo (DSMC) collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macrossan, Michael N., E-mail: m.macrossan@uq.edu.au
The ‘Restricted Collision List’ (RCL) method for speeding up the calculation of DSMC Variable Soft Sphere collisions, with Borgnakke–Larsen (BL) energy exchange, is presented. The method cuts down considerably on the number of random collision parameters which must be calculated (deflection and azimuthal angles, and the BL energy exchange factors). A relatively short list of these parameters is generated and the parameters required in any cell are selected from this list. The list is regenerated at intervals approximately equal to the smallest mean collision time in the flow, and the chance of any particle re-using the same collision parameters in two successive collisions is negligible. The results using this method are indistinguishable from those obtained with standard DSMC. The CPU time saving depends on how much of a DSMC calculation is devoted to collisions and how much is devoted to other tasks, such as moving particles and calculating particle interactions with flow boundaries. For 1-dimensional calculations of flow in a tube, the new method saves 20% of the CPU time per collision for VSS scattering with no energy exchange. With RCL applied to rotational energy exchange, the CPU saving can be greater; for small values of the rotational collision number, for which most collisions involve some rotational energy exchange, the CPU may be reduced by 50% or more.
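The bookkeeping at the heart of the RCL idea — build a short list of random collision parameters once, then have each collision reuse an entry — can be sketched as follows (the parameter distributions are simplified placeholders, not the paper's exact VSS/Borgnakke–Larsen forms, and the periodic list regeneration is omitted):

```python
import math
import random

def make_collision_list(n, rng):
    """Pre-generate a restricted list of collision-parameter tuples:
    (cosine of deflection angle, azimuthal angle, BL energy-exchange factor).
    Distributions here are illustrative placeholders."""
    return [(rng.uniform(-1.0, 1.0),
             rng.uniform(0.0, 2.0 * math.pi),
             rng.random())
            for _ in range(n)]

def draw_parameters(collision_list, rng):
    """Each collision picks a random entry from the short pre-built list
    instead of generating fresh parameters, saving per-collision cost.
    In the full method the list is regenerated roughly once per smallest
    mean collision time; that step is omitted in this sketch."""
    return collision_list[rng.randrange(len(collision_list))]
```

The saving comes from replacing several transcendental-function evaluations per collision with a single list lookup, at the cost of a statistically negligible chance of parameter reuse.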
Dolman, Bronwyn; Verrall, Geoffrey; Reid, Iain
2014-01-01
Of the hamstring muscle group, the biceps femoris muscle is the most commonly injured muscle in sports requiring interval sprinting. The reason for this observation is unknown. The objective of this study was to calculate the forces of all three hamstring muscles, relative to each other, during a lengthening contraction, to assess for any differences that may help explain the biceps femoris predilection for injury during interval sprinting. To calculate the displacement of each individual hamstring muscle, previously performed studies on cadaveric anatomical data and hamstring kinematics during sprinting were used. From these displacement calculations, physical principles were then used to deduce the proportion of force exerted by each individual hamstring muscle during a lengthening muscle contraction. These deductions demonstrate that the biceps femoris muscle is required to exert proportionally more force in a lengthening muscle contraction, relative to the semimembranosus and semitendinosus muscles, primarily as a consequence of having to lengthen over a greater distance within the same time frame. It is hypothesized that this property may be a factor in the known observation of the increased susceptibility of the biceps femoris muscle to injury during repeated sprints, where recurrent higher force is required. PMID:25506583
Confidence limit calculation for antidotal potency ratio derived from lethal dose 50
Manage, Ananda; Petrikovics, Ilona
2013-01-01
AIM: To describe confidence interval calculation for antidotal potency ratios using the bootstrap method. METHODS: The nonparametric bootstrap method, invented by Efron, can easily be adapted to construct confidence intervals in situations like this. The bootstrap method is a resampling method in which bootstrap samples are obtained by resampling from the original sample. RESULTS: The described confidence interval calculation using the bootstrap method does not require the sampling distribution of the antidotal potency ratio. This can be a substantial help for toxicologists, who are directed to employ the Dixon up-and-down method, with its lower number of animals, to determine lethal dose 50 values for characterizing the investigated toxic molecules and, eventually, the antidotal protection provided by the test antidotal systems. CONCLUSION: The described method can serve as a useful tool in various other applications. The simplicity of the method makes it easy to do the calculation in most programming software packages. PMID:25237618
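The resampling loop itself is short. A sketch using the ratio of two sample means as a stand-in statistic (the LD50-based potency ratio of the paper is assumed, not reproduced):

```python
import random
import statistics

def bootstrap_ratio_ci(sample_a, sample_b, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for mean(a)/mean(b).
    Each group is resampled with replacement, per Efron's method; no
    assumption about the sampling distribution of the ratio is needed."""
    rng = random.Random(seed)
    ratios = sorted(
        statistics.fmean(rng.choices(sample_a, k=len(sample_a))) /
        statistics.fmean(rng.choices(sample_b, k=len(sample_b)))
        for _ in range(n_boot)
    )
    return (ratios[int(n_boot * alpha / 2)],
            ratios[int(n_boot * (1 - alpha / 2)) - 1])
```

The percentile method shown here is the simplest bootstrap interval; bias-corrected (BCa) variants refine it but follow the same resampling pattern.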
An institutional study of time delays for symptomatic carotid endarterectomy.
Charbonneau, Philippe; Bonaventure, Paule Lessard; Drudi, Laura M; Beaudoin, Nathalie; Blair, Jean-François; Elkouri, Stéphane
2016-12-01
The aim of this study was to assess time delays between first cerebrovascular symptoms and carotid endarterectomy (CEA) at a single center and to systematically evaluate causes of these delays. Consecutive adult patients who underwent CEA between January 2010 and September 2011 at a single university-affiliated center (Centre Hospitalier de l'Université Montréal-Hôtel-Dieu Hospital, Montreal) were identified from a clinical database and operative records. Covariates of interest were extracted from electronic medical records. Timing and nature of the first cerebrovascular symptoms were also documented. The first medical contact and pathway of referral were also assessed. When possible, the ABCD2 score (age, blood pressure, clinical features, duration of symptoms, and diabetes) was calculated to estimate the subsequent risk of stroke. The nonparametric Wilcoxon test was used to assess differences in time intervals between two variables. The Kruskal-Wallis test was used to assess differences in time intervals when comparing more than two variables. A multivariate linear regression analysis was performed using covariates that were determined to be statistically significant in our sensitivity analyses. The cohort consisted of 111 patients with documented symptomatic carotid stenosis undergoing surgical intervention. Thirty-nine percent of all patients were operated on within 2 weeks from the first cerebrovascular symptoms. The median time between the occurrence of the first neurologic symptom and the CEA procedure was 25 (interquartile range [IQR], 11-85) days. The patient-dependent delay, defined as the median delay between the first neurologic symptom and the first medical contact, was 1 (IQR, 0-14) day. The medical-dependent delay was defined as the time interval between the first medical contact and CEA. This included the delay between the first medical contact and the request for surgery consultation (median, 3 [IQR, 1-10] days).
The multivariate regression model demonstrated that the emergency physician as referral source (P = .0002) was significantly associated with reduced CEA delay. Outpatient investigation (P = .02), first medical contact with a general practitioner (P = .0002), and hospital center I as referral center (P = .045) were significantly associated with extended CEA delay when the model was adjusted for all covariates. In this center, there was no correlation between ABCD2 risk score and waiting time for surgery. The majority of our cohort fell short of the recommended 2-week interval for performing CEA. Factors contributing to reduced CEA delay were presentation to an emergency department, inpatient investigation, and a stroke center where a vascular surgeon is available. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
The efficacy of a novel mobile phone application for Goldmann ptosis visual field interpretation.
Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P
2014-01-01
To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent errors of the mobile phone application and of the oculoplastic surgeons' estimates were calculated relative to computer software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 repeated times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts. There was high interobserver variance among oculoplastic surgeons. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process 1 chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect using Goldmann charts. Oculoplastic surgeon visual interpretations were highly inaccurate, highly variable, and usually underestimated the field vision loss.
Advanced analysis of finger-tapping performance: a preliminary study.
Barut, Cağatay; Kızıltan, Erhan; Gelir, Ethem; Köktürk, Fürüzan
2013-06-01
The finger-tapping test is a commonly employed quantitative assessment tool used to measure motor performance in the upper extremities. The task is a complex motion affected by external stimuli, mood, and health status. Its complexity is difficult to capture with a single average intertap-interval value (the time difference between successive taps), which provides only general information and neglects the temporal effects of the aforementioned factors. This study evaluated the time course of average intertap-interval values and the patterns of variation in both the right and left hands of right-handed subjects using a computer-based finger-tapping system. Cross-sectional study. Thirty-eight male individuals aged between 20 and 28 years (mean±SD = 22.24±1.65) participated in the study. Participants were asked to perform a single-finger-tapping test over a 10-second test period. Only the results of the 35 right-handed (RH) participants were considered in this study. The test records the time of each tap and saves the data as the time differences between successive taps for further analysis. The average number of taps and the temporal fluctuation patterns of the intertap intervals were calculated and compared. The variations in the intertap interval were evaluated with the best-curve-fit method. An average tapping speed or tapping rate can reliably be defined for a single-finger-tapping test by analysing the graphically presented data on the number of taps within the test period. However, a different presentation of the same data, namely the intertap-interval values, shows temporal variation as the number of taps increases. Curve-fitting applications indicate that the variation has a biphasic nature. The measures obtained in this study reflect the complex nature of the finger-tapping task and are suggested to provide reliable information regarding hand performance.
Moreover, the equation reflects both the variations in and the general patterns associated with the task.
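The basic quantities involved — intertap intervals from tap timestamps, and an average tapping rate over the test period — can be sketched as follows (a minimal illustration, not the authors' acquisition software):

```python
def intertap_intervals(tap_times):
    """Time differences between successive taps (seconds)."""
    return [b - a for a, b in zip(tap_times, tap_times[1:])]

def mean_tapping_rate(tap_times):
    """Average tapping rate (taps per second) over the recorded span."""
    return (len(tap_times) - 1) / (tap_times[-1] - tap_times[0])

# Example: four taps recorded over 0.7 s
taps = [0.00, 0.20, 0.40, 0.70]
ivals = intertap_intervals(taps)   # three intervals, ~0.2-0.3 s each
rate = mean_tapping_rate(taps)     # ~4.3 taps/s
```

Plotting `ivals` against tap index is exactly the "temporal fluctuation pattern" the study fits curves to; the single number `rate` is the summary the authors argue is insufficient on its own.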
Nonlinear ARMA models for the Dst index and their physical interpretation
NASA Technical Reports Server (NTRS)
Vassiliadis, D.; Klimas, A. J.; Baker, D. N.
1996-01-01
Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model to the nonlinear damped oscillator physical model. The oscillator parameters, the growth and decay, the oscillation frequencies and the coupling strength to the input are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.
Cardioventilatory coupling in preterm and term infants: effect of position and sleep state.
Elder, Dawn E; Larsen, Peter D; Galletly, Duncan C; Campbell, Angela J
2010-11-30
This study documented the effect of position on cardioventilatory coupling (CVC), the triggering of inspiratory onset by a preceding heartbeat, in infants. Cardiorespiratory signals and corresponding oxygen saturation (SpO2) were downloaded during Quiet Sleep (QS) and Active Sleep (AS), in prone and supine positions, from preterm (PT) and term (T) infants. Inspiratory onsets (I) and the timing of the corresponding ECG R wave were determined, and R-R and R-I intervals calculated. The dispersion of the RI(-1) interval (the time between inspiration and the preceding R wave) was measured using the proportional Shannon entropy of the RI(-1) interval (SHα), to provide a quantitative measure of CVC. CVC was more frequently seen in QS in PT (p=0.002) and T (p=0.02) infants but was not influenced by position (p=0.71, p=0.46). CVC correlated with SpO2 in PT (r=-0.230, p=0.03) but not T infants (r=0.085, p=0.34). These data imply an augmentation of cardiac influence on ventilatory rhythm in infants in QS. In preterm infants, CVC may have a role in supporting oxygenation. Copyright © 2010 Elsevier B.V. All rights reserved.
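A coupling measure of this kind can be sketched as the normalized Shannon entropy of the binned RI(-1)-interval distribution (the bin count and binning scheme below are illustrative assumptions, not the study's exact settings; low entropy indicates inspiratory onsets clustering at a fixed latency after a heartbeat, i.e. coupling):

```python
import math
from collections import Counter

def proportional_shannon_entropy(intervals, n_bins=20):
    """Shannon entropy of the binned interval distribution, normalized
    by its maximum log(n_bins); returns a value in [0, 1]."""
    lo, hi = min(intervals), max(intervals)
    width = (hi - lo) / n_bins or 1.0  # degenerate case: all values equal
    counts = Counter(min(int((v - lo) / width), n_bins - 1)
                     for v in intervals)
    n = len(intervals)
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(n_bins)
```

A value near 0 means the RI(-1) intervals pile into few bins (strong coupling); a value near 1 means they spread uniformly (no coupling).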
Migration of Trans-Neptunian Objects to a Near-Earth Space
NASA Technical Reports Server (NTRS)
Ipatov, S. I.; Mather, J. C.; Oegerle, William (Technical Monitor)
2002-01-01
Our estimates of the migration of trans-Neptunian objects (TNOs) to near-Earth space are based on the results of investigations of the orbital evolution of TNOs and Jupiter-crossing objects (JCOs). The orbital evolution of TNOs was considered in many papers. Recently we investigated the evolution, for intervals of at least 5-10 Myr, of 2500 JCOs under the gravitational influence of all planets except Mercury and Pluto (without dissipative factors). In the first series we considered N = 2000 orbits near the orbits of 30 real Jupiter-family comets with period P < 10 yr, and in the second series we took N = 500 orbits close to the orbit of Comet 10P Tempel 2 (a = 3.1 AU, e = 0.53, i = 12 deg). We calculated the probabilities of collisions of objects with the terrestrial planets, using orbital elements obtained with a step equal to 500 yr, and then summarized the results over all time intervals and all bodies, obtaining the total probability PΣ of collisions with a planet and the total time interval TΣ during which the perihelion distance q of bodies was less than the semimajor axis of the planet.
Periodicity in extinction and the problem of catastrophism in the history of life
NASA Technical Reports Server (NTRS)
Sepkoski, J. J., Jr. (Principal Investigator)
1989-01-01
The hypothesis that extinction events have recurred periodically over the last quarter billion years is greatly strengthened by new data on the stratigraphic ranges of marine animal genera. In the interval from the Permian to the Recent, these data encompass some 13,000 generic extinctions, providing a more sensitive indicator of species-level extinctions than the previously used familial data. Extinction time series computed from the generic data display nine strong peaks that are nearly uniformly spaced at 26 Ma intervals over the last 270 Ma. Most of these peaks correspond to extinction events recognized in more detailed, if limited, biostratigraphic studies. These new data weaken or negate most arguments against periodicity, which have involved criticisms of the taxonomic data base, sampling intervals, chronometric time scales, and statistical methods used in previous analyses. The criticisms are reviewed in some detail and various new calculations and simulations, including one assessing the effects of paraphyletic taxa, are presented. Although the new data strengthen the case for periodicity, they offer little new insight into the driving mechanism behind the pattern. However, they do suggest that many of the periodic events may not have been catastrophic, occurring instead over several stratigraphic stages or substages.
Kuiper, Gerhardus J A J M; Houben, Rik; Wetzels, Rick J H; Verhezen, Paul W M; Oerle, Rene van; Ten Cate, Hugo; Henskens, Yvonne M C; Lancé, Marcus D
2017-11-01
Low platelet counts and hematocrit levels hinder whole blood point-of-care testing of platelet function. Thus far, no reference ranges for MEA (multiple electrode aggregometry) and PFA-100 (platelet function analyzer 100) devices exist for low ranges. Through dilution methods of volunteer whole blood, platelet function at low ranges of platelet count and hematocrit levels was assessed on MEA for four agonists and for PFA-100 in two cartridges. Using (multiple) regression analysis, 95% reference intervals were computed for these low ranges. Low platelet counts affected MEA in a positive correlation (all agonists showed r² ≥ 0.75) and PFA-100 in an inverse correlation (closure times were prolonged with lower platelet counts). Lowered hematocrit did not affect MEA testing, except for arachidonic acid activation (ASPI), which showed a weak positive correlation (r² = 0.14). Closure time on PFA-100 testing was inversely correlated with hematocrit for both cartridges. Regression analysis revealed different 95% reference intervals in comparison with the originally established intervals for both MEA and PFA-100 in low platelet or hematocrit conditions. Multiple regression analysis of ASPI and both tests on the PFA-100 for combined low platelet and hematocrit conditions revealed that only PFA-100 testing should be adjusted for both thrombocytopenia and anemia. 95% reference intervals were calculated using multiple regression analysis. However, the coefficients of determination for PFA-100 were poor, and some variance remained unexplained. Thus, in this pilot study using (multiple) regression analysis, we could establish reference intervals of platelet function in anemia and thrombocytopenia conditions on PFA-100 and in thrombocytopenia conditions on MEA.
Hepatitis A outbreak on a floating restaurant in Florida, 1986.
Lowry, P W; Levine, R; Stroup, D F; Gunn, R A; Wilder, M H; Konigsberg, C
1989-01-01
In April and May 1986, the largest reported foodborne outbreak of hepatitis A in Florida state history occurred among patrons and employees of a floating restaurant. A total of 103 cases (97 patrons and six employees) were identified. The exposure period lasted 31 days (March 20-April 19), making this the most prolonged hepatitis A outbreak to occur in a restaurant that to date has been reported to the Centers for Disease Control. The exposure period was divided into time intervals (peak, early, late, and total) for calculation of food-specific attack rates. The authors showed that green salad was an important vehicle of transmission for each phase of the exposure period, with the highest adjusted odds ratio for the three-day peak exposure interval (March 28-30), 6.8 (p = 0.001). Non-salad pantry items and mixed bar drinks were also identified as vehicles of transmission; both were more important during the early interval of the exposure period than during the late interval. Two of six infected employees worked in the pantry and may have sequentially infected patrons. Though rare, this outbreak suggests that hepatitis A infection among employees may allow for transmission to patrons for prolonged periods of time. Prevention of such outbreaks requires prompt reporting of ill patrons with rapid identification of infected employees and correction of food handling practices.
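The food-specific measures used in such outbreak investigations are simple 2×2-table arithmetic. A sketch with hypothetical counts (not the outbreak's actual data):

```python
def attack_rate(ill, total):
    """Proportion of persons in an exposure group who became ill."""
    return ill / total

def odds_ratio(ill_exposed, well_exposed, ill_unexposed, well_unexposed):
    """Cross-product odds ratio from a 2x2 exposure-illness table."""
    return (ill_exposed * well_unexposed) / (well_exposed * ill_unexposed)

# Hypothetical counts: of 80 ill patrons, 60 ate green salad;
# of 120 well patrons, 30 did.
or_salad = odds_ratio(60, 30, 20, 90)   # (60*90)/(30*20) = 9.0
salad_attack = attack_rate(60, 90)      # 60 ill among 90 salad eaters
```

Computing these rates separately for the peak, early, and late exposure intervals, as the authors did, is what reveals how a vehicle's importance shifts over a prolonged exposure period.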
Sample size calculation for studies with grouped survival data.
Li, Zhiguo; Wang, Xiaofei; Wu, Yuan; Owzar, Kouros
2018-06-10
Grouped survival data arise often in studies where the disease status is assessed at regular visits to clinic. The time to the event of interest can only be determined to be between two adjacent visits or is right censored at one visit. In data analysis, replacing the survival time with the endpoint or midpoint of the grouping interval leads to biased estimators of the effect size in group comparisons. Prentice and Gloeckler developed a maximum likelihood estimator for the proportional hazards model with grouped survival data and the method has been widely applied. Previous work on sample size calculation for designing studies with grouped data is based on either the exponential distribution assumption or the approximation of variance under the alternative with variance under the null. Motivated by studies in HIV trials, cancer trials and in vitro experiments to study drug toxicity, we develop a sample size formula for studies with grouped survival endpoints that use the method of Prentice and Gloeckler for comparing two arms under the proportional hazards assumption. We do not impose any distributional assumptions, nor do we use any approximation of variance of the test statistic. The sample size formula only requires estimates of the hazard ratio and survival probabilities of the event time of interest and the censoring time at the endpoints of the grouping intervals for one of the two arms. The formula is shown to perform well in a simulation study and its application is illustrated in the three motivating examples. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Shefer, V. A.
2015-12-01
We examine the intermediate perturbed orbit proposed previously by the author, which is determined from three position vectors of a small celestial body. It is shown theoretically that, for a small reference time interval covering the body positions, the accuracy with which this orbit approximates the real motion corresponds approximately to the fourth order of tangency; the smaller the reference time interval, the better this correspondence. Laws governing the variation of the methodical errors in constructing the intermediate orbit, as a function of the length of the reference time interval, are deduced. According to these laws, the convergence rate of the method to the exact solution (upon reducing the reference time interval) is in the general case higher by three orders of magnitude than that of conventional methods using an unperturbed Keplerian orbit. The considered orbit is among the most accurate in the set of orbits of its class, as determined by the order of tangency. The theoretical results are validated by numerical examples. The work was supported by the Ministry of Education and Science of the Russian Federation, project no. 2014/223(1567).
NASA Astrophysics Data System (ADS)
Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert
2017-03-01
The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Because the data are sparse, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this aim, we use both the original catalog, to which no declustering method has been applied, and a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required. In this case, no information is gained from the data. Therefore, we elaborate for which settings finite confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although the confidence-interval calculations in the Central Iran and Zagros seismotectonic zones are relatively acceptable for meaningful levels of confidence, the results in Kopet Dagh, Alborz, Azerbaijan, and Makran are less promising. The results indicate that estimating m_max from an earthquake catalog alone, for reasonable levels of confidence, is almost impossible.
Impacts and the origin of life
NASA Technical Reports Server (NTRS)
Oberbeck, Verne R.; Fogleman, Guy
1989-01-01
Consideration is given to the estimate of Maher and Stevenson (1988) of the time at which life could have developed on Earth through chemical evolution within a time interval between impact events, assuming chemical or prebiotic evolution times of 100,000 to 10,000,000 yr. An error is noted in the equations they used to determine the time periods between impact events when estimating this time. A revised equation is presented and used to calculate the point in time at which impact events became infrequent enough for life to form. Using this equation, the finding of Maher and Stevenson that life could first have originated between 4,100 and 4,300 million years ago is revised to 3,700 to 4,000 million years ago.
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, because of three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, and there must either be a transformation procedure to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers and two-stage transformations (modulus-exponential-normal) in order to render Gaussian distributions. Fractional polynomials are employed to model functions for mean and standard deviations dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
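The outlier-elimination step can be illustrated with a standard Tukey fence (the fence constant k = 1.5 is the conventional default; the paper's exact settings, two-stage transformation, and covariate-dependent reiteration are not reproduced here):

```python
import statistics

def tukey_fence_filter(values, k=1.5):
    """Keep only values inside Tukey's fence [Q1 - k*IQR, Q3 + k*IQR];
    values outside are treated as outliers and removed before fitting."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in values if lo <= x <= hi]
```

In the covariate-dependent version described in the text, the fence is applied to residuals about the fitted age-dependent centiles and the fit is then repeated, so the outlier definition itself adapts to age.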
Cardiovascular response to acute stress in freely moving rats: time-frequency analysis.
Loncar-Turukalo, Tatjana; Bajic, Dragana; Japundzic-Zigon, Nina
2008-01-01
Spectral analysis of cardiovascular series is an important tool for assessing the features of the autonomic control of the cardiovascular system. In this experiment, Wistar rats equipped with an intra-arterial catheter for blood pressure (BP) recording were exposed to stress induced by blowing air. The problem of nonstationary data was overcome by applying the Smoothed Pseudo Wigner-Ville (SPWV) time-frequency distribution. Spectral analysis was done before stress, during stress, immediately after stress, and later in recovery. The spectral indices were calculated for both systolic blood pressure (SBP) and pulse interval (PI) series. The time evolution of the spectral indices showed a perturbed sympathovagal balance.
[Embryo volume estimated by three-dimensional ultrasonography at seven to ten weeks of pregnancy].
Filho, João Bortoletti; Nardozza, Luciano Marcondes Machado; Araujo Júnior, Edward; Rôlo, Líliam Cristine; Nowak, Paulo Martin; Moron, Antonio Fernandes
2008-10-01
To evaluate the embryo volume (EV) between the seventh and the tenth gestational week through three-dimensional ultrasonography. A cross-sectional study of 63 normal pregnant women between the seventh and the tenth gestational week. The ultrasonographic exams were performed with a volumetric abdominal transducer. Virtual Organ Computer-aided Analysis (VOCAL) was used to calculate EV, with a rotation angle of 12° and delimitation of 15 sequential slices. The mean, median, standard deviation, and maximum and minimum values were calculated for the EV at all gestational ages. A scatter plot was drawn to assess the correlation between EV and the craniogluteal length (CGL), the fit being assessed by the coefficient of determination (R²). To determine EV reference intervals as a function of CGL, the following formula was used: percentile = EV + K × SD, with K = 1.96. CGL varied from 9.0 to 39.7 mm, with an average of 23.9 mm (±7.9 mm), while EV varied from 0.1 to 7.6 cm³, with an average of 2.7 cm³ (±3.2 cm³). EV was highly correlated with CGL, the best fit obtained with quadratic regression (EV = 0.2 - 0.055 × CGL + 0.005 × CGL²; R² = 0.8). The average EV varied from 0.1 cm³ (-0.3 to 0.5 cm³) to 6.7 cm³ (3.8 to 9.7 cm³) within the interval of 9 to 40 mm of CGL. EV increased 67-fold over this interval, while CGL increased only 4.4-fold. EV is a more sensitive parameter than CGL for evaluating embryo growth between the seventh and the tenth week of gestation.
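The reported regression and percentile formula can be written directly in code. The coefficients below are the ones quoted in the abstract; the example SD value is an assumption for illustration.

```python
def fitted_ev(cgl_mm):
    """Fitted embryo volume (cm^3) from the abstract's quadratic regression:
    EV = 0.2 - 0.055 * CGL + 0.005 * CGL^2 (R^2 = 0.8), CGL in mm."""
    return 0.2 - 0.055 * cgl_mm + 0.005 * cgl_mm ** 2

def reference_bounds(ev, sd, k=1.96):
    """Percentile bounds from the abstract's formula: percentile = EV +/- K * SD."""
    return ev - k * sd, ev + k * sd

# At CGL = 9 mm the fit gives ~0.11 cm^3, close to the reported mean of 0.1 cm^3
ev_small = fitted_ev(9)
low, high = reference_bounds(ev_small, 0.2)  # 0.2 cm^3 is an assumed SD
```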
Detection of density dependence requires density manipulations and calculation of lambda.
Fowler, N L; Overath, R Deborah; Pease, Craig M
2006-03-01
To investigate density-dependent population regulation in the perennial bunchgrass Bouteloua rigidiseta, we experimentally manipulated density by removing adults or adding seeds to replicate quadrats in a natural population for three annual intervals. We monitored the adjacent control quadrats for 14 annual intervals. We constructed a population projection matrix for each quadrat in each interval, calculated lambda, and did a life table response experiment (LTRE) analysis. We tested the effects of density upon lambda by comparing experimental and control quadrats, and by an analysis of the 15-year observational data set. As measured by effects on lambda and on N(t+1)/N(t) in the experimental treatments, negative density dependence was strong: the population was being effectively regulated. The relative contributions of different matrix elements to treatment effect on lambda differed among years and treatments; overall the pattern was one of small contributions by many different life cycle stages. In contrast, density dependence could not be detected using only the observational (control quadrats) data, even though this data set covered a much longer time span. Nor did experimental effects on separate matrix elements reach statistical significance. These results suggest that ecologists may fail to detect density dependence when it is present if they have only descriptive, not experimental, data, do not have data for the entire life cycle, or analyze life cycle components separately.
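Lambda, the population growth rate, is the dominant eigenvalue of a population projection matrix. A minimal power-iteration sketch follows; the 2x2 matrix is hypothetical, not the study's data.

```python
def dominant_eigenvalue(matrix, iters=200):
    """Power iteration: repeatedly apply the matrix and renormalize;
    the normalization factor converges to the dominant eigenvalue (lambda)."""
    n = len(matrix)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Hypothetical 2-stage (seed/adult) matrix, not the study's data:
# fecundity 2.0, seed survival-to-adult 0.3, adult survival 0.8
A = [[0.0, 2.0],
     [0.3, 0.8]]
lam = dominant_eigenvalue(A)  # lambda > 1 indicates growth, < 1 decline
```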
Oikonomidis, Ioannis L; Kiosis, Evangelos A; Brozos, Christos N; Kritsepi-Konstantinou, Maria G
2017-03-01
OBJECTIVE To establish reference intervals for serum reactive oxygen metabolites (ROMs), biological antioxidant potential (BAP), and oxidative stress index (OSi) in adult rams by use of controlled preanalytic and analytic procedures. ANIMALS 123 healthy 1- to 4-year-old rams of 2 Greek breeds (Chios [n = 62] and Florina [61]). PROCEDURES 4 hours after rams were fed, a blood sample was obtained from each ram, and serum was harvested. Concentrations of ROMs and BAP were measured colorimetrically on a spectrophotometric analyzer. The OSi was calculated as ROMs concentration divided by BAP concentration. Combined and breed-specific reference intervals were calculated by use of nonparametric and robust methods, respectively. Reference intervals were defined as the 2.5th to 97.5th percentiles. RESULTS Reference intervals for ROMs, BAP, and OSi for all rams combined were 65 to 109 Carratelli units, 2,364 to 4,491 μmol/L, and 18.2 to 43.0 Carratelli units/(mmol/L), respectively. Reference intervals of Chios rams for ROMs, BAP, and OSi were 56 to 113 Carratelli units, 2,234 to 4,290 μmol/L, and 12.9 to 38.4 Carratelli units/(mmol/L), respectively. Reference intervals of Florina rams for ROMs, BAP, and OSi were 68 to 111 Carratelli units, 2,337 to 4,363 μmol/L, and 14.1 to 38.1 Carratelli units/(mmol/L), respectively. CONCLUSIONS AND CLINICAL RELEVANCE Reference intervals calculated in this study can be used as a guide for the interpretation of ROMs, BAP, and OSi results in rams and, under appropriate conditions, can be adopted for use by veterinary laboratories.
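The two calculations named above, OSi as a ratio and nonparametric percentile reference intervals, can be sketched as follows; the sample values are illustrative, not the study's measurements.

```python
def osi(roms_cu, bap_umol_l):
    """Oxidative stress index: ROMs (Carratelli units) divided by BAP
    converted from umol/L to mmol/L, giving CarrU/(mmol/L)."""
    return roms_cu / (bap_umol_l / 1000.0)

def nonparametric_ri(values):
    """Nonparametric 95% reference interval: 2.5th to 97.5th percentiles
    by linear interpolation."""
    s = sorted(values)
    n = len(s)
    def pct(p):
        idx = p * (n - 1)
        lo = int(idx)
        frac = idx - lo
        return s[lo] if lo + 1 >= n else s[lo] + frac * (s[lo + 1] - s[lo])
    return pct(0.025), pct(0.975)

value = osi(87, 3000)  # falls inside the reported combined interval 18.2-43.0
```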
Khan, Muhammad; Lin, Jie; Liao, Guixiang; Li, Rong; Wang, Baiyao; Xie, Guozhu; Zheng, Jieling; Yuan, Yawei
2017-07-01
Whole brain radiotherapy has been a standard treatment of brain metastases. Stereotactic radiosurgery provides more focal and aggressive radiation and normal tissue sparing but worse local and distant control. This meta-analysis was performed to assess and compare the effectiveness of whole brain radiotherapy alone, stereotactic radiosurgery alone, and their combination in the treatment of brain metastases based on randomized controlled trial studies. Electronic databases (PubMed, MEDLINE, Embase, and Cochrane Library) were searched to identify randomized controlled trial studies that compared treatment outcome of whole brain radiotherapy and stereotactic radiosurgery. This meta-analysis was performed using the Review Manager (RevMan) software (version 5.2) that is provided by the Cochrane Collaboration. The data used were hazard ratios with 95% confidence intervals calculated for time-to-event data extracted from survival curves and local tumor control rate curves. Odds ratio with 95% confidence intervals were calculated for dichotomous data, while mean differences with 95% confidence intervals were calculated for continuous data. Fixed-effects or random-effects models were adopted according to heterogeneity. Five studies (n = 763) were included in this meta-analysis meeting the inclusion criteria. All the included studies were randomized controlled trials. The sample size ranged from 27 to 331. In total 202 (26%) patients with whole brain radiotherapy alone, 196 (26%) patients receiving stereotactic radiosurgery alone, and 365 (48%) patients were in whole brain radiotherapy plus stereotactic radiosurgery group. 
No significant survival benefit was observed for any treatment approach; the hazard ratio was 1.19 (95% confidence interval: 0.96-1.43, p = 0.12) based on three randomized controlled trials for whole brain radiotherapy alone compared to whole brain radiotherapy plus stereotactic radiosurgery, and 1.03 (95% confidence interval: 0.82-1.29, p = 0.81) for stereotactic radiosurgery alone compared to the combined approach. Local control was best achieved when whole brain radiotherapy was combined with stereotactic radiosurgery. Hazard ratios of 2.05 (95% confidence interval: 1.36-3.09, p = 0.0006) and 1.84 (95% confidence interval: 1.26-2.70, p = 0.002) were obtained from comparing whole brain radiotherapy alone and stereotactic radiosurgery alone to whole brain radiotherapy plus stereotactic radiosurgery, respectively. No difference in adverse events was observed between treatments; odds ratios were 1.16 (95% confidence interval: 0.77-1.76, p = 0.48) and 0.92 (95% confidence interval: 0.59-1.42, p = 0.71) for whole brain radiotherapy plus stereotactic radiosurgery versus whole brain radiotherapy alone and versus stereotactic radiosurgery alone, respectively. Adding stereotactic radiosurgery to whole brain radiotherapy provides better local control compared to whole brain radiotherapy alone and stereotactic radiosurgery alone, with no difference in radiation-related toxicities.
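A sketch of the inverse-variance fixed-effect pooling that underlies such hazard-ratio summaries. This is the generic textbook method, not necessarily RevMan's exact implementation, and the single-study example is illustrative.

```python
import math

def pooled_hazard_ratio(hrs, cis, z=1.96):
    """Inverse-variance fixed-effect pooling of hazard ratios on the log
    scale; SE(log HR) is recovered from the 95% CI width."""
    num = den = 0.0
    for hr, (lo, hi) in zip(hrs, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * z)
        w = 1.0 / se ** 2
        num += w * math.log(hr)
        den += w
    log_hr = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(log_hr),
            (math.exp(log_hr - z * se_pooled), math.exp(log_hr + z * se_pooled)))

# With a single study, the pooled estimate reproduces that study's HR and CI
hr, ci = pooled_hazard_ratio([2.0], [(1.0, 4.0)])
```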
Fajt, Virginia R; Apley, Michael D; Brogden, Kim A; Skogerboe, Terry L; Shostrom, Valerie K; Chin, Ya-Lin
2004-05-01
To examine effects of danofloxacin and tilmicosin on continuously recorded body temperature in beef calves with pneumonia experimentally induced by inoculation of Mannheimia haemolytica. 41 Angus-cross heifers (body weight, 160 to 220 kg) without a recent history of respiratory tract disease or antimicrobial treatment, all from a single ranch. Radiotransmitters were implanted intravaginally in each calf. Pneumonia was induced intrabronchially by use of logarithmic-phase cultures of M. haemolytica. At 21 hours after inoculation, calves were treated with saline (0.9% NaCl) solution, danofloxacin, or tilmicosin. Body temperature was monitored from 66 hours before inoculation until 72 hours after treatment. Area under the curve (AUC) of the temperature-time plot and mean temperature were calculated for 3-hour intervals and compared among treatment groups. The AUCs for 3-hour intervals did not differ significantly among treatment groups for any of the time periods. Analysis of the mean temperature for 3-hour intervals revealed significantly higher temperatures at most time periods for saline-treated calves, compared with temperatures for antimicrobial-treated calves; however, we did not detect significant differences between the danofloxacin- and tilmicosin-treated calves. The circadian rhythm of temperatures before exposure was detected again approximately 48 hours after bacterial inoculation. Danofloxacin and tilmicosin did not differ in their effect on mean body temperature for 3-hour intervals but significantly decreased body temperature, compared with body temperature in saline-treated calves. Normal daily variation in body temperature must be considered in the face of respiratory tract disease during clinical evaluation of feedlot cattle.
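The per-interval AUC of a temperature-time plot is typically a trapezoidal integral; a sketch with hypothetical readings over one 3-hour window (not the study's data):

```python
def auc_trapezoid(times_h, temps_c):
    """Trapezoidal area under the temperature-time curve (degC * h)."""
    return sum((times_h[i + 1] - times_h[i]) * (temps_c[i + 1] + temps_c[i]) / 2
               for i in range(len(times_h) - 1))

# Hypothetical hourly readings over one 3-hour interval
t = [0.0, 1.0, 2.0, 3.0]
temp = [39.0, 39.4, 39.2, 39.1]
auc = auc_trapezoid(t, temp)
mean_temp = auc / (t[-1] - t[0])  # mean temperature over the interval
```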
NASA Astrophysics Data System (ADS)
Guo, Kun; Mungall, James E.; Smith, Richard S.
2013-09-01
A discrete conductive sphere model in which current paths are constrained to a single planar orientation (the `dipping sphere') is used to calculate the secondary response from Geotech Ltd's VTEM airborne time domain electromagnetic (EM) system. In addition to calculating the time constants of the B-field and dB/dt responses, we focus on the time-constant ratio at a late time interval and compare numerical results with several field examples. For very strong conductors with conductivity above a critical value, both the B-field and dB/dt responses show decreasing values as the conductivity increases. Therefore response does not uniquely define conductivity. However, calculation of time constants for the decay removes the ambiguity and allows discrimination of high and low conductivity targets. A further benefit is gained by comparing the time constants of the B-field and dB/dt decays, which co-vary systematically over a wide range of target conductance. An advantage of calculating time constant ratios is that the ratios are insensitive to the dip and the depth of the targets and are stable across the conductor. Therefore we propose to use their ratio rτ=τB/τdB/dt as a tool to estimate the size and conductivity of mineral deposits. Using the VTEM base frequency, the magnitude of rτ reaches a limiting value of 1.32 for the most highly conductive targets. Interpretations become more complicated in the presence of conductive overburden, which appears to cause the limiting value of rτ to increase to 2 or more.
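Time constants can be recovered from decay curves by a log-linear least-squares fit, assuming a single-exponential decay. The sketch below uses synthetic decays with assumed tau values, not VTEM field data.

```python
import math

def time_constant(times, values):
    """Fit tau in v(t) = A * exp(-t / tau) by least squares on log(v):
    the slope of log(v) versus t is -1/tau."""
    ys = [math.log(v) for v in values]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope

# Synthetic decays (not VTEM data): tau_B = 2.0 ms, tau_dB/dt = 1.5 ms
ts = [0.001 * i for i in range(1, 11)]
b_field = [5.0 * math.exp(-t / 0.002) for t in ts]
db_dt = [3.0 * math.exp(-t / 0.0015) for t in ts]
r_tau = time_constant(ts, b_field) / time_constant(ts, db_dt)
```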
Subjective versus objective evening chronotypes in bipolar disorder.
Gershon, Anda; Kaufmann, Christopher N; Depp, Colin A; Miller, Shefali; Do, Dennis; Zeitzer, Jamie M; Ketter, Terence A
2018-01-01
Disturbed sleep timing is common in bipolar disorder (BD). However, most research is based upon self-reports. We examined relationships between subjective versus objective assessments of sleep timing in BD patients versus controls. We studied 61 individuals with bipolar I or II disorder and 61 healthy controls. Structured clinical interviews assessed psychiatric diagnoses, and clinician-administered scales assessed current mood symptom severity. For subjective chronotype, we used the Composite Scale of Morningness (CSM) questionnaire, using original and modified (1, ¾, ⅔, and ½ SD below mean CSM score) thresholds to define evening chronotype. Objective chronotype was calculated as the percentage of nights (50%, 66.7%, 75%, or 90% of all nights) with sleep interval midpoints at or before (non-evening chronotype) vs. after (evening chronotype) 04:15:00 (4:15 a.m.), based on 25-50 days of continuous actigraph data. BD participants and controls differed significantly with respect to CSM mean scores and CSM evening chronotypes using modified, but not original, thresholds. Groups also differed significantly with respect to chronotype based on sleep interval midpoint means, and based on the threshold of 75% of sleep intervals with midpoints after 04:15:00. Subjective and objective chronotypes correlated significantly with one another. Twenty-one consecutive intervals were needed to yield an evening chronotype classification match of ≥ 95% with that made using the 75% of sleep intervals threshold. Limited sample size/generalizability. Subjective and objective chronotype measurements were correlated with one another in participants with BD. Using population-specific thresholds, participants with BD had a later chronotype than controls. Copyright © 2017 Elsevier B.V. All rights reserved.
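The objective-chronotype rule described above reduces to a midpoint computation and a percentage threshold. A sketch with hypothetical nights follows; the 04:15 cutoff and 75%-of-nights threshold are taken from the abstract.

```python
def sleep_midpoint_minutes(onset_min, offset_min):
    """Midpoint of a sleep interval in minutes after midnight; an offset
    earlier than the onset means sleep crossed midnight."""
    if offset_min < onset_min:
        offset_min += 24 * 60
    return ((onset_min + offset_min) / 2) % (24 * 60)

def is_evening_chronotype(midpoints, cutoff_min=4 * 60 + 15, frac=0.75):
    """Evening chronotype if >= frac of nights have sleep midpoints
    after the cutoff (04:15, per the abstract's 75% threshold)."""
    late = sum(1 for m in midpoints if m > cutoff_min)
    return late / len(midpoints) >= frac

# Hypothetical nights: 23:30-07:30 -> midpoint 03:30; 02:00-10:00 -> 06:00
mids = [sleep_midpoint_minutes(23 * 60 + 30, 7 * 60 + 30),
        sleep_midpoint_minutes(2 * 60, 10 * 60)]
```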
Makri, Nancy
2014-10-07
The real-time path integral representation of the reduced density matrix for a discrete system in contact with a dissipative medium is rewritten in terms of the number of blips, i.e., elementary time intervals over which the forward and backward paths are not identical. For a given set of blips, it is shown that the path sum with respect to the coordinates of all remaining time points is isomorphic to that for the wavefunction of a system subject to an external driving term and thus can be summed by an inexpensive iterative procedure. This exact decomposition reduces the number of terms by a factor that increases exponentially with propagation time. Further, under conditions (moderately high temperature and/or dissipation strength) that lead primarily to incoherent dynamics, the "fully incoherent limit" zero-blip term of the series provides a reasonable approximation to the dynamics, and the blip series converges rapidly to the exact result. Retention of only the blips required for satisfactory convergence leads to speedup of full-memory path integral calculations by many orders of magnitude.
Bae, Jong-Myon; Shin, Sang Yop; Kim, Eun Hee
2015-01-01
Purpose This retrospective cohort study was conducted to estimate the optimal interval for gastric cancer screening in Korean adults with initial negative screening results. Materials and Methods This study consisted of voluntary Korean screenees aged 40 to 69 years who underwent subsequent screening gastroscopies after testing negative in the baseline screening performed between January 2007 and December 2011. A new case was defined as the presence of gastric cancer cells in biopsy specimens obtained upon gastroscopy. The follow-up periods were calculated during the months between the date of baseline screening gastroscopy and positive findings upon subsequent screenings, stratified by sex and age group. The mean sojourn time (MST) for determining the screening interval was estimated using the prevalence/incidence ratio. Results Of the 293,520 voluntary screenees for the gastric cancer screening program, 91,850 (31.29%) underwent subsequent screening gastroscopies between January 2007 and December 2011. The MSTs in men and women were 21.67 months (95% confidence intervals [CI], 17.64 to 26.88 months) and 15.14 months (95% CI, 9.44 to 25.85 months), respectively. Conclusion These findings suggest that the optimal interval for subsequent gastric screening in both men and women is 24 months, supporting the 2-year interval recommended by the nationwide gastric cancer screening program. PMID:25687874
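The prevalence/incidence-ratio estimator of mean sojourn time is a single division; the numbers below are illustrative, not the study's data.

```python
def mean_sojourn_time(prevalence, incidence_rate):
    """MST via the prevalence/incidence ratio: prevalence (proportion positive
    at baseline screening) divided by the incidence rate (cases per
    person-month among those initially negative)."""
    return prevalence / incidence_rate

# Hypothetical figures, not the study's data: 50 prevalent cancers per
# 100,000 screenees; 2.5 incident cancers per 100,000 person-months
mst = mean_sojourn_time(50 / 100_000, 2.5 / 100_000)  # months
```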
Automated Optical Meteor Fluxes and Preliminary Results of Major Showers
NASA Technical Reports Server (NTRS)
Blaauw, R.; Campbell-Brown, M.; Cooke, W.; Kingery, A.; Weryk, R.; Gill, J.
2014-01-01
NASA's Meteoroid Environment Office (MEO) recently established a two-station system to calculate daily automated meteor fluxes in the millimeter-size-range for both single-station and double-station meteors. The cameras each consist of a 17 mm focal length Schneider lens (f/0.95) on a Watec 902H2 Ultimate CCD video camera, producing a 21.7x15.5 degree field of view. This configuration sees meteors down to a magnitude of +6. This paper outlines the concepts of the system, the hardware and software, and results of 3,000+ orbits from the first 18 months of operations. Video from the cameras is run through ASGARD (All Sky and Guided Automatic Real-time Detection), which performs the meteor detection/photometry and invokes the MILIG and MORB (Borovicka 1990) codes to determine the trajectory, speed, and orbit of the meteor. A subroutine in ASGARD allows for approximate shower identification in single-station detections. The ASGARD output is used in routines to calculate the flux. Before a flux can be calculated, a weather algorithm indicates if sky conditions are clear enough to calculate fluxes, at which point a limiting magnitude algorithm is employed. The limiting stellar magnitude is found using astrometry.net (Lang et al. 2012) to identify stars and translated to the corresponding shower and sporadic limiting meteor magnitude. It is recomputed every 10 minutes and is able to react to quickly changing sky conditions. The extensive testing of these results on the Geminids and Eta Aquariids is shown. The flux involves dividing the number of meteors by the collecting area of the system, over the time interval for which that collecting area is valid. The flux algorithm employed here differs from others currently in use in that it does not make the gross oversimplification of choosing a single height to calculate the collection area of the system.
In the MEO system, the volume is broken up into a set of height intervals, with the collecting areas determined by the position of the active shower or sporadic source radiant. The flux per height interval is calculated and summed to obtain the total meteor flux. Both single-station and double-station fluxes are currently found daily. Geminid fluxes on the peak night in 2012 (12-14-2012) were 0.058 meteors/km2/hr as found with double-station meteors and 0.057 meteors/km2/hr as found with single-station meteors, to a limiting magnitude of +6.5. Both of those numbers are in agreement with the well-calibrated fluxes from the Canadian Meteor Orbit Radar. Along with flux algorithms and initial flux results, we present results from the first 18 months of operation, covering 3,000+ meteoroid orbits.
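The height-interval flux summation described above can be sketched as follows; the counts, collecting areas, and observation time are hypothetical.

```python
def total_flux(counts, areas_km2, hours):
    """Total flux: each height interval's meteor count divided by that
    interval's collecting area times the observation time, then summed."""
    return sum(c / (a * hours) for c, a in zip(counts, areas_km2))

# Hypothetical three height intervals observed for 6 clear hours
flux = total_flux([3, 5, 2], [1000.0, 1200.0, 800.0], 6.0)  # meteors/km2/hr
```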
Hart, George W.; Kern, Jr., Edward C.
1987-06-09
An apparatus and method are provided for monitoring a plurality of analog AC circuits by sampling the voltage and current waveforms in each circuit at predetermined intervals, converting the analog current and voltage samples to digital format, storing the digitized samples, and using them to calculate a variety of electrical parameters, some of which are derived from the stored samples. The non-derived quantities are repeatedly calculated and stored over many separate cycles, then averaged. The derived quantities are then calculated at the end of an averaging period. This produces a more accurate reading, especially when averaging over a period in which the power varies over a wide dynamic range. Frequency is measured by timing three cycles of the voltage waveform, using the upward zero crossover point as a starting point for a digital timer.
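The frequency measurement described in the patent, timing three cycles between upward zero crossings, can be sketched as follows; the synthetic 50 Hz waveform is illustrative, not the patent's hardware.

```python
import math

def frequency_from_three_cycles(samples, sample_rate_hz):
    """Estimate frequency by timing three full cycles of a waveform,
    using upward zero crossings (negative-to-nonnegative pairs) as markers."""
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < 0 <= samples[i]]
    if len(crossings) < 4:
        raise ValueError("need at least four upward zero crossings")
    span = crossings[3] - crossings[0]  # samples covering three cycles
    return 3.0 * sample_rate_hz / span

# Synthetic 50 Hz sine at 10 kHz (illustrative); the small phase offset
# keeps zeros from landing exactly on sample points.
rate = 10_000.0
wave = [math.sin(2 * math.pi * 50 * i / rate + 0.1) for i in range(1000)]
freq = frequency_from_three_cycles(wave, rate)
```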
Diurnal forcing of planetary atmospheres
NASA Technical Reports Server (NTRS)
Houben, Howard C.
1991-01-01
The utility of the Mars Planetary Boundary Layer Model (MPBL) for calculations in support of the Mars 94 balloon mission was substantially enhanced by the introduction of a balloon equation of motion into the model. Both vertical and horizontal excursions of the balloon are calculated along with its volume, temperature, and pressure. The simulations reproduce the expected 5-min vertical oscillations of a constant density balloon at altitude on Mars. The results of these calculations are presented for the nominal target location of the balloon. A nonlinear balanced model was developed for the Martian atmosphere. It was used to initialize a primitive equation model for the simulations of the Earth's atmosphere at the time of the El Chichon eruption in 1982. It is also used as an assimilation model to update the temperature and wind fields at frequent intervals.
Renyi Entropy of the Ideal Gas in Finite Momentum Intervals
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.
2003-06-01
Coincidence probabilities of multiparticle events, as measured in finite momentum intervals for Bose and Fermi ideal gas, are calculated and compared with the exact expressions given in statistical physics.
Zhang, Gao-Ming; Guo, Xu-Xiao; Ma, Xiao-Bo; Zhang, Guo-Ming
2016-01-01
Background The aim of this study was to calculate 95% reference intervals and double-sided limits of serum alpha-fetoprotein (AFP) and carcinoembryonic antigen (CEA) according to the CLSI EP28-A3 guideline. Material/Methods Serum AFP and CEA values were measured in samples from 26,000 healthy subjects in the Shuyang area receiving general health checkups. The 95% reference intervals and upper limits were calculated by using MedCalc. Results We provided continuous reference intervals from 20 to 90 years of age for AFP and CEA. The reference intervals were: AFP, 1.31–7.89 ng/ml (males) and 1.01–7.10 ng/ml (females); CEA, 0.51–4.86 ng/ml (males) and 0.35–3.45 ng/ml (females). AFP and CEA were significantly positively correlated with age in both males (r=0.196 and r=0.198) and females (r=0.121 and r=0.197). Conclusions Different races or populations and different detection systems may result in different reference intervals for AFP and CEA. Continuous reference intervals across age are more accurate than age groups. PMID:27941709
Speech rhythm in Kannada speaking adults who stutter.
Maruthy, Santosh; Venugopal, Sahana; Parakh, Priyanka
2017-10-01
A longstanding hypothesis about the underlying mechanisms of stuttering suggests that speech disfluencies may be associated with problems in timing and temporal patterning of speech events. Fifteen adults who do and do not stutter read five sentences, and from these, the vocalic and consonantal durations were measured. Using these, pairwise variability index (raw PVI for consonantal intervals and normalised PVI for vocalic intervals) and interval based rhythm metrics (PercV, DeltaC, DeltaV, VarcoC and VarcoV) were calculated for all the participants. Findings suggested higher mean values in adults who stutter when compared to adults who do not stutter for all the rhythm metrics except for VarcoV. Further, statistically significant difference between the two groups was found for all the rhythm metrics except for VarcoV. Combining the present results with consistent prior findings based on rhythm deficits in children and adults who stutter, there appears to be strong empirical support for the hypothesis that individuals who stutter may have deficits in generation of rhythmic speech patterns.
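The pairwise variability indices named above have standard definitions in the rhythm-metrics literature; a sketch follows, with illustrative durations in milliseconds.

```python
def raw_pvi(durations):
    """Raw PVI: mean absolute difference between successive interval
    durations (commonly applied to consonantal intervals)."""
    diffs = [abs(durations[k + 1] - durations[k])
             for k in range(len(durations) - 1)]
    return sum(diffs) / len(diffs)

def normalised_pvi(durations):
    """nPVI: successive differences normalised by their local mean,
    scaled by 100 (commonly applied to vocalic intervals)."""
    terms = [abs(durations[k + 1] - durations[k])
             / ((durations[k + 1] + durations[k]) / 2)
             for k in range(len(durations) - 1)]
    return 100.0 * sum(terms) / len(terms)

# Illustrative vocalic interval durations in milliseconds
npvi = normalised_pvi([100, 150, 100, 120])
```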
Jithesh, C; Venkataramana, V; Penumatsa, Narendravarma; Reddy, S N; Poornima, K Y; Rajasigamani, K
2015-08-01
To determine and compare the potential difference of nickel release from three different orthodontic brackets, in different artificial pH, at different time intervals. Twenty-seven samples of three different orthodontic brackets were selected and grouped as 1, 2, and 3. Each group was divided into three subgroups depending on the type of orthodontic bracket, salivary pH, and the time interval. The nickel release from each subgroup was analyzed by using an inductively coupled plasma-atomic emission spectrophotometer (Perkin Elmer, Optima 2100 DV, USA). Quantitative analysis of nickel was performed three times, and the mean value was used as the result. ANOVA (F-test) was used to test the significance of differences among the groups at the 0.05 level of significance (P < 0.05). Descriptive statistics were used to calculate the mean, standard deviation, minimum, and maximum. SPSS 18 software (SPSS Ltd., Quarry Bay, Hong Kong; PASW Statistics 18) was used to analyze the study. The analysis shows a significant difference between the three groups. Nickel release from the recycled stainless steel brackets was highest at pH 4.2 at all time intervals except 120 h. The study results show that nickel release from the recycled stainless steel brackets is highest, while metal-slot ceramic brackets release significantly less nickel. Therefore, recycled stainless steel brackets should not be used for nickel-allergic patients; metal-slot ceramic brackets are advisable.
Energy-momentum tensor of bouncing gravitons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iofa, Mikhail Z.
2015-07-14
In models of the Universe with extra dimensions gravity propagates in the whole space-time. Graviton production by matter on the brane is significant in the early hot Universe. In a model of a 3-brane with matter embedded in 5D space-time, conditions for gravitons emitted from the brane to the bulk to return back to the brane are found. For a given 5-momentum of a graviton falling back to the brane, the interval between the times of emission and return to the brane is calculated. A method to calculate the contribution to the energy-momentum tensor from multiple graviton bouncings is developed. Explicit expressions for contributions to the energy-momentum tensor of gravitons which have made one, two and three bounces are obtained and their magnitudes are numerically calculated. These expressions are used to solve the evolution equation for dark radiation. A relation connecting the reheating temperature and the scale of the extra dimension is obtained. For the reheating temperature T_R ∼ 10^6 GeV we estimate the scale of the extra dimension μ to be of order 10^-9 GeV (μ^-1 ∼ 10^-5 cm).
NASA Astrophysics Data System (ADS)
Levesh, J. L.; McLindon, C.; Kulp, M. A.
2017-12-01
An in-depth field study of the Delacroix Island producing field illustrates the evolution of the main East-West trending Delacroix Island fault over the last thirteen million years. Well log correlation and 3-D seismic interpretation of eighteen bio-stratigraphic horizons across the fault reveal a range of stratigraphic thicknesses. A cross section, created with wells upthrown and downthrown to the fault, visually demonstrates varying degrees of thickening and displacement of the stratigraphic intervals across the fault. One upthrown and one downthrown well, with well log curve data up to 30 meters below the surface, were used to calculate interval thicknesses between the main tops as well as five more Pliocene/Pleistocene biostratigraphic markers. Isopach maps, created with these interval thicknesses, depict two styles of interval thickening both of which indicate differential subsidence across the fault. An interval thickness analysis was plotted in both depth and time as well as plots showing the rate of sediment accumulation and depth versus fault displacement. A lineation on the marsh surface consistent with a projection of the fault plane suggests that the fault movement has been episodically continuous to the present and that recent movement may have played a role in submerging the downthrown side of the surface fault trace.
Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A
1994-01-01
In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. To determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency nonlife-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals. Compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.
Vairo, Karine P; Corrêa, Rodrigo C; Lecheta, Melise C; Caneparo, Maria F; Mise, Kleber M; Preti, Daniel; de Carvalho, Claudio J B; Almeida, Lucia M; Moura, Mauricio O
2015-01-01
Southern Brazil is unique due to its subtropical climate. Here, we report on the first forensic entomology case and the first record of Sarconesia chlorogaster (Wiedemann) in a human corpse in this region. Fly samples were collected from a body found indoors at 20°C. Four species were found, but only Chrysomya albiceps (Wiedemann) and S. chlorogaster were used to estimate the minimum postmortem interval (mPMI). The mPMI was calculated using accumulated degree hours (ADH) and developmental time. The S. chlorogaster puparium collected was light in color, so we conducted an experiment to estimate the time since the initiation of pupation more accurately, finding full tanning after 3 h. Development of C. albiceps at 20°C to the end of the third instar takes 7.4 days. The mPMI based on S. chlorogaster (developmental time until the third instar with no more than 3 h of pupal development) was 7.6 days. © 2014 American Academy of Forensic Sciences.
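The accumulated-degree-hour arithmetic behind such mPMI estimates can be sketched as follows. The base temperature and the ADH requirement below are illustrative assumptions chosen to reproduce the 7.4-day figure at a constant 20°C, not values taken from the case report.

```python
# Hypothetical sketch of the accumulated-degree-hour (ADH) method for mPMI.
# Base temperature and ADH requirement are illustrative assumptions.
def accumulated_degree_hours(temps_c, base_temp_c, hours_per_reading=1.0):
    """Sum thermal units above the developmental threshold temperature."""
    return sum(max(t - base_temp_c, 0.0) * hours_per_reading for t in temps_c)

# Constant 20 degC scene with an assumed 10 degC developmental threshold:
adh_per_day = accumulated_degree_hours([20.0] * 24, base_temp_c=10.0)

# Assumed ADH requirement to reach the end of the third instar
# (1776 ADH corresponds to 7.4 days at 20 degC over a 10 degC threshold):
required_adh = 1776.0
mpmi_days = required_adh / adh_per_day
```

Dividing the species' thermal requirement by the degree-hours accumulated per day at the scene temperature yields the development time, to which any additional pupal development (here, up to 3 h) is added.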
In-Situ Three-Dimensional Shape Rendering from Strain Values Obtained Through Optical Fiber Sensors
NASA Technical Reports Server (NTRS)
Chan, Hon Man (Inventor); Parker, Jr., Allen R. (Inventor)
2015-01-01
A method and system for rendering the shape of a multi-core optical fiber or multi-fiber bundle in three-dimensional space in real time based on measured fiber strain data. Three optical fiber cores are arranged in parallel at 120° intervals about a central axis. A series of longitudinally co-located strain sensor triplets, typically fiber Bragg gratings, are positioned along the length of each fiber at known intervals. A tunable laser interrogates the sensors to detect strain on the fiber cores. Software determines the strain magnitude (ΔL/L) for each fiber at a given triplet, then applies beam theory to calculate curvature, bending angle and torsion of the fiber bundle, and from there determines the shape of the fiber in a Cartesian coordinate system by solving a series of ordinary differential equations expanded from the Frenet-Serret equations. This approach eliminates the need for computationally time-intensive curve-fitting and allows the three-dimensional shape of the optical fiber assembly to be displayed in real time.
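The final step, recovering a 3-D curve from curvature and torsion via the Frenet-Serret equations, can be illustrated with a minimal forward-Euler sketch. This is a simplification for illustration, not the patented implementation; the function name and discretization are assumptions. With constant curvature and zero torsion the integrated path should close into a circle.

```python
import numpy as np

# Minimal sketch: integrate the Frenet-Serret frame (tangent T, normal N,
# binormal B) and position r along arc length s, given curvature kappa(s)
# and torsion tau(s). Forward Euler with per-step renormalization.
def integrate_frenet_serret(kappa, tau, ds, n_steps):
    r = np.zeros(3)                      # position
    T = np.array([1.0, 0.0, 0.0])        # tangent
    N = np.array([0.0, 1.0, 0.0])        # normal
    B = np.array([0.0, 0.0, 1.0])        # binormal
    path = [r.copy()]
    for i in range(n_steps):
        k, t = kappa(i * ds), tau(i * ds)
        dT = k * N                       # dT/ds =  kappa N
        dN = -k * T + t * B              # dN/ds = -kappa T + tau B
        dB = -t * N                      # dB/ds = -tau N
        r = r + T * ds
        T, N = T + dT * ds, N + dN * ds
        T /= np.linalg.norm(T)           # re-normalize to limit drift
        N /= np.linalg.norm(N)
        B = np.cross(T, N)               # keep the frame right-handed
        path.append(r.copy())
    return np.array(path)

# Constant curvature 1, zero torsion: a unit circle closing after arc 2*pi.
path = integrate_frenet_serret(lambda s: 1.0, lambda s: 0.0,
                               ds=2 * np.pi / 10000, n_steps=10000)
```

In the described system, kappa and tau would come from the beam-theory step applied to the three per-core strain readings at each sensor triplet.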
Stabilization of memory States by stochastic facilitating synapses.
Miller, Paul
2013-12-06
Bistability within a small neural circuit can arise through an appropriate strength of excitatory recurrent feedback. The stability of a state of neural activity, measured by the mean dwelling time before a noise-induced transition to another state, depends on the neural firing-rate curves, the net strength of excitatory feedback, the statistics of spike times, and increases exponentially with the number of equivalent neurons in the circuit. Here, we show that such stability is greatly enhanced by synaptic facilitation and reduced by synaptic depression. We take into account the alteration in times of synaptic vesicle release, by calculating distributions of inter-release intervals of a synapse, which differ from the distribution of its incoming interspike intervals when the synapse is dynamic. In particular, release intervals produced by a Poisson spike train have a coefficient of variation greater than one when synapses are probabilistic and facilitating, whereas the coefficient of variation is less than one when synapses are depressing. However, in spite of the increased variability in postsynaptic input produced by facilitating synapses, their dominant effect is reduced synaptic efficacy at low input rates compared to high rates, which increases the curvature of neural input-output functions, leading to wider regions of bistability in parameter space and enhanced lifetimes of memory states. Our results are based on analytic methods with approximate formulae and bolstered by simulations of both Poisson processes and of circuits of noisy spiking model neurons.
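The claim that probabilistic facilitating synapses produce inter-release intervals with a coefficient of variation above one can be illustrated with a small simulation. The model and all parameter values below (baseline release probability, facilitation increment, decay time constant, input rate) are assumptions for illustration, not the paper's.

```python
import numpy as np

# Illustrative sketch: drive a probabilistic facilitating synapse with a
# Poisson spike train and measure the CV of its inter-release intervals.
rng = np.random.default_rng(0)

rate_hz, n_spikes = 20.0, 100_000
p0, f_inc, tau_f = 0.05, 0.3, 0.2   # baseline release prob., facilitation step, decay (s)

isi = rng.exponential(1.0 / rate_hz, n_spikes)   # Poisson interspike intervals
spike_times = np.cumsum(isi)

p, last_t, release_times = p0, 0.0, []
for t in spike_times:
    p = p0 + (p - p0) * np.exp(-(t - last_t) / tau_f)  # facilitation decays
    if rng.random() < p:                               # probabilistic release
        release_times.append(t)
    p += f_inc * (1.0 - p)                             # each spike facilitates
    last_t = t

iri = np.diff(release_times)        # inter-release intervals
cv = iri.std() / iri.mean()         # expected > 1: burstier than Poisson
```

Because release probability is elevated just after recent spikes and low after long gaps, releases cluster into bursts, making the release train more variable than its Poisson input.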
Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R
2017-11-17
This article investigated and compared frequency domain and time domain characteristics of drivers' behaviors before and after the start of distracted driving. Data from an existing naturalistic driving study were used. Fast Fourier transform (FFT) was applied for the frequency domain analysis to explore drivers' behavior pattern changes between nondistracted (prestarting of visual-manual task) and distracted (poststarting of visual-manual task) driving periods. Average relative spectral power in a low frequency range (0-0.5 Hz) and the standard deviation in a 10-s time window of vehicle control variables (i.e., lane offset, yaw rate, and acceleration) were calculated and further compared. Sensitivity analyses were also applied to examine the reliability of the time and frequency domain analyses. Results of the mixed model analyses from the time and frequency domain analyses all showed significant degradation in lateral control performance after engaging in visual-manual tasks while driving. Results of the sensitivity analyses suggested that the frequency domain analysis was less sensitive to the frequency bandwidth, whereas the time domain analysis was more sensitive to the time intervals selected for variation calculations. Different time interval selections can result in significantly different standard deviation values, whereas average spectral power analysis on yaw rate in both low and high frequency bandwidths showed consistent results, that higher variation values were observed during distracted driving when compared to nondistracted driving. This study suggests that driver state detection needs to consider the behavior changes during the prestarting periods, instead of only focusing on periods with physical presence of distraction, such as cell phone use. Lateral control measures can be a better indicator of distraction detection than longitudinal controls. 
In addition, frequency domain analyses proved to be a more robust and consistent method in assessing driving performance compared to time domain analyses.
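The two kinds of measures compared in the study can be sketched on a synthetic yaw-rate trace. The sampling rate, signal, and band edges below are illustrative assumptions, not the study's actual data definitions.

```python
import numpy as np

# Sketch under assumed parameters (10 Hz sampling): relative spectral power
# in the low-frequency band (0-0.5 Hz) via FFT, and the standard deviation
# over consecutive 10-s windows, for a synthetic yaw-rate signal.
fs = 10.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
yaw = 0.5 * np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.standard_normal(t.size)

# Frequency domain: share of total power at or below 0.5 Hz
freqs = np.fft.rfftfreq(yaw.size, d=1 / fs)
power = np.abs(np.fft.rfft(yaw - yaw.mean())) ** 2
rel_low_power = power[freqs <= 0.5].sum() / power.sum()

# Time domain: standard deviation in consecutive 10-s windows
win = int(10 * fs)
window_sd = [yaw[i:i + win].std() for i in range(0, yaw.size - win + 1, win)]
```

The window length `win` is exactly the kind of choice the sensitivity analysis flags: the time-domain standard deviations shift with the interval selected, while the band-power ratio is comparatively stable.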
Analyzing time-ordered event data with missed observations.
Dokter, Adriaan M; van Loon, E Emiel; Fokkema, Wimke; Lameris, Thomas K; Nolet, Bart A; van der Jeugd, Henk P
2017-09-01
A common problem with observational datasets is that not all events of interest may be detected. For example, observing animals in the wild can be difficult when animals move, hide, or cannot be closely approached. We consider time series of events recorded in conditions where events are occasionally missed by observers or observational devices. These time series are not restricted to behavioral protocols, but can be any cyclic or recurring process where discrete outcomes are observed. Undetected events cause biased inferences on the process of interest, and statistical analyses are needed that can identify and correct the compromised detection processes. Missed observations in time series lead to observed time intervals between events at multiples of the true inter-event time, which conveys information on their detection probability. We derive the theoretical probability density function for observed intervals between events that includes a probability of missed detection. Methodology and software tools are provided for analysis of event data with potential observation bias and its removal. The methodology was applied to simulation data and a case study of defecation rate estimation in geese, which is commonly used to estimate their digestive throughput and energetic uptake, or to calculate goose usage of a feeding site from dropping density. Simulations indicate that at a moderate chance to miss arrival events (p = 0.3), uncorrected arrival intervals were biased upward by up to a factor 3, while parameter values corrected for missed observations were within 1% of their true simulated value. A field case study shows that not accounting for missed observations leads to substantial underestimates of the true defecation rate in geese, and spurious rate differences between sites, which are introduced by differences in observational conditions.
These results show that the derived methodology can be used to effectively remove observational biases in time-ordered event data.
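The core inflation effect is easy to reproduce in simulation. The sketch below is not the paper's estimator; it only demonstrates that when each event is missed independently with probability p, an observed interval spans a geometric run of true intervals, so its mean is inflated by 1/(1-p). The interval distribution used is an arbitrary illustrative choice.

```python
import numpy as np

# Simulation sketch of the missed-detection bias (parameters illustrative):
# drop each event with probability p_miss and compare observed vs. true
# mean inter-event intervals.
rng = np.random.default_rng(2)
p_miss, n_events = 0.3, 200_000

true_intervals = rng.gamma(shape=10.0, scale=0.5, size=n_events)  # true process
event_times = np.cumsum(true_intervals)
detected = rng.random(n_events) > p_miss          # each event kept with 1 - p

observed_intervals = np.diff(event_times[detected])
inflation = observed_intervals.mean() / true_intervals.mean()  # ~ 1/(1-p)
```

With p = 0.3 the mean observed interval is inflated by about 1/0.7 ≈ 1.43, which is the bias the derived probability density function lets one identify and remove.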
Determination of strong ion gap in healthy dogs.
Fettig, Pamela K; Bailey, Dennis B; Gannon, Kristi M
2012-08-01
To determine and compare reference intervals of the strong ion gap (SIG) in a group of healthy dogs determined with 2 different equations. Prospective observational study. Tertiary referral and teaching hospital. Fifty-four healthy dogs. None. Serum biochemistry and blood gas analyses were performed for each dog. From these values, SIG was calculated using 2 different equations: SIG1 = SIDa − SIDe, where SIDa = [Na⁺] + [K⁺] − [Cl⁻] + 2 × [Ca²⁺] + 2 × [Mg²⁺] − [L-lactate] and SIDe = TCO₂ + [A⁻]; and SIG2 = ([albumin] × 4.9) − anion gap. Reference intervals were established for each SIG equation using the mean ± 1.96 × standard deviation (SD). For SIG1, the median was 7.13 mEq/L (range, 1.05 to 11.30 mEq/L) and the derived reference interval was 1.85 to 10.61 mEq/L. Median SIG2 was −0.22 mEq/L (range, −5.34 to 6.61 mEq/L) and the mean SIG2 was −0.09 mEq/L (95% confidence interval for the mean, −0.82 to 0.65 mEq/L). The derived reference interval was −5.36 to 5.18 mEq/L. The results of the SIG calculations were significantly different (P < 0.0001) between the 2 equations used. The 2 equations used to calculate SIG yielded significantly different results and cannot be used interchangeably. The authors believe SIG2 to be a more accurate reflection of acid-base status in healthy dogs, and recommend that this calculation be used for future studies. © Veterinary Emergency and Critical Care Society 2012.
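The two SIG equations can be worked through with illustrative canine values. The numbers below are invented for demonstration and are not from the study's dataset.

```python
# Worked sketch of the two SIG equations with illustrative values
# (mEq/L unless noted); not data from the study.
na, k, cl = 145.0, 4.0, 110.0       # strong ions
ca, mg = 1.3, 0.5                   # ionized Ca and Mg, mmol/L
lactate, tco2, a_minus = 1.5, 21.0, 12.0
albumin_g_dl, anion_gap = 3.0, 15.0

sid_a = na + k - cl + 2 * ca + 2 * mg - lactate  # apparent strong ion difference
sid_e = tco2 + a_minus                           # effective strong ion difference
sig1 = sid_a - sid_e

sig2 = albumin_g_dl * 4.9 - anion_gap
```

With these values sig1 ≈ 8.1 mEq/L and sig2 ≈ −0.3 mEq/L, each falling near the middle of its respective reported reference interval, which also illustrates why the two equations cannot be used interchangeably.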
Unsteady Flow About Porous Cambered Plates
1988-06-01
regular time intervals, and evolution of the vortex wake is calculated through the use of the velocities induced at each vortex location. (Figure residue: "Figure 24. Wake Vortex Positions for ...") Subject terms: unsteady flow, discrete vortex analysis.
NASA Astrophysics Data System (ADS)
Vasić, M.; Radojević, Z.
2017-08-01
One of the main disadvantages of the recently reported method for setting up the drying regime based on the theory of moisture migration during drying lies in the fact that it requires a large number of isothermal experiments. In addition, each isothermal experiment requires a different set of drying air parameters. The main goal of this paper was to find a way to reduce the number of isothermal experiments without affecting the quality of the previously proposed calculation method. The first task was to define the lower and upper inputs as well as the output of the "black box" to be used in the Box-Wilkinson orthogonal multi-factorial experimental design. Three inputs (drying air temperature, humidity and velocity) were used within the experimental design. The output parameter of the model is the time interval between any two chosen characteristic points on the Deff vs. t curve. The second task was to calculate the output parameter for each planned experiment. The final output of the model is an equation which can predict the time interval between any two chosen characteristic points as a function of the drying air parameters. This equation is valid for any value of the drying air parameters within the area bounded by the lower and upper limiting values.
A gravimetric technique for evaluating flow continuity from two infusion devices.
Leff, R D; True, W R; Roberts, R J
1987-06-01
A computerized gravimetric technique for examining the flow continuity from infusion devices was developed, and two infusion devices with different mechanisms of pump operation were evaluated to illustrate this technique. A BASIC program that records serial weight measurements and calculates weight change from previous determinations was written for and interfaced with a gravimetric balance and IBM PC. A plot of effused weight (normalized weight change that reflects the difference between desired timed-sample interval and actual time) versus time (desired timed-sample interval) was constructed. The gravimetric technique was evaluated using both a peristaltic-type and a piston-type infusion pump. Intravenous solution (5% dextrose and 0.9% sodium chloride) was effused at 10 mL/hr and collected in a beaker. Weights were measured at 10-second intervals over a two-hour infusion period, and the weights of the effused solution were plotted versus time. Flow continuity differed between the two infusion devices. Actual effused weight decreased to 0.007 g/10 sec during the refill cycle of the piston-type pump; the mean (+/- S.D.) effused weight was 0.029 +/- 0.002 g/10 sec. The desired effusion rate was 0.028 g/10 sec. The peristaltic pump had greater flow continuity, with a mean effusion weight of 0.028 +/- 0.003 g/10 sec. The gravimetric technique described in this report can be used to quantitatively depict the effusion profiles of infusion devices. Further studies are needed to identify the degree of flow continuity that is clinically acceptable for infusion devices.
Quantum field-theoretical description of neutrino and neutral kaon oscillations
NASA Astrophysics Data System (ADS)
Volobuev, Igor P.
2018-05-01
It is shown that the neutrino and neutral kaon oscillation processes can be consistently described in quantum field theory using only plane waves of the mass eigenstates of neutrinos and neutral kaons. To this end, the standard perturbative S-matrix formalism is modified so that it can be used for calculating the amplitudes of the processes passing at finite distances and finite time intervals. The distance-dependent and time-dependent parts of the amplitudes of the neutrino and neutral kaon oscillation processes are calculated and the results turn out to be in accordance with those of the standard quantum mechanical description of these processes based on the notion of neutrino flavor states and neutral kaon states with definite strangeness. However, the physical picture of the phenomena changes radically: now, there are no oscillations of flavor or definite strangeness states, but, instead of it, there is interference of amplitudes due to different virtual mass eigenstates.
Ultrasonic sensing of GMAW: Laser/EMAT defect detection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, N.M.; Johnson, J.A.; Larsen, E.D.
1992-08-01
In-process ultrasonic sensing of welding allows detection of weld defects in real time. A noncontacting ultrasonic system is being developed to operate in a production environment. The principal components are a pulsed laser for ultrasound generation and an electromagnetic acoustic transducer (EMAT) for ultrasound reception. A PC-based data acquisition system determines the quality of the weld on a pass-by-pass basis. The laser/EMAT system interrogates the area in the weld volume where defects are most likely to occur. This area of interest is identified by computer calculations on a pass-by-pass basis using weld planning information provided by the off-line programmer. The absence of a signal above the threshold level in the computer-calculated time interval indicates a disruption of the sound path by a defect. The ultrasonic sensor system then provides an input signal to the weld controller about the defect condition. 8 refs.
Surface tension and modeling of cellular intercalation during zebrafish gastrulation.
Calmelet, Colette; Sepich, Diane
2010-04-01
In this paper we discuss a model of zebrafish embryo notochord development based on the effect of surface tension of cells at the boundaries. We study the process of interaction of mesodermal cells at the boundaries due to adhesion and cortical tension, resulting in cellular intercalation. From in vivo experiments, we obtain cell outlines from time-lapse images of cell movements during zebrafish embryo development. Using the Cellular Potts Model, we calculate the total surface energy of the system of cells at cell contacts at different time intervals. We analyze the variations of total energy depending on the nature of the cell contacts. We demonstrate that our model is viable by calculating the total surface energy for experimentally observed configurations of cells and showing that in our model these configurations correspond to a decrease in total energy in both two and three dimensions.
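The boundary-energy term at the heart of a Cellular Potts calculation can be sketched on a toy lattice. The grid, cell labels, and contact energy below are invented for illustration and use a single uniform contact energy rather than the adhesion- and tension-dependent coefficients of the actual model.

```python
import numpy as np

# Toy sketch of the Cellular Potts boundary-energy term: sum a contact
# energy over neighbouring lattice-site pairs that belong to different cells.
def boundary_energy(grid, j_contact):
    e = 0.0
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (0, 1)):      # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols and grid[r, c] != grid[rr, cc]:
                    e += j_contact
    return e

# Two cells sharing a straight vertical boundary on a 4x4 grid:
grid = np.array([[1, 1, 2, 2]] * 4)
energy = boundary_energy(grid, j_contact=2.0)    # 4 boundary pairs
```

Comparing this total energy across successive observed cell configurations is what lets the model test whether intercalation proceeds down an energy gradient.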
Demonstration of Johnson noise thermometry with all-superconducting quantum voltage noise source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, Takahiro, E-mail: yamada-takahiro@aist.go.jp; Urano, Chiharu; Maezawa, Masaaki
We present a Johnson noise thermometry (JNT) system based on an integrated quantum voltage noise source (IQVNS) that has been fully implemented using superconducting circuit technology. To enable precise measurement of Boltzmann's constant, an IQVNS chip was designed to produce intrinsically calculable pseudo-white noise to calibrate the JNT system. On-chip real-time generation of pseudo-random codes via simple circuits produced pseudo-voltage noise with a harmonic tone interval of less than 1 Hz, which was one order of magnitude finer than the harmonic tone interval of conventional quantum voltage noise sources. We estimated a value for Boltzmann's constant experimentally by performing JNT measurements at the temperature of the triple point of water using the IQVNS chip.
A fatigue monitoring system based on time-domain and frequency-domain analysis of pulse data
NASA Astrophysics Data System (ADS)
Shen, Jiaai
2018-04-01
Fatigue is a problem nearly everyone faces and a condition everyone dislikes. If we can detect a person's fatigue and alert them to it, dangers such as traffic accidents and sudden death can be effectively reduced and fatigued operation of equipment avoided. People can also monitor their own and others' physical condition in time to alternate work with rest. This article develops a wearable bracelet based on FFT pulse frequency spectrum analysis and on the standard deviation and range of the inter-beat interval (IBI), using heart rate (BPM) and IBI measured in tired and alert states. The hardware is based on an Arduino, a pulse rate sensor, and a Bluetooth module; the software relies on a networked micro-database and a mobile app. Through sample experiments to obtain a more accurate reference value for judging tiredness, we show that a person's fatigue condition can be assessed from heart rate (BPM) and inter-beat interval (IBI).
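The features named in the abstract reduce to simple arithmetic on the IBI series. The IBI values below are invented for illustration; the bracelet's actual thresholds and data are not reproduced.

```python
import numpy as np

# Minimal sketch of the features used: heart rate (BPM) from the mean IBI,
# plus the standard deviation and range of the IBI series (values invented).
ibi_ms = np.array([820, 815, 840, 810, 835, 825, 818, 842])  # illustrative IBIs

bpm = 60_000.0 / ibi_ms.mean()            # beats per minute from mean IBI (ms)
ibi_sd = ibi_ms.std(ddof=1)               # SDNN-style standard deviation
ibi_range = ibi_ms.max() - ibi_ms.min()   # range of IBI
```

A fatigue classifier would then compare these per-window statistics (and the FFT spectrum of the pulse signal) against reference values established in the sample experiments.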
Dynamics of snoring sounds and its connection with obstructive sleep apnea
NASA Astrophysics Data System (ADS)
Alencar, Adriano M.; da Silva, Diego Greatti Vaz; Oliveira, Carolina Beatriz; Vieira, André P.; Moriya, Henrique T.; Lorenzi-Filho, Geraldo
2013-01-01
Snoring is extremely common in the general population and when irregular may indicate the presence of obstructive sleep apnea. We analyze the overnight sequence of wave packets - the snore sound - recorded during full polysomnography in patients referred to the Sleep Laboratory due to suspected obstructive sleep apnea. We hypothesize that irregular snore, with duration in the range between 10 and 100 s, correlates with respiratory obstructive events. We find that the number of irregular snores - easily accessible, and quantified by what we call the snore time interval index (STII) - is in good agreement with the well-known apnea-hypopnea index, which expresses the severity of obstructive sleep apnea and is extracted only from polysomnography. In addition, the Hurst analysis of the snore sound itself, which calculates the fluctuations in the signal as a function of time interval, is used to build a classifier that is able to distinguish between patients with no or mild apnea and patients with moderate or severe apnea.
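The Hurst analysis mentioned can be sketched with a minimal rescaled-range (R/S) estimator. The window sizes and the white-noise test signal are illustrative; the paper applies the analysis to the snore sound itself, and white noise should yield an exponent near 0.5.

```python
import numpy as np

# Minimal rescaled-range (R/S) Hurst estimator: slope of log(R/S) vs log(n)
# over non-overlapping windows of increasing size n.
def hurst_rs(x, window_sizes):
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation profile
            r = dev.max() - dev.min()           # range of the profile
            s = seg.std()
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]      # slope = Hurst exponent

rng = np.random.default_rng(3)
h_white = hurst_rs(rng.standard_normal(20_000), [16, 32, 64, 128, 256, 512])
```

Applied to snore-sound fluctuations, an exponent departing from the uncorrelated 0.5 baseline is the kind of feature the paper's classifier uses to separate no/mild apnea from moderate/severe apnea.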
Early declaration of death by neurologic criteria results in greater organ donor potential.
Resnick, Shelby; Seamon, Mark J; Holena, Daniel; Pascual, Jose; Reilly, Patrick M; Martin, Niels D
2017-10-01
Aggressive management of patients prior to and after determination of death by neurologic criteria (DNC) is necessary to optimize organ recovery, transplantation, and the number of organs transplanted per donor (OTPD). The effects of timing are an understudied but potentially pivotal component of donor management. The objective of this study was to analyze specific time points (time to DNC, time to procurement) and the time intervals between them to better characterize the optimal timeline of organ donation. Using data over a 5-year period (2011-2015) from the largest US organ procurement organization, all patients with catastrophic brain injury who donated transplantable organs were retrospectively reviewed. Active smokers were excluded. Maximum donor potential was seven organs (heart, lungs [2], kidneys [2], liver, and pancreas). Time from admission to declaration of DNC and donation was calculated. Mean time points stratified by specific organ procurement rates and overall OTPD were compared using unpaired t-tests. Of 1719 DNC organ donors, 381 were secondary to head trauma. Smokers and organs recovered but not transplanted were excluded, leaving 297 patients. Males comprised 78.8%, the mean age was 36.0 (±16.8) years, and 87.6% were treated at a trauma center. Higher donor potential (>4 OTPD) was associated with shorter average times from admission to brain death: 66.6 versus 82.2 hours, P = 0.04. Lung donors were also associated with shorter average times from admission to brain death: 61.6 versus 83.6 hours, P = 0.004. The time interval from DNC to donation varied minimally among groups and did not affect donation rates. A shorter interval between admission and declaration of DNC was associated with increased OTPD, especially lungs. Further research is needed to identify what role timing plays in the management of the potential organ donor and how it relates to donor management goals. Copyright © 2017 Elsevier Inc. All rights reserved.
Associations between dairy cow inter-service interval and probability of conception.
Remnant, J G; Green, M J; Huxley, J N; Hudson, C D
2018-07-01
Recent research has indicated that the interval between inseminations in modern dairy cattle is often longer than the commonly accepted cycle length of 18-24 days. This study analysed 257,396 inseminations in 75,745 cows from 312 herds in England and Wales. The interval between subsequent inseminations in the same cow in the same lactation (inter-service interval, ISI) were calculated and inseminations categorised as successful or unsuccessful depending on whether there was a corresponding calving event. Conception risk was calculated for each individual ISI between 16 and 28 days. A random effects logistic regression model was fitted to the data with pregnancy as the outcome variable and ISI (in days) included in the model as a categorical variable. The modal ISI was 22 days and the peak conception risk was 44% for ISIs of 21 days rising from 27% at 16 days. The logistic regression model revealed significant associations of conception risk with ISI as well as 305 day milk yield, insemination number, parity and days in milk. Predicted conception risk was lower for ISIs of 16, 17 and 18 days and higher for ISIs of 20, 21 and 22 days compared to 25 day ISIs. A mixture model was specified to identify clusters in insemination frequency and conception risk for ISIs between 3 and 50 days. A "high conception risk, high insemination frequency" cluster was identified between 19 and 26 days which indicated that this time period was the true latent distribution for ISI with optimal reproductive outcome. These findings suggest that the period of increased numbers of inseminations around 22 days identified in existing work coincides with the period of increased probability of conception and therefore likely represents true return estrus events. Copyright © 2018 Elsevier Inc. All rights reserved.
Perez, Anne E; Haskell, Neal H; Wells, Jeffrey D
2014-08-01
Carrion insect succession patterns have long been used to estimate the postmortem interval (PMI) during a death investigation. However, no published carrion succession study included sufficient replication to calculate a confidence interval about a PMI estimate based on occurrence data. We exposed 53 pig carcasses (16±2.5 kg), near the likely minimum needed for such statistical analysis, at a site in north-central Indiana, USA, over three consecutive summer seasons. Insects and Collembola were sampled daily from each carcass for a total of 14 days, by which time each was skeletonized. The criteria for judging a life stage of a given species to be potentially useful for succession-based PMI estimation were (1) nonreoccurrence (observed during a single period of presence on a corpse), and (2) being found in a sufficiently large proportion of carcasses to support a PMI confidence interval. For this data set that proportion threshold is 45/53. Of the 266 species collected and identified, none was nonreoccurring, in that each showed at least a gap of one day on a single carcass. If the definition of nonreoccurrence is relaxed to allow such a single one-day gap, the larval forms of Necrophila americana, Fannia scalaris, Cochliomyia macellaria, Phormia regina, and Lucilia illustris satisfied these two criteria. Adults of Creophilus maxillosus, Necrobia ruficollis, and Necrodes surinamensis were common and showed only a few single-day gaps in occurrence. C. maxillosus, P. regina, and L. illustris displayed exceptional forensic utility in that they were observed on every carcass. Although these observations were made at a single site during one season of the year, the species we found to be useful have large geographic ranges. We suggest that future carrion insect succession research focus only on a limited set of species with high potential forensic utility so as to reduce sample effort per carcass and thereby enable increased experimental replication.
Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Baek, Hyun Jae; Shin, JaeWook
2017-08-15
Most of the wrist-worn devices on the market provide continuous heart rate measurement using photoplethysmography, but do not yet provide continuous heart rate variability (HRV) measurement from beat-to-beat pulse intervals. The reason is the difficulty of measuring a continuous pulse interval during movement with a wearable device, because photoplethysmography is by nature susceptible to motion noise. This study investigated the effect of missing heart beat interval data on HRV analysis in cases where the pulse interval cannot be measured because of movement noise. First, we performed simulations by randomly removing data from the RR intervals of electrocardiograms measured from 39 subjects and observed the changes in the relative and normalized errors of the HRV parameters according to the total length of the missing heart beat interval data. Second, we measured the pulse interval from 20 subjects using a wrist-worn device for 24 h and observed the error values for the missing pulse interval data caused by movement during actual daily life. The experimental results showed that mean NN and RMSSD were the most robust to missing heart beat interval data among all the parameters in the time and frequency domains. Much of the pulse interval data could not be obtained during daily life; the sample number was often too small for spectral analysis because of the long missing durations, so the frequency domain parameters frequently could not be calculated, except during the sleep state with little motion. The errors of the HRV parameters were proportional to the missing data duration. Based on the results of this study, the maximum missing duration yielding acceptable errors for each parameter is recommended for use when HRV analysis is performed on a wrist-worn device.
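The simulation design can be sketched on synthetic data. The RR series and removal pattern below are assumptions for illustration (independent Gaussian intervals, random dropping of whole intervals), not the paper's subject recordings; with such data, mean NN and RMSSD should indeed barely change.

```python
import numpy as np

# Simulation sketch: randomly drop a fraction of RR intervals, as when motion
# noise hides beats, and compare mean NN and RMSSD before and after.
rng = np.random.default_rng(4)
rr = 800 + 50 * rng.standard_normal(5_000)   # synthetic RR intervals, ms

keep = rng.random(rr.size) > 0.2             # 20% of intervals go missing
rr_obs = rr[keep]

def rmssd(x):
    """Root mean square of successive differences."""
    return np.sqrt(np.mean(np.diff(x) ** 2))

err_mean_nn = abs(rr_obs.mean() - rr.mean()) / rr.mean()
err_rmssd = abs(rmssd(rr_obs) - rmssd(rr)) / rmssd(rr)
```

Frequency-domain parameters are harder to salvage: spectral estimation needs an evenly resampled, sufficiently long segment, which long missing runs during daily activity rarely leave intact.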
Aris, Izzuddin M; Bernard, Jonathan Y; Chen, Ling-Wei; Tint, Mya Thway; Lim, Wai Yee; Soh, Shu E; Saw, Seang-Mei; Shek, Lynette Pei-Chi; Godfrey, Keith M; Gluckman, Peter D; Chong, Yap-Seng; Yap, Fabian; Kramer, Michael S; Lee, Yung Seng
2017-01-01
Objective There have been hypotheses that early life adiposity gain may influence blood pressure (BP) later in life. We examined associations between timing of height, body mass index (BMI) and adiposity gains in early life with BP at 48 months in an Asian pregnancy-birth cohort. Methods In 719 children, velocities for height, BMI and abdominal circumference (AC) were calculated at five intervals [0-3, 3-12, 12-24, 24-36 and 36-48 months]. Triceps (TS) and subscapular skinfold (SS) velocities were calculated between 0-18, 18-36 and 36-48 months. Systolic (SBP) and diastolic blood pressure (DBP) was measured at 48 months. Growth velocities at later periods were adjusted for growth velocities in preceding intervals as well as measurements at birth. Results After adjusting for confounders and child height at BP measurement, each unit z-score gain in BMI, AC, TS and SS velocities at 36-48 months were associated with 2.3 (95% CI:1.6, 3.1), 2.1 (1.3, 2.8), 1.4 (0.6, 2.2) and 1.8 (1.0, 2.6) mmHg higher SBP respectively, and 0.9 (0.4, 1.4), 0.9 (0.4, 1.3), 0.6 (0.1, 1.1) and 0.8 (0.3, 1.3) mmHg higher DBP respectively. BMI and adiposity velocities (AC, TS or SS) at various intervals in the first 36 months however, were not associated with BP. Faster BMI, AC, TS and SS velocities, but not height, at 36-48 months were associated with 0.22 (0.15, 0.29), 0.17 (0.10, 0.24), 0.11 (0.04, 0.19) and 0.15 (0.08, 0.23) units higher SBP z-score respectively, and OR=1.46 (95% CI: 1.13-1.90), 1.49 (1.17-1.92), 1.45 (1.09-1.92) and 1.43 (1.09, 1.88) times higher risk of prehypertension/hypertension respectively at 48 months. Conclusion Our results indicated that faster BMI and adiposity (AC, TS or SS) velocities only at the preceding interval before 48 months (36-48 months), but not at earlier intervals in the first 36 months, are predictive of BP and prehypertension/hypertension at 48 months. PMID:28186098
A Very Simple Method to Calculate the (Positive) Largest Lyapunov Exponent Using Interval Extensions
NASA Astrophysics Data System (ADS)
Mendes, Eduardo M. A. M.; Nepomuceno, Erivelton G.
2016-12-01
In this letter, a very simple method to calculate the positive largest Lyapunov exponent (LLE), based on the concept of interval extensions and using the original equations of motion, is presented. The exponent is estimated from the slope of the line derived from the lower-bound error when two interval extensions of the original system are considered. It is shown that the algorithm is robust, fast and easy to implement, and can be considered an alternative to other algorithms available in the literature. The method has been successfully tested on five well-known systems: the logistic and Hénon maps, the Lorenz and Rössler equations, and the Mackey-Glass system.
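The essence of the method can be sketched in a few lines: iterate two mathematically equivalent interval extensions of a map in ordinary floating point and fit the slope of the logarithm of their divergence (the lower-bound error) before it saturates. The extension pair, map parameters, and sampling thresholds below are illustrative choices, not necessarily the paper's exact settings.

```python
import math

def lle_logistic(r=4.0, x0=0.1, n_iter=200):
    """Estimate the largest Lyapunov exponent of the logistic map from
    the divergence of two equivalent interval extensions:
    f1(x) = r*x*(1-x) and f2(x) = r*x - r*x*x round differently in
    floating point, and their difference grows roughly like exp(LLE * n)."""
    a = b = x0
    samples = []  # (n, log|a - b|) pairs in the pre-saturation growth region
    for n in range(1, n_iter + 1):
        a = r * a * (1.0 - a)       # interval extension 1
        b = r * b - r * b * b       # interval extension 2
        d = abs(a - b)
        if d >= 1e-3:               # error has saturated; stop sampling
            break
        if d > 1e-14:
            samples.append((n, math.log(d)))
    if len(samples) < 2:
        raise ValueError("not enough divergence samples to fit a slope")
    # ordinary least-squares slope of log-error versus iteration count
    m = len(samples)
    sx = sum(n for n, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(n * n for n, _ in samples)
    sxy = sum(n * y for n, y in samples)
    return (m * sxy - sx * sy) / (m * sxx - sx * sx)
```

For r = 4 the exact exponent is ln 2 ≈ 0.693, and the fitted slope should land close to it.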
Donald B.K. English
2000-01-01
In this paper I use bootstrap procedures to develop confidence intervals for estimates of total industrial output generated per thousand tourist visits. Mean expenditures from replicated visitor expenditure data included weights to correct for response bias. Impacts were estimated with IMPLAN. Ninety percent interval endpoints were 6 to 16 percent above or below the...
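A percentile-bootstrap interval of the kind described can be sketched as follows; this is a generic resampling sketch, not English's exact weighted expenditure procedure, and the data in the usage example are invented.

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.10, seed=1):
    """Percentile bootstrap (1 - alpha) confidence interval for a statistic:
    resample the data with replacement, recompute the statistic each time,
    and take the alpha/2 and 1 - alpha/2 empirical quantiles."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data])
                  for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]
```

With alpha = 0.10 this yields the ninety-percent interval endpoints referred to in the abstract.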
SU-E-P-05: Is Routine Treatment Planning System Quality Assurance Necessary?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alaei, P
Purpose: To evaluate the variation of dose calculations using a treatment planning system (TPS) over a two year period and assessment of the need for TPS QA on regular intervals. Methods: Two phantoms containing solid water and lung- and bone-equivalent heterogeneities were constructed in two different institutions for the same brand treatment planning system. Multiple plans, consisting of photons and electron beams, including IMRT and VMAT ones, were created and calculated on the phantoms. The accuracy of dose computation in the phantoms was evaluated at the onset by dose measurements within the phantoms. The dose values at up to 24more » points of interest (POI) within the solid water, lung, and bone slabs, as well as mean doses to several regions of interest (ROI), were re-calculated over a two-year period which included two software upgrades. The variations in POI and ROI dose values were analyzed and evaluated. Results: The computed doses vary slightly month-over-month. There are noticeable variations at the times of software upgrade, if the upgrade involves remodeling and/or re-commissioning of the beams. The variations are larger in certain points within the phantom, usually in the buildup region or near interfaces, and are almost non-existent for electron beams. Conclusion: Routine TPS QA is recommended by AAPM and other professional societies, and is often required by accreditation organizations. The frequency and type of QA, though, is subject to debate. The results presented here demonstrate that the frequency of these tests could be at longer intervals than monthly. However, it is essential to perform TPS QA at the time of commissioning and after each software upgrade.« less
NASA Astrophysics Data System (ADS)
Donahue, William; Newhauser, Wayne D.; Ziegler, James F.
2016-09-01
Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
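The abstract does not give the authors' 6-parameter model itself. As an illustration of how simple an analytical range model can be, the classic two-parameter Bragg-Kleeman power law for protons in water is shown below; the constants are commonly quoted textbook values, not the authors' fit.

```python
def bragg_kleeman_range(energy_mev, alpha=0.0022, p=1.77):
    """Proton range in water (cm) from the Bragg-Kleeman rule
    R = alpha * E^p. The default alpha and p are widely quoted fit
    constants for water, good to a few percent over roughly the
    therapeutic energy range."""
    return alpha * energy_mev ** p
```

A 100 MeV proton comes out near 7.6 cm, consistent with tabulated water ranges of about 7.7 cm.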
2014-01-01
Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
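For the plain random-effects meta-analysis case (no covariates), the Q-profile construction that the authors extend can be sketched directly: invert the generalized Q statistic against χ² quantiles. The sketch below hard-codes the χ² quantiles for 6 degrees of freedom and uses invented data; a meta-regression version would profile the residual Q instead.

```python
def q_stat(tau2, y, v):
    """Generalized Cochran Q at a trial between-study variance tau2,
    for study estimates y with within-study variances v."""
    w = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))

def q_profile_ci(y, v, chi2_lo, chi2_hi):
    """Q-profile confidence interval for tau2: Q(tau2) is decreasing in
    tau2, so solve Q(tau2) = chi2 quantile by bisection, truncating
    negative solutions at zero."""
    def solve(target):
        if q_stat(0.0, y, v) <= target:
            return 0.0
        lo, hi = 0.0, 100.0
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if q_stat(mid, y, v) > target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    # the lower bound for tau2 comes from the upper chi2 quantile and vice versa
    return solve(chi2_hi), solve(chi2_lo)
```

With k = 7 studies (6 degrees of freedom), the 2.5% and 97.5% χ² quantiles are about 1.237 and 14.449.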
Iversen, B S; Sabbioni, E; Fortaner, S; Pietra, R; Nicolotti, A
2003-01-20
Statistical data treatment is a key point in the assessment of trace element reference values, as it is the conclusive stage of a comprehensive and organized evaluation of metal concentrations in human body fluids. The EURO TERVIHT project (Trace Elements Reference Values in Human Tissues) was started to evaluate, check and suggest harmonized procedures for the establishment of trace element reference intervals in body fluids and tissues. Unfortunately, different statistical approaches are used in this research field, making data comparison difficult and in some cases impossible. Although international organizations such as the International Federation of Clinical Chemistry (IFCC) and the International Union of Pure and Applied Chemistry (IUPAC) have issued recommended guidelines for reference value assessment, including the statistical data treatment, a unique format and a standardized data layout are still missing. The aim of the present study is to present a software package (BioReVa), running under the Microsoft Windows platform, suitable for calculating the reference intervals of trace elements in body matrices. The main aims in creating an easy-to-use application were to control the data distribution, to establish the reference intervals according to the accepted recommendations on the basis of simple statistics, to obtain a standard presentation of experimental data, and to have an application into which further needs could be integrated in the future. BioReVa calculates the IFCC reference intervals as well as the coverage intervals recommended by IUPAC as a supplement to the IFCC intervals. Examples of reference values and reference intervals calculated with the BioReVa software concern Pb and Se in blood; Cd, In and Cr in urine; and Hg and Mo in hair of different general European populations.
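The core of the nonparametric IFCC-style computation is just percentile estimation on the reference sample (central 95%, i.e. the 2.5th to 97.5th percentiles). The interpolation convention below, linear interpolation between closest ranks, is one common choice among several; BioReVa's exact convention is not stated in the abstract.

```python
def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
    """Nonparametric reference interval: the central 95% of the sorted
    reference sample, with linear interpolation between closest ranks."""
    xs = sorted(values)
    def pct(p):
        k = (len(xs) - 1) * p / 100.0   # fractional rank of percentile p
        f = int(k)
        c = min(f + 1, len(xs) - 1)
        return xs[f] + (xs[c] - xs[f]) * (k - f)
    return pct(lower_pct), pct(upper_pct)
```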
Fall risk as a function of time after admission to sub-acute geriatric hospital units.
Rapp, Kilian; Ravindren, Johannes; Becker, Clemens; Lindemann, Ulrich; Jaensch, Andrea; Klenk, Jochen
2016-10-07
There is evidence of time-dependent fracture rates in different settings and situations. Data on the underlying time-dependent fall risk patterns are lacking. The objective of the study was to analyse fall rates as a function of time after admission to sub-acute hospital units and to evaluate the time-dependent impact of clinical factors at baseline on fall risk. This retrospective cohort study used data from 5,255 patients admitted to sub-acute units in a geriatric rehabilitation clinic in Germany between 2010 and 2014. Falls, personal characteristics and functional status at admission were extracted from the hospital information system. The rehabilitation stay was divided into 3-day time intervals. The fall rate was calculated for each time interval in all patients combined and in subgroups of patients. To analyse the influence of covariates on fall risk over time, multivariate negative binomial regression models were applied for each of 5 time intervals. The overall fall rate was 10.2 falls/1,000 person-days, with the highest fall risk during the first week and decreasing risk over the following weeks. A particularly pronounced risk pattern, with a high fall risk during the first days and decreasing risk thereafter, was observed in men, disoriented people, and people with a low functional status or impaired cognition. In disoriented patients, for example, the fall rate decreased from 24.6 falls/1,000 person-days on days 2-4 to about 13 falls/1,000 person-days 2 weeks later. The incidence rate ratios of the baseline characteristics also changed over time. Fall risk differs considerably over time during sub-acute hospitalisation. The strongest association between time and fall risk was observed in functionally limited patients, with high risks during the first days after admission and declining risks thereafter. This should be considered in the planning and application of fall prevention measures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Formanek, Martin; Vana, Martin; Houfek, Karel
2010-09-30
We compare the efficiency of two methods for the numerical solution of the time-dependent Schrödinger equation, namely the Chebyshev method and the recently introduced generalized Crank-Nicolson method. As a testing system, the free propagation of a particle in one dimension is used. The space discretization is based on high-order finite differences to accurately approximate the kinetic energy operator in the Hamiltonian. We show that the choice of the more effective method depends on how many wave functions must be calculated during the given time interval to obtain relevant and reasonably accurate information about the system, i.e. on the choice of the time step.
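A minimal version of the Crank-Nicolson scheme for free propagation can be written in a few lines. This sketch uses the simplest 3-point Laplacian and hard-wall boundaries rather than the paper's high-order differences; since the scheme is unitary, norm conservation is a convenient correctness check.

```python
import cmath
import math

def crank_nicolson_step(psi, dx, dt, hbar=1.0, m=1.0):
    """One Crank-Nicolson step for a free particle:
    (1 + i dt H / 2hbar) psi_new = (1 - i dt H / 2hbar) psi_old,
    with H = -(hbar^2/2m) d^2/dx^2 discretized by 3-point finite
    differences and hard-wall ends (a minimal sketch)."""
    n = len(psi)
    a = 1j * hbar * dt / (4.0 * m * dx * dx)
    dp, dm = 1.0 + 2.0 * a, 1.0 - 2.0 * a
    rhs = [dm * psi[i]
           + a * ((psi[i - 1] if i > 0 else 0.0)
                  + (psi[i + 1] if i < n - 1 else 0.0))
           for i in range(n)]
    # Thomas algorithm for the tridiagonal system (diagonal dp, off-diagonal -a)
    c, d = [0j] * n, [0j] * n
    c[0], d[0] = -a / dp, rhs[0] / dp
    for i in range(1, n):
        den = dp + a * c[i - 1]
        c[i] = -a / den
        d[i] = (rhs[i] + a * d[i - 1]) / den
    out = [0j] * n
    out[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        out[i] = d[i] - c[i] * out[i + 1]
    return out
```

Propagating a normalized Gaussian wave packet for many steps should leave the total probability at 1 to within roundoff.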
Gruner, S V; Slone, D H; Capinera, J L; Turco, M P
2017-03-01
Chrysomya megacephala (Fabricius) is a forensically important fly that is found throughout the tropics and subtropics. We calculated the accumulated development time and transition points for each life stage from eclosion to adult emergence at five constant temperatures: 15, 20, 25, 30, and 35 °C. For each transition, the 10th, 50th, and 90th percentiles were calculated with a logistic linear model. The mean transition times and % survivorship were determined directly from the raw laboratory data. Development times of C. megacephala were compared with those of two other closely related species, Chrysomya rufifacies (Macquart) and Phormia regina (Meigen). Ambient and larval mass temperatures were collected from field studies conducted from 2001 to 2004. Field study data indicated that adult fly activity was reduced at lower ambient temperatures, but once a larval mass was established, heat generation occurred. These development times and durations can be used for estimation of a postmortem interval (PMI). © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
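Development data such as these are typically applied to PMI estimation through accumulated degree-hours or degree-days; a minimal version of that bookkeeping is below. The base temperature is illustrative, not a value from this study.

```python
def accumulated_degree_hours(hourly_temps_c, base_temp_c=10.0):
    """Sum of hourly temperature excess above a development threshold,
    the standard thermal-accumulation quantity matched against a
    species' required degree-hours per life stage."""
    return sum(max(0.0, t - base_temp_c) for t in hourly_temps_c)
```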
A new stochastic model considering satellite clock interpolation errors in precise point positioning
NASA Astrophysics Data System (ADS)
Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong
2018-03-01
Precise clock products are typically interpolated, based on the sampling interval of the observational data, when they are used in precise point positioning. However, due to the occurrence of white noise in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and such noise will affect the resolution of the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of the atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was constructed that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 stations worldwide from the IGS showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were shortened by 4.8% and 4.0%, respectively, when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively reduced.
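The interpolation step where the residual clock noise enters can be illustrated with the simplest case: linear interpolation of a 300-s clock product to an observation epoch. Real PPP software often uses higher-order polynomials; the white-noise behavior of the clock between samples is exactly what the proposed stochastic model accounts for.

```python
import bisect

def interpolate_clock(t, epochs, clocks):
    """Linearly interpolate a precise satellite clock series (epochs in
    seconds, clock offsets in any consistent unit) to epoch t. A simple
    sketch; endpoints extrapolate from the nearest segment."""
    i = bisect.bisect_right(epochs, t) - 1
    i = max(0, min(i, len(epochs) - 2))   # clamp to a valid segment
    f = (t - epochs[i]) / (epochs[i + 1] - epochs[i])
    return clocks[i] + f * (clocks[i + 1] - clocks[i])
```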
Anguita Sánchez, Manuel; Bertomeu Martínez, Vicente; Cequier Fillat, Ángel
2015-09-01
To study the prevalence of poorly controlled vitamin K antagonist anticoagulation in Spain in patients with nonvalvular atrial fibrillation, and to identify associated factors. We studied 1056 consecutive patients seen at 120 cardiology clinics in Spain between November 2013 and March 2014. We analyzed the international normalized ratio from the 6 months prior to the patient's visit, calculating the prevalence of poorly controlled anticoagulation, defined as < 65% time in therapeutic range using the Rosendaal method. Mean age was 73.6 years (standard deviation, 9.8 years); women accounted for 42% of patients. The prevalence of poorly controlled anticoagulation was 47.3%. Mean time in therapeutic range was 63.8% (25.9%). The following factors were independently associated with poorly controlled anticoagulation: kidney disease (odds ratio = 1.53; 95% confidence interval, 1.08-2.18; P = .018), routine nonsteroidal anti-inflammatory drugs (odds ratio = 1.79; 95% confidence interval, 1.20-2.79; P = .004), antiplatelet therapy (odds ratio = 2.16; 95% confidence interval, 1.49-3.12; P < .0001) and absence of angiotensin receptor blockers (odds ratio = 1.39; 95% confidence interval, 1.08-1.79; P = .011). There is a high prevalence of poorly controlled vitamin K antagonist anticoagulation in Spain. Factors associated with poor control are kidney disease, routine nonsteroidal anti-inflammatory drugs, antiplatelet use, and absence of angiotensin receptor blockers. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
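The Rosendaal method referenced here linearly interpolates the INR between successive measurements and credits each unit of time as in or out of range accordingly. A compact sketch, with an illustrative 2.0-3.0 target range, is:

```python
def rosendaal_ttr(times_days, inrs, low=2.0, high=3.0):
    """Percent time in therapeutic range by the Rosendaal method:
    assume the INR changes linearly between consecutive measurements and
    accumulate the fraction of each segment spent inside [low, high]."""
    in_range = 0.0
    total = 0.0
    for i in range(len(times_days) - 1):
        t0, t1 = times_days[i], times_days[i + 1]
        y0, y1 = inrs[i], inrs[i + 1]
        span = t1 - t0
        total += span
        if y0 == y1:
            in_range += span if low <= y0 <= high else 0.0
            continue
        # segment parameters (0..1) where the interpolated INR crosses the limits
        p_lo = (low - y0) / (y1 - y0)
        p_hi = (high - y0) / (y1 - y0)
        p_a, p_b = sorted((p_lo, p_hi))
        in_range += max(0.0, min(1.0, p_b) - max(0.0, p_a)) * span
    return 100.0 * in_range / total
```

For example, an INR rising linearly from 1.0 to 3.0 over ten days spends the second half of the segment in range, giving a TTR of 50%.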
The Effect of Birthrate Granularity on the Release- to- Birth Ratio for the AGR-1 In-core Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawn Scates; John Walter
The AGR-1 Advanced Gas Reactor (AGR) tristructural-isotropic-particle fuel experiment underwent 13 irradiation intervals from December 2006 until November 2009 within the Idaho National Laboratory Advanced Test Reactor in support of the Next Generation Nuclear Power Plant program. During this multi-year experiment, release-to-birth rate ratios were computed at the end of each operating interval to provide information about fuel performance. Fission products released during irradiation were tracked daily by the Fission Product Monitoring System using 8-hour measurements. Birth rates calculated by MCNP with ORIGEN for as-run conditions were computed at the end of each irradiation interval. Each time step in MCNP provided neutron flux, reaction rates and AGR-1 compact composition, which were used to determine birth rates using ORIGEN. The initial birth-rate data, consisting of four values for each irradiation interval at the beginning, end, and two intermediate times, were interpolated to obtain values for each 8-hour activity. The problem with this method is that any daily changes in heat rates or perturbations, such as shim control movement or core/lobe power fluctuations, would not be reflected in the interpolated data and a true picture of the system would not be presented. At the conclusion of the AGR-1 experiment, great efforts were put forth to compute daily birthrates, which were reprocessed with the 8-hour release activity. The results of this study are presented in this paper.
Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif
2017-05-01
Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect arrival time of these fiducial points, such as P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated based on this arrival time, and are used as ECG features to generate random BSes with low latency. According to our analysis on real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with the schemes that solely rely on IPIs to generate BSes, this MFBSG algorithm uses five feature values from one heart beat cycle, and can be up to five times faster than the solely IPI-based methods. So, it achieves a design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
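As a toy illustration of turning fiducial-point intervals into key bits (not the paper's exact MFBSG coding, which is not specified in the abstract), one can quantize each interval and keep only its low-order bits, whose values are dominated by beat-to-beat variability:

```python
def intervals_to_bits(intervals_ms, bits_per_interval=4, quantum_ms=1.0):
    """Quantize each fiducial-point interval (e.g. RR, RQ, RS, RP, RT, in
    milliseconds) and emit its low-order bits, most-significant first.
    A hypothetical coding for illustration only."""
    out = []
    mask = (1 << bits_per_interval) - 1
    for iv in intervals_ms:
        q = int(round(iv / quantum_ms)) & mask
        out.extend((q >> k) & 1 for k in reversed(range(bits_per_interval)))
    return out
```

Five such intervals per heartbeat at 4 bits each would yield 20 bits per cycle, which is the source of the roughly five-fold speedup over IPI-only schemes.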
Spencer, Kirk T; Weinert, Lynn; Avi, Victor Mor; Decara, Jeanne; Lang, Roberto M
2002-12-01
The Tei index is a combined measurement of systolic and diastolic left ventricular (LV) performance and may be more useful for the diagnosis of global cardiac dysfunction than either systolic or diastolic measures alone. We sought to determine whether the Tei index could be accurately calculated from LV area waveforms generated with automated border detection. Twenty-four patients were studied in 3 groups: systolic dysfunction, diastolic dysfunction, and normal. The Tei index was calculated both from Doppler tracings and from analysis of LV area waveforms. Excellent agreement was found between Doppler-derived timing intervals and the Tei index with those obtained from averaged LV area waveforms. A significant difference was seen in the Tei index, computed with both Doppler and automated border detection techniques, between the normal group and those with LV systolic dysfunction and subjects with isolated diastolic dysfunction. This study validates the use of LV area waveforms for the automated calculation of the Tei index.
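Whichever waveform supplies the timing intervals, the index itself is simple: with a the interval from mitral valve closure to opening and b the ejection time, Tei = (a - b)/b = (IVCT + IVRT)/ET.

```python
def tei_index(mco_ms, et_ms):
    """Tei (myocardial performance) index from the mitral
    closure-to-opening interval a (mco_ms) and ejection time b (et_ms):
    (a - b) / b, equivalently (IVCT + IVRT) / ET."""
    return (mco_ms - et_ms) / et_ms
```

Normal left-ventricular values are commonly quoted around 0.39; higher values indicate worse combined systolic and diastolic performance.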
NASA Astrophysics Data System (ADS)
Pradhan, Moumita; Pradhan, Dinesh; Bandyopadhyay, G.
2010-10-01
Fuzzy systems have demonstrated their ability to solve different kinds of problems in various application domains, and there is increasing interest in applying fuzzy concepts to improve the tasks of any system. Here, a case study of a thermal power plant is considered. The existing time estimates represent the time needed to complete the tasks. Applying a fuzzy linear approach, it becomes clear that at each confidence level less time is taken to complete the tasks; as the time schedule shortens, less cost is needed. The objective of this paper is to show how a system becomes more efficient when a fuzzy linear approach is applied; here, we optimize the time estimates so that all tasks are performed on appropriate schedules. For the case study, the optimistic time (to), pessimistic time (tp) and most likely time (tm) are taken as data collected from the thermal power plant. These time estimates are used to calculate the expected time (te), which represents the time to complete a particular task when all eventualities are considered. Using the project evaluation and review technique (PERT) and the critical path method (CPM), the critical path duration (CPD) of the project is calculated; it indicates that, with probability fifty percent, the total tasks can be completed in fifty days. Using the critical path duration and the standard deviation of the critical path, the probability of completing the project in a given time can then be obtained from the normal distribution. Using the trapezoidal rule on the four time estimates (to, tm, tp, te), a defuzzified value of the time estimate is calculated. For the fuzzy range, four confidence levels are considered: 0.4, 0.6, 0.8 and 1. Our study shows that time estimates at confidence levels between 0.4 and 0.8 give better results than those at the other confidence levels.
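The standard PERT arithmetic behind these estimates is compact: te = (to + 4tm + tp)/6 with variance ((tp - to)/6)², and a completion probability from the normal approximation along the critical path. The numbers in the test below are invented, not the plant's data.

```python
import math

def pert_expected(to, tm, tp):
    """PERT beta-approximation: expected activity time and its variance
    from optimistic (to), most likely (tm) and pessimistic (tp) times."""
    te = (to + 4.0 * tm + tp) / 6.0
    var = ((tp - to) / 6.0) ** 2
    return te, var

def completion_prob(deadline, te_sum, var_sum):
    """Probability of finishing by `deadline`, treating the critical path
    duration as normal with mean te_sum and variance var_sum (sums of
    te and var over critical-path activities)."""
    z = (deadline - te_sum) / math.sqrt(var_sum)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

Note that when the deadline equals the summed expected duration, the completion probability is exactly 0.5, matching the fifty-days/fifty-percent statement above.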
A comparison of pairs figure skaters in repeated jumps.
Sands, William A; Kimmel, Wendy L; McNeal, Jeni R; Murray, Steven Ross; Stone, Michael H
2012-01-01
Trends in pairs figure skating have shown that increasingly difficult jumps have become an essential aspect of high-level performance, especially in the latter part of a competitive program. We compared a repeated-jump power index from a 60-s repeated jump test to determine the relationship of the test to competitive rank, and measured 2D hip, knee, and ankle angles and angular velocities at 0, 20, 40, and 60 s. Eighteen national-team pairs figure skaters performed a 60-s repeated jump test on a large switch-mat with timing of flight and ground durations and digital video recording. Each 60-s period was divided into six 10-s intervals, with power indexes (W/kg) calculated for each 10-s interval. Power index by 10-s interval repeated-measures ANOVAs (RMANOVA) showed that males exceeded females at all intervals, and the highest power index interval was during 10 to 20 s for both sexes. RMANOVAs of angles and angular velocities showed main effects for time only. Power index and jumping techniques among figure skaters showed rapid and steady declines over the test duration. The power index can predict approximately 50% of competitive rank variance, and sex differences in jumping technique were rare. Key points: The repeated jumps test can account for about 50% of the variance in pairs ranks. Changes in technique are largely due to fatigue, but the athletes were able to maintain a maximum flexion knee angle very close to the desired 90 degrees; changes in angular velocity and jump heights occurred as expected, again probably due to fatigue. As expected from metabolic information, the athletes' power indexes peak around 20 s and decline thereafter; coaches should be aware of this time as a boundary beyond which fatigue becomes more manifest, and use careful choreographic choices to provide rest periods that are disguised as less demanding skating elements to afford recovery. The repeated jumps test may be a helpful off-ice test of power-endurance for figure skaters.
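A common way to compute such a power index (W/kg) from mat-measured flight and ground times is the Bosco repeated-jump formula; it is an assumption here that the authors used this exact index.

```python
def bosco_power_index(total_flight_s, n_jumps, interval_s=10.0, g=9.81):
    """Mechanical power per kg over a repeated-jump interval via the
    Bosco formula (an assumption; the paper's exact index may differ):
    P = g^2 * Tf * Tt / (4 * n * (Tt - Tf)),
    where Tf is total flight time, Tt the interval duration, and n the
    number of jumps (so Tt - Tf is total ground-contact time)."""
    tf = total_flight_s
    return (g * g * tf * interval_s) / (4.0 * n_jumps * (interval_s - tf))
```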
ON ASYMMETRY OF MAGNETIC HELICITY IN EMERGING ACTIVE REGIONS: HIGH-RESOLUTION OBSERVATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian Lirong; Alexander, David; Zhu Chunming
We employ the DAVE (differential affine velocity estimator) tracking technique on a time series of Michelson Doppler Imager (MDI) 1-minute high spatial resolution line-of-sight magnetograms to measure the photospheric flow velocity for three newly emerging bipolar active regions (ARs). We separately calculate the magnetic helicity injection rate of the leading and following polarities to confirm or refute the magnetic helicity asymmetry found by Tian and Alexander using MDI 96-minute low spatial resolution magnetograms. Our results demonstrate that the magnetic helicity asymmetry is robust, being present in the three ARs studied, two of which have an observed balance of the magnetic flux. The magnetic helicity injection rate measured is found to depend little on the window size selected, but does depend on the time interval used between the two successive magnetograms being tracked. It is found that the measurement of the magnetic helicity injection rate performs well for a window size between 12 × 10 and 18 × 15 pixels and at a time interval Δt = 10 minutes. Moreover, the short-lived magnetic structures, 10-60 minutes, are found to contribute 30%-50% of the magnetic helicity injection rate. Comparing with the results calculated from MDI 96-minute data, we find that the 96-minute data, in general, can outline the main trend of the magnetic properties, but they significantly underestimate the magnetic flux in strong field regions and are not appropriate for quantitative tracking studies, so provide a poor estimate of the amount of magnetic helicity injected into the corona.
Radiological risk assessment of Capstone depleted uranium aerosols.
Hahn, Fletcher F; Roszell, Laurie E; Daxon, Eric G; Guilmette, Raymond A; Parkhurst, Mary Ann
2009-03-01
Assessment of the health risk from exposure to aerosols of depleted uranium (DU) is an important outcome of the Capstone aerosol studies that established exposure ranges to personnel in armored combat vehicles perforated by DU munitions. Although the radiation exposure from DU is low, there is concern that DU deposited in the body may increase cancer rates. Radiation doses to various organs of the body resulting from the inhalation of DU aerosols measured in the Capstone studies were calculated using International Commission on Radiological Protection (ICRP) models. Organs and tissues with the highest calculated committed equivalent 50-y doses were lung and extrathoracic tissues (nose and nasal passages, pharynx, larynx, mouth, and thoracic lymph nodes). Doses to the bone surface and kidney were about 5 to 10% of the doses to the extrathoracic tissues. Organ-specific risks were estimated using ICRP and U.S. Environmental Protection Agency (EPA) methodologies. Risks for crewmembers and first responders were determined for selected scenarios based on the time interval of exposure and for vehicle and armor type. The lung was the organ with the highest cancer mortality risk, accounting for about 97% of the risks summed from all organs. The highest mean lifetime risk for lung cancer for the scenario with the longest exposure time interval (2 h) was 0.42%. This risk is low compared with the natural or background risk of 7.35%. These risks can be significantly reduced by using an existing ventilation system (if operable) and by reducing personnel time in the vehicle immediately after perforation.
Chen, Hsin-Hua; Huang, Nicole; Chen, Yi-Ming; Chen, Tzeng-Ji; Chou, Pesus; Lee, Ya-Ling; Chou, Yiing-Jenq; Lan, Joung-Liang; Lai, Kuo-Lung; Lin, Ching-Heng; Chen, Der-Yuan
2013-07-01
To investigate the association between the risk of rheumatoid arthritis (RA) and a history of periodontitis. This nationwide, population-based, case-control study used administrative data to identify 13 779 newly diagnosed patients with RA (age ≥16 years) as the study group and 137 790 non-patients with RA matched for age, sex, and initial diagnosis date (index date) as controls. Using conditional logistic regression analysis after adjustment for potential confounders, including geographical region and a history of diabetes and Sjögren's syndrome, ORs with 95% CI were calculated to quantify the association between RA and periodontitis. To evaluate the effects of periodontitis severity and the lag time since the last periodontitis visit on RA development, ORs were calculated for subgroups of patients with periodontitis according to the number of visits, cumulative cost, periodontal surgery and time interval between the last periodontitis-related visit and the index date. An association was found between a history of periodontitis and newly diagnosed RA (OR=1.16; 95% CI 1.13 to 1.21). The strength of this association remained statistically significant after adjustment for potential confounders (OR=1.16; 95% CI 1.12 to 1.20), and after variation of periodontitis definitions. The association was dose- and time-dependent and was strongest when the interval between the last periodontitis-related visit and the index date was <3 months (OR=1.64; 95% CI 1.49 to 1.79). This study demonstrates an association between periodontitis and incident RA. This association is weak and limited to lack of individual smoking status.
General intelligence predicts memory change across sleep.
Fenn, Kimberly M; Hambrick, David Z
2015-06-01
Psychometric intelligence (g) is often conceptualized as the capability for online information processing but it is also possible that intelligence may be related to offline processing of information. Here, we investigated the relationship between psychometric g and sleep-dependent memory consolidation. Participants studied paired-associates and were tested after a 12-hour retention interval that consisted entirely of wake or included a regular sleep phase. We calculated the number of word-pairs that were gained and lost across the retention interval. In a separate session, participants completed a battery of cognitive ability tests to assess g. In the wake group, g was not correlated with either memory gain or memory loss. In the sleep group, we found that g correlated positively with memory gain and negatively with memory loss. Participants with a higher level of general intelligence showed more memory gain and less memory loss across sleep. Importantly, the correlation between g and memory loss was significantly stronger in the sleep condition than in the wake condition, suggesting that the relationship between g and memory loss across time is specific to time intervals that include sleep. The present research suggests that g not only reflects the capability for online cognitive processing, but also reflects capability for offline processes that operate during sleep.
A temporal discriminability account of children's eyewitness suggestibility.
Bright-Paul, Alexandra; Jarrold, Christopher
2009-07-01
Children's suggestibility is typically measured using a three-stage 'event-misinformation-test' procedure. We examined whether suggestibility is influenced by the time delays imposed between these stages, and in particular whether the temporal discriminability of sources (event and misinformation) predicts performance. In a novel approach, the degree of source discriminability was calculated as the relative magnitude of two intervals (the ratio of event-misinformation and misinformation-test intervals), based on an adaptation of existing 'ratio-rule' accounts of memory. Five-year-olds (n =150) watched an event, and were exposed to misinformation, before memory for source was tested. The absolute event-test delay (12 versus 24 days) and the 'ratio' of event-misinformation/misinformation-test intervals (11:1, 3:1, 1:1, 1:3 and 1:11) were manipulated across participants. The temporal discriminability of sources, measured by the ratio, was indeed a strong predictor of suggestibility. Most importantly, if the ratio was constant (e.g. 18/6 versus 9/3 days), performance was remarkably similar despite variations in absolute delay (e.g. 24 versus 12 days). This intriguing finding not only extends the ratio-rule of distinctiveness to misinformation paradigms, but also serves to illustrate a new empirical means of differentiating between explanations of suggestibility based on interference between sources and disintegration of source information over time.
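The ratio-rule computation described above can be sketched in Python; the function name and day-based units are illustrative, not taken from the original study:

```python
def discriminability_ratio(event_to_misinfo_days, misinfo_to_test_days):
    """Temporal discriminability of sources under the ratio rule:
    the relative magnitude of the event-misinformation and
    misinformation-test intervals.  Schedules with the same ratio
    (e.g. 18/6 vs. 9/3 days) yield the same value even though the
    absolute delays differ, matching the finding reported above."""
    return event_to_misinfo_days / misinfo_to_test_days

# The two schedules from the abstract give the same 3:1 ratio:
assert discriminability_ratio(18, 6) == discriminability_ratio(9, 3) == 3.0
```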
Arifin, Nooranida; Abu Osman, Noor Azuan; Wan Abas, Wan Abu Bakar
2014-04-01
The measurements of postural balance often involve measurement error, which affects the analysis and interpretation of the outcomes. In most existing clinical rehabilitation research, the ability to produce reliable measures is a prerequisite for an accurate assessment of an intervention after a period of time. Although clinical balance assessments have been performed in previous studies, none has determined the intrarater test-retest reliability of static and dynamic stability indexes during dominant single stance. In this study, one rater examined 20 healthy university students (female=12, male=8) in two sessions separated by a 7-day interval. Three stability indexes--the overall stability index (OSI), anterior/posterior stability index (APSI), and medial/lateral stability index (MLSI) in static and dynamic conditions--were measured during single dominant stance. Intraclass correlation coefficient (ICC), standard error of measurement (SEM) and 95% confidence interval (95% CI) were calculated. Test-retest ICCs for OSI, APSI, and MLSI were 0.85, 0.78, and 0.84 during the static condition and 0.77, 0.77, and 0.65 during the dynamic condition, respectively. We concluded that postural stability assessment using the Biodex stability system demonstrates good-to-excellent test-retest reliability over a 1-week time interval.
Butt, K D; Bennett, K A; Crane, J M; Hutchens, D; Young, D C
1999-12-01
To compare labor induction intervals between oral misoprostol and intravenous oxytocin in women who present at term with premature rupture of membranes. One hundred eight women were randomly assigned to misoprostol 50 microg orally every 4 hours as needed or intravenous oxytocin. The primary outcome measure was time from induction to vaginal delivery. Sample size was calculated using a two-tailed alpha of 0.05 and power of 80%. Baseline demographic data, including maternal age, gestation, parity, Bishop score, birth weight, and group B streptococcal status, were similar. The mean time ± standard deviation to vaginal birth with oral misoprostol was 720 ± 382 minutes compared with 501 ± 389 minutes with oxytocin (P = .007). The durations of the first, second, and third stages of labor were similar. There were no differences in maternal secondary outcomes, including cesarean birth (eight and seven, respectively), infection, maternal satisfaction with labor, epidural use, perineal trauma, manual placental removal, or gastrointestinal side effects. Neonatal outcomes including cord pH, Apgar scores, infection, and admission to neonatal intensive care unit were not different. Although labor induction with oral misoprostol was effective, oxytocin resulted in a shorter induction-to-delivery interval. Active labor intervals and other maternal and neonatal outcomes were similar.
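The stated sample-size calculation (two-tailed α = 0.05, 80% power) follows the standard normal-approximation formula for comparing two means. A minimal sketch, where the effect size and standard deviation passed in are illustrative values, not the trial's actual planning assumptions:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison of
    means (two-tailed) via the normal-approximation formula
        n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)**2,
    where delta is the difference to detect and sigma the common SD."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2
    return math.ceil(n)
```

For a standardized effect of one SD (delta == sigma) this gives the familiar 16 subjects per group.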
A Long-Term Dissipation of the EUV He ii (30.4 nm) Segmentation in Full-Disk Solar Images
NASA Astrophysics Data System (ADS)
Didkovsky, Leonid
2018-06-01
Some quiet-Sun days observed by the Atmospheric Imaging Assembly (AIA) on-board the Solar Dynamics Observatory (SDO) during the time interval 2010 - 2017 were used to continue our previous analyses reported by Didkovsky and Gurman (Solar Phys. 289, 153, 2014a) and Didkovsky, Wieman, and Korogodina (Solar Phys. 292, 32, 2017). The analysis consists of determining and comparing spatial spectral ratios (spectral densities over some time interval) from spatial (segmentation-cell length) power spectra. The ratios were compared using modeled compatible spatial frequencies for spectra from the Extreme ultraviolet Imaging Telescope (EIT) on-board the Solar and Heliospheric Observatory (SOHO) and from AIA images. With the new AIA data added to the EIT data we analyzed previously, the whole time interval from 1996 to 2017 reported here is approximately the length of two "standard" solar cycles (SC). The spectral ratios of segmentation-cell dimension structures show a significant and steady increase with no detected indication of SC-related returns to the values that characterize the SC minima. This increase in spatial power at high spatial frequencies is interpreted as a dissipation of medium-size EUV network structures to smaller-size structures in the transition region. Each of the latest ratio changes for 2010 through 2017 spectra calculated for a number of consecutive short-term intervals has been converted into monthly mean ratio (MMR) changes. The MMR values demonstrate variable sign and magnitudes, thus confirming the solar nature of the changes. These changes do not follow a "typical" trend of instrumental degradation or a long-term activity profile from the He ii (30.4 nm) irradiance measured by the Extreme ultraviolet Spectrophotometer (ESP) either. The ESP is a channel of the Extreme ultraviolet Variability Experiment (EVE) on-board SDO.
Charged particle tracking at Titan, and further applications
NASA Astrophysics Data System (ADS)
Bebesi, Zsofia; Erdos, Geza; Szego, Karoly
2016-04-01
We use the CAPS ion data of Cassini to investigate the dynamics and origin of Titan's atmospheric ions. We developed a 4th order Runge-Kutta method to calculate particle trajectories in a time reversed scenario. The test particle magnetic field environment imitates the curved magnetic environment in the vicinity of Titan. The minimum variance directions along the S/C trajectory have been calculated for all available Titan flybys, and we assumed a homogeneous field that is perpendicular to the minimum variance direction. Using this method the magnetic field lines have been calculated along the flyby orbits so we could select those observational intervals when Cassini and the upper atmosphere of Titan were magnetically connected. We have also taken the Kronian magnetodisc into consideration, and used different upstream magnetic field approximations depending on whether Titan was located inside of the magnetodisc current sheet, or in the lobe regions. We also discuss the code's applicability to comets.
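A 4th-order Runge-Kutta integrator for charged-particle motion under the Lorentz force can be sketched as follows. This is a generic test-particle implementation for a uniform field, not the authors' code; a negative time step integrates backwards, as in the time-reversed tracing described above:

```python
def cross(a, b):
    """3-vector cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def vadd(a, b):
    return tuple(x + y for x, y in zip(a, b))

def vscale(s, a):
    return tuple(s * x for x in a)

def lorentz_accel(v, q_over_m, E, B):
    """Acceleration from the Lorentz force: a = (q/m) * (E + v x B)."""
    return vscale(q_over_m, vadd(E, cross(v, B)))

def rk4_step(x, v, dt, q_over_m, E, B):
    """One classical RK4 step for the coupled system x' = v, v' = a(v).
    Pass a negative dt to trace the trajectory backwards in time."""
    a = lambda vv: lorentz_accel(vv, q_over_m, E, B)
    k1x, k1v = v, a(v)
    k2x = vadd(v, vscale(dt / 2, k1v)); k2v = a(k2x)
    k3x = vadd(v, vscale(dt / 2, k2v)); k3v = a(k3x)
    k4x = vadd(v, vscale(dt, k3v));     k4v = a(k4x)
    comb = lambda y, p, q, r, s: tuple(
        yi + dt / 6 * (pi + 2 * qi + 2 * ri + si)
        for yi, pi, qi, ri, si in zip(y, p, q, r, s))
    return comb(x, k1x, k2x, k3x, k4x), comb(v, k1v, k2v, k3v, k4v)
```

With E = 0 and a uniform B the particle gyrates; a useful sanity check is that the speed is conserved to high accuracy and that stepping forward then backward recovers the initial state.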
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, D.A.
1988-02-01
Thermal maturity can be calculated with time-temperature indices (TTI) based on the Arrhenius equation using kinetics applicable to a range of Types II and III kerogens. These TTIs are compared with TTI calculations based on the Lopatin method and are related theoretically (and empirically via vitrinite reflectance) to the petroleum-generation window. The TTIs for both methods are expressed mathematically as integrals of temperature combined with variable linear heating rates for selected temperature intervals. Heating rates control the thermal-maturation trends of buried sediments. Relative to Arrhenius TTIs, Lopatin TTIs tend to underestimate thermal maturity at high heating rates and overestimate it at low heating rates. Complex burial histories applicable to a range of tectonic environments illustrate the different exploration decisions that might be made on the basis of independent results of these two thermal-maturation models. 15 figures, 8 tables.
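An Arrhenius-based TTI of the kind described is an integral of a temperature-dependent reaction rate over the burial history. A minimal numerical sketch, using placeholder kinetic parameters (A, Ea) rather than the paper's calibrated kerogen kinetics:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_tti(segments, A=1.0e14, Ea=218e3, n=1000):
    """Approximate the Arrhenius time-temperature index
        TTI = integral of A * exp(-Ea / (R * T(t))) dt
    over a burial history given as piecewise-linear temperature segments
    [(duration, T_start_C, T_end_C), ...], i.e. constant linear heating
    rate within each segment.  A (1/time) and Ea (J/mol) are placeholder
    kinetics; midpoint-rule integration with n sub-steps per segment."""
    tti = 0.0
    for dur, t0, t1 in segments:
        dt = dur / n
        for i in range(n):
            T = t0 + (t1 - t0) * (i + 0.5) / n + 273.15  # midpoint temp, K
            tti += A * math.exp(-Ea / (R * T)) * dt
    return tti
```

Because the rate is exponential in temperature, a hotter burial history dominates the integral, which is why heating rates control the maturation trends noted above.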
Thermal loading of natural streams
Jackman, Alan P.; Yotsukura, Nobuhiro
1977-01-01
The impact of thermal loading on the temperature regime of natural streams is investigated by mathematical models, which describe both transport (convection-diffusion) and decay (surface dissipation) of waste heat over 1-hour or shorter time intervals. The models are derived from the principle of conservation of thermal energy for application to one- and two-dimensional spaces. The basic concept in these models is to separate water temperature into two parts, (1) excess temperature due to thermal loading and (2) natural (ambient) temperature. This separation allows excess temperature to be calculated from the models without incoming radiation data. Natural temperature may either be measured in prototypes or calculated from the model. If use is made of the model, however, incoming radiation is required as input data. Comparison of observed and calculated temperatures in seven natural streams shows that the models are capable of predicting transient temperature regimes satisfactorily in most cases. (Woodard-USGS)
Frequency analysis via the method of moment functionals
NASA Technical Reports Server (NTRS)
Pearson, A. E.; Pan, J. Q.
1990-01-01
Several variants are presented of a linear-in-parameters least squares formulation for determining the transfer function of a stable linear system at specified frequencies given a finite set of Fourier series coefficients calculated from transient nonstationary input-output data. The basis of the technique is Shinbrot's classical method of moment functionals using complex Fourier based modulating functions to convert a differential equation model on a finite time interval into an algebraic equation which depends linearly on frequency-related parameters.
Synthesis and characterization of polypyrrole grafted chitin
NASA Astrophysics Data System (ADS)
Ramaprasad, A. T.; Latha, D.; Rao, Vijayalakshmi
2017-05-01
Synthesis and characterization of chitin grafted with polypyrrole (PPy) is reported in this paper. Chitin is soaked in pyrrole solution of various concentrations for different time intervals and polymerized using ammonium peroxy disulphate (APS) as an initiator. Grafting percentage of polypyrrole onto chitin is calculated from weight of chitin before and after grafting. Grafting of polymer is further verified by dissolution studies. The grafted polymer samples are characterized by FTIR, UV-Vis absorption spectrum, XRD, DSC, TGA, AFM, SEM and conductivity studies.
Evaluation of SAMe-TT2R2 Score on Predicting Success With Extended-Interval Warfarin Monitoring.
Hwang, Andrew Y; Carris, Nicholas W; Dietrich, Eric A; Gums, John G; Smith, Steven M
2018-06-01
In patients with stable international normalized ratios, 12-week extended-interval warfarin monitoring can be considered; however, predictors of success with this strategy are unknown. The previously validated SAMe-TT2R2 score (considering sex, age, medical history, treatment, tobacco, and race) predicts anticoagulation control during standard follow-up (every 4 weeks), with lower scores associated with greater time in therapeutic range. To evaluate the ability of the SAMe-TT2R2 score in predicting success with extended-interval warfarin follow-up in patients with previously stable warfarin doses. In this post hoc analysis of a single-arm feasibility study, baseline SAMe-TT2R2 scores were calculated for patients with ≥1 extended-interval follow-up visit. The primary analysis assessed achieved weeks of extended-interval follow-up according to baseline SAMe-TT2R2 scores. A total of 47 patients receiving chronic anticoagulation completed a median of 36 weeks of extended-interval follow-up. The median baseline SAMe-TT2R2 score was 1 (range 0-5). Lower SAMe-TT2R2 scores appeared to be associated with greater duration of extended-interval follow-up achieved, though the differences between scores were not statistically significant. No individual variable of the SAMe-TT2R2 score was associated with achieved weeks of extended-interval follow-up. Analysis of additional patient factors found that longer duration (≥24 weeks) of prior stable treatment was significantly associated with greater weeks of extended-interval follow-up completed (P = 0.04). Conclusion and Relevance: This pilot study provides limited evidence that the SAMe-TT2R2 score predicts success with extended-interval warfarin follow-up but requires confirmation in a larger study. Further research is also necessary to establish additional predictors of successful extended-interval warfarin follow-up.
Joux, Julien; Boulanger, Marion; Jeannin, Severine; Chausson, Nicolas; Hennequin, Jean-Luc; Molinié, Vincent; Smadja, Didier; Touzé, Emmanuel; Olindo, Stephane
2016-10-01
Carotid bulb diaphragm (CBD) has been described in young carotid ischemic stroke (CIS) patients, especially in blacks. However, the prevalence of CBD in CIS patients is unknown, and whether CBD is a risk factor for CIS remains unclear. We assessed the association between CBD and incident CIS in a population-based study. We selected all young (<55 years) CIS patients from a 1-year population-based cohort study in the Afro-Caribbean population of Martinique in 2012. All patients had a comprehensive work-up including a computed tomographic angiography. We calculated CIS associated with ipsilateral CBD incidence with 95% confidence intervals using Poisson distribution. We then selected age- and sex-matched controls among young (<55 years) Afro-Caribbean stroke-free patients admitted for a road crash who routinely had computed tomographic angiography. Odds ratios (ORs) were calculated by conditional logistic regression adjusted for hypertension, dyslipidemia, diabetes and smoking. CIS associated with ipsilateral CBD incidence was 3.8 per 100 000 person-years (95% confidence interval, 1.4-6.1). Prevalence of ipsilateral CBD was 23% in all CIS and 37% in undetermined CIS patients. When restricted to undetermined CIS, CBD prevalence was 24 times higher than that in controls (adjusted OR, 24.1; 95% confidence interval, 1.8-325.6). CBD is associated with an increased risk of ipsilateral CIS in the young Afro-Caribbean population. © 2016 American Heart Association, Inc.
NASA Technical Reports Server (NTRS)
Brown, Richard; Collier, Gary; Heckenlaible, Richard; Dougherty, Edward; Dolenz, James; Ross, Iain
2012-01-01
The ASCENT program solves the three-dimensional motion and attendant structural loading on a flexible vehicle incorporating, optionally, an active analog thrust control system, aerodynamic effects, and staging of multiple bodies. ASCENT solves the technical problems of loads, accelerations, and displacements of a flexible vehicle; staging of the upper stage from the lower stage; effects of thrust oscillations on the vehicle; a payload's relative motion; the effect of fluid sloshing on the vehicle; and the effect of winds and gusts on the vehicle (on the ground or aloft) in a continuous analysis. The ATTACH ASCENT Loads program reads output from the ASCENT flexible body loads program, and calculates the approximate load indicators for the time interval under consideration. It calculates the load indicator values from pre-launch to the end of the first stage.
a New Approach to Physiologic Triggering in Medical Imaging Using Multiple Heart Sounds Alone.
NASA Astrophysics Data System (ADS)
Groch, Mark Walter
A new method for physiological synchronization of medical image acquisition using both the first and second heart sound has been developed. Heart sounds gating (HSG) circuitry has been developed which identifies, individually, both the first (S1) and second (S2) heart sounds from their timing relationship alone, and provides two synchronization points during the cardiac cycle. Identification of first and second heart sounds from their timing relationship alone and application to medical imaging has, heretofore, not been performed in radiology or nuclear medicine. The heart sounds are obtained as conditioned analog signals from a piezoelectric transducer microphone placed on the patient's chest. The timing relationships between the S1 to S2 pulses and the S2 to S1 pulses are determined using a logic scheme capable of distinguishing the S1 and S2 pulses from the heart sounds themselves, using their timing relationships, and the assumption that initially the S1-S2 interval will be shorter than the S2-S1 interval. Digital logic circuitry is utilized to continually track the timing intervals and extend the S1/S2 identification to heart rates up to 200 beats per minute (where the S1-S2 interval is not shorter than the S2-S1 interval). Clinically, first heart sound gating may be performed to assess the systolic ejection portion of the cardiac cycle, with S2 gating utilized for reproduction of the diastolic filling portion of the cycle. One application of HSG used for physiologic synchronization is in multigated blood pool (MGBP) imaging in nuclear medicine. Heart sounds gating has been applied to twenty patients who underwent analysis of ventricular function in Nuclear Medicine, and compared to conventional ECG gated MGBP. Left ventricular ejection fractions calculated from MGBP studies using a S1 and a S2 heart sound trigger correlated well with conventional ECG gated acquisitions in patients adequately gated by HSG and ECG. 
Heart sounds gating provided superior definition of the diastolic filling phase of the cardiac cycle by qualitative assessment of the left ventricular volume time -activity curves. Heart sounds physiological synchronization has potential to be used in other imaging modalities, such as magnetic resonance imaging, where the ECG is distorted due to the electromagnetic environment within the imager.
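The core S1/S2 identification rule above (the systolic S1-S2 interval is shorter than the diastolic S2-S1 interval at normal heart rates) can be sketched as follows. This is a simplified illustration of the timing logic only, not the digital circuitry described in the abstract:

```python
def label_heart_sounds(pulse_times):
    """Label a train of heart-sound pulse times (seconds) as 'S1'/'S2'
    using only their timing relationship: the S1->S2 (systolic) gap is
    assumed shorter than the S2->S1 (diastolic) gap.  Once the first
    pulse is identified from the first two gaps, labels alternate.
    Valid only below heart rates at which the two intervals converge."""
    gaps = [t2 - t1 for t1, t2 in zip(pulse_times, pulse_times[1:])]
    # If the first gap is the shorter of the first two, the train starts on S1.
    current = 'S1' if gaps[0] < gaps[1] else 'S2'
    labels = []
    for _ in pulse_times:
        labels.append(current)
        current = 'S2' if current == 'S1' else 'S1'
    return labels
```

For example, pulses at 0, 0.3, 1.0, 1.3, 2.0, 2.3 s (0.3 s systole, 0.7 s diastole) are labeled S1, S2, S1, S2, S1, S2.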
Simpson, Colin R; Steiner, Markus Fc; Cezard, Genevieve; Bansal, Narinder; Fischbacher, Colin; Douglas, Anne; Bhopal, Raj; Sheikh, Aziz
2015-10-01
There is evidence of substantial ethnic variations in asthma morbidity and the risk of hospitalisation, but the picture in relation to lower respiratory tract infections is unclear. We carried out an observational study to identify ethnic group differences for lower respiratory tract infections. A retrospective, cohort study. Scotland. 4.65 million people on whom information was available from the 2001 census, followed from May 2001 to April 2010. Hospitalisations and deaths (at any time following first hospitalisation) from lower respiratory tract infections were identified, and adjusted risk ratios and hazard ratios by ethnicity and sex were calculated. We multiplied ratios and confidence intervals by 100, so the reference Scottish White population's risk ratio and hazard ratio was 100. Among men, adjusted risk ratios for lower respiratory tract infection hospitalisation were lower in Other White British (80, 95% confidence interval 73-86) and Chinese (69, 95% confidence interval 56-84) populations and higher in Pakistani groups (152, 95% confidence interval 136-169). In women, results were mostly similar to those in men (e.g. Chinese 68, 95% confidence interval 56-82), although higher adjusted risk ratios were found among women of the Other South Asians group (145, 95% confidence interval 120-175). Survival (adjusted hazard ratio) following lower respiratory tract infection for Pakistani men (54, 95% confidence interval 39-74) and women (31, 95% confidence interval 18-53) was better than in the reference population. Substantial differences in the rates of lower respiratory tract infections amongst different ethnic groups in Scotland were found. Pakistani men and women had particularly high rates of lower respiratory tract infection hospitalisation. Research into the reasons behind these high rates in the Pakistani community is now required. © The Royal Society of Medicine.
Chosen interval methods for solving linear interval systems with special type of matrix
NASA Astrophysics Data System (ADS)
Szyszka, Barbara
2013-10-01
The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite-difference problem. Such linear systems arise when solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) using the second-order central-difference interval method. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; the presented linear interval systems therefore contain elements that represent the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they introduce no method error of their own. All calculations were performed in floating-point interval arithmetic.
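A minimal sketch of the kind of direct interval method involved: interval arithmetic on (lower, upper) pairs and a naive interval Gaussian elimination. This is a generic illustration, not the paper's algorithm, and it omits the outward rounding that a rigorous floating-point interval implementation must add:

```python
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def idiv(a, b):
    assert b[0] > 0 or b[1] < 0, "divisor interval must not contain 0"
    return imul(a, (1 / b[1], 1 / b[0]))

def interval_gauss(A, b):
    """Naive interval Gaussian elimination without pivoting: returns
    interval enclosures of the solution components.  The result encloses
    every point solution of the interval system, but may overestimate
    because of interval dependency."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):                      # forward elimination
        for i in range(k + 1, n):
            m = idiv(A[i][k], A[k][k])
            for j in range(k, n):
                A[i][j] = isub(A[i][j], imul(m, A[k][j]))
            b[i] = isub(b[i], imul(m, b[k]))
    x = [None] * n
    for i in range(n - 1, -1, -1):          # back substitution
        s = b[i]
        for j in range(i + 1, n):
            s = isub(s, imul(A[i][j], x[j]))
        x[i] = idiv(s, A[i][i])
    return x
```

On a degenerate (point) system the enclosures collapse to the exact solution; widening any entry into a proper interval widens the solution enclosure accordingly.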
Reference values for 27 clinical chemistry tests in 70-year-old males and females.
Carlsson, Lena; Lind, Lars; Larsson, Anders
2010-01-01
Reference values are usually defined based on blood samples from healthy men or nonpregnant women in the age range of 20-50 years. These values are not optimal for elderly patients, as many biological markers change over time and adequate reference values are important for correct clinical decisions. To validate NORIP (Nordic Reference Interval Project) reference values in a 70-year-old population. We studied 27 frequently used laboratory tests. The 2.5th and 97.5th percentiles for these markers were calculated according to the recommendations of the International Federation of Clinical Chemistry on the statistical treatment of reference values. Reference values are reported for plasma alanine aminotransferase, albumin, alkaline phosphatase, pancreas amylase, apolipoprotein A1, apolipoprotein B, aspartate aminotransferase, bilirubin, calcium, chloride, cholesterol, creatinine, creatine kinase, C-reactive protein, glucose, gamma-glutamyltransferase, HDL-cholesterol, iron, lactate dehydrogenase, LDL-cholesterol, magnesium, phosphate, potassium, sodium, transferrin, triglycerides, urate and urea. Reference values calculated from the whole population and a subpopulation without cardiovascular disease showed strong concordance. Several of the reference interval limits were outside the 90% CI of a Scandinavian population (NORIP). 2009 S. Karger AG, Basel.
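The 2.5th and 97.5th percentile limits can be estimated nonparametrically from a sorted sample. This sketch uses simple linear interpolation between order statistics, one of several percentile conventions, and is not the exact IFCC procedure referenced above:

```python
def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
    """Nonparametric reference interval: the central 95% of the sample,
    estimated by linear interpolation between order statistics.
    (Nonparametric estimation is conventionally recommended only for
    sufficiently large reference samples, e.g. n >= 120.)"""
    xs = sorted(values)

    def pct(p):
        k = (len(xs) - 1) * p / 100.0
        lo = int(k)
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

    return pct(lower_pct), pct(upper_pct)
```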
Operational Concept for Flight Crews to Participate in Merging and Spacing of Aircraft
NASA Technical Reports Server (NTRS)
Baxley, Brian T.; Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.
2006-01-01
The predicted tripling of air traffic within the next 15 years is expected to cause significant aircraft delays and create a major financial burden for the airline industry unless the capacity of the National Airspace System can be increased. One approach to improve throughput and reduce delay is to develop new ground tools, airborne tools, and procedures to reduce the variance of aircraft delivery to the airport, thereby providing an increase in runway throughput capacity and a reduction in arrival aircraft delay. The first phase of the Merging and Spacing Concept employs a ground based tool used by Air Traffic Control that creates an arrival time to the runway threshold based on the aircraft's current position and speed, then makes minor adjustments to that schedule to accommodate runway throughput constraints such as weather and wake vortex separation criteria. The Merging and Spacing Concept also employs arrival routing that begins at an en route metering fix at altitude and continues to the runway threshold with defined lateral, vertical, and velocity criteria. This allows the desired spacing interval between aircraft at the runway to be translated back in time and space to the metering fix. The tool then calculates a specific speed for each aircraft to fly while enroute to the metering fix based on the adjusted landing time for that aircraft. This speed is data-linked to the crew who fly this speed, causing the aircraft to arrive at the metering fix with the assigned spacing interval behind the previous aircraft in the landing sequence. The second phase of the Merging and Spacing Concept increases the timing precision of the aircraft delivery to the runway threshold by having flight crews using an airborne system make minor speed changes during enroute, descent, and arrival phases of flight. These speed changes are based on broadcast aircraft state data to determine the difference between the actual and assigned time interval between the aircraft pair.
The airborne software then calculates a speed adjustment to null that difference over the remaining flight trajectory. Follow-on phases still under development will expand the concept to all types of aircraft, arriving from any direction, merging at different fixes and altitudes, and to any airport. This paper describes the implementation phases of the Merging and Spacing Concept, and provides high-level results of research conducted to date.
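The speed adjustment that nulls a spacing-interval error over the remaining trajectory can be illustrated with a one-line calculation. This constant-speed sketch is hypothetical and ignores the phase-by-phase speed scheduling and aircraft speed envelopes of the actual concept:

```python
def speed_to_null_spacing_error(dist_remaining_nm, ground_speed_kt,
                                spacing_error_s):
    """Constant ground speed that absorbs a spacing-interval error
    (seconds; positive means the trailing aircraft is running late)
    over the remaining distance.  Illustrative only: the operational
    algorithm distributes small speed changes across flight phases."""
    time_remaining_h = dist_remaining_nm / ground_speed_kt
    target_time_h = time_remaining_h - spacing_error_s / 3600.0
    return dist_remaining_nm / target_time_h
```

For example, an aircraft 100 nm out at 400 kt that is 60 s late would need roughly 429 kt to null the error by the threshold.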
NASA Astrophysics Data System (ADS)
Hafezalkotob, Arian; Hafezalkotob, Ashkan
2017-06-01
A target-based MADM method covers beneficial and non-beneficial attributes besides target values for some attributes. Such techniques are considered the comprehensive forms of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values. The values of the decision matrix and the target-based attributes can be provided as intervals in some of such problems. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the interval distance of interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterials selection of hip and knee prostheses are discussed. Preference degree-based ranking lists for subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problem are compared with the outcomes of other target-based models in the literature.
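One common formulation of the degree of preference between interval numbers, and a ranking built from pairwise preferences, can be sketched as follows. The formula below is a standard possibility-degree expression and is not necessarily the exact one introduced in the paper:

```python
def preference_degree(a, b):
    """Degree to which interval a = (a_lo, a_hi) is preferred to
    b = (b_lo, b_hi): a common possibility-degree formulation.
    Returns a value in [0, 1]; 0.5 means indifference."""
    width = (a[1] - a[0]) + (b[1] - b[0])
    if width == 0:                       # both degenerate (crisp numbers)
        return 0.5 if a == b else float(a[0] > b[0])
    return max(0.0, min(1.0, (a[1] - b[0]) / width))

def interval_rank(intervals):
    """Rank intervals (best first) by each interval's total degree of
    preference over all the others, returning index order."""
    scores = [sum(preference_degree(a, b) for b in intervals if b is not a)
              for a in intervals]
    return sorted(range(len(intervals)), key=lambda i: -scores[i])
```

For example, (4, 6) is fully preferred to (1, 3) (degree 1.0), while an interval compared with itself yields 0.5.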
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleijnen, J; Asselen, B van; Burbach, M
2015-06-15
Purpose: The purpose of this study is to find the optimal trade-off between adaptation interval and margin reduction and to define the implications of motion for rectal cancer boost radiotherapy on an MR-linac. Methods: Daily MRI scans were acquired of 16 patients, diagnosed with rectal cancer, prior to each radiotherapy fraction in one week (N=76). Each scan session consisted of a T2-weighted scan and three 2D sagittal cine-MRI scans, at the beginning (t=0 min), middle (t=9:30 min) and end (t=18:00 min) of the session, for 1 minute at 2 Hz temporal resolution. Tumor and clinical target volume (CTV) were delineated on each T2-weighted scan and transferred to each cine-MRI. The start frame of the begin scan was used as reference and registered to frames at time-points 15, 30 and 60 seconds, 9:30 and 18:00 minutes and 1, 2, 3 and 4 days later. Per time-point, motion of delineated voxels was evaluated using the deformation vector fields of the registrations, and the 95th percentile distance (dist95%) was calculated as a measure of motion. Per time-point, the distance that includes 90% of all cases was taken as an estimate of the required planning target volume (PTV) margin. Results: The highest motion reduction is observed going from 9:30 minutes to 60 seconds. We observe a reduction in margin estimates from 10.6 to 2.7 mm and from 16.1 to 4.6 mm for tumor and CTV, respectively, when adapting every 60 seconds compared with not adapting treatment: reductions of 75% and 71%, respectively. Further reduction in the adaptation time-interval yields only marginal motion reduction. For adaptation intervals longer than 18:00 minutes, only small motion reductions are observed. Conclusion: The optimal adaptation interval for adaptive rectal cancer (boost) treatments on an MR-linac is 60 seconds. This results in substantially smaller PTV-margin estimates. Adaptation intervals of 18:00 minutes and longer show little improvement in motion reduction.
Infrared Sky Imager (IRSI) Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, Victor R.
2016-04-01
The Infrared Sky Imager (IRSI) deployed at the Atmospheric Radiation Measurement (ARM) Climate Research Facility is a Solmirus Corp. All Sky Infrared Visible Analyzer. The IRSI is an automatic, continuously operating, digital imaging and software system designed to capture hemispheric sky images and provide time series retrievals of fractional sky cover during both the day and night. The instrument provides diurnal, radiometrically calibrated sky imagery in the mid-infrared atmospheric window and imagery in the visible wavelengths for cloud retrievals during daylight hours. The software automatically identifies cloudy and clear regions at user-defined intervals and calculates fractional sky cover, providing a real-time display of sky conditions.
Frequency of pubic hair transfer during sexual intercourse.
Exline, D L; Smith, F P; Drexler, S G
1998-05-01
This study measured the frequency of pubic hair transfer between a limited number of consenting heterosexual partners. The results derive from controlled experiments with a number of human subjects rather than from forensic casework. Standardized collection procedures were observed, and situational variables were tracked. Participants (forensic laboratory employees and their spouses) were six Caucasian couples who collected their pubic hair combings immediately following intercourse. Subjects provided informed consent in accordance with the protocol for human subjects approved by the U.A.B. institutional review board. The experiment was replicated ten times for five couples and five times for the remaining couple (total n = 110). Transfer frequencies were calculated from instances where foreign (exogenous) hairs were observed. Results showed at least one exogenous pubic hair in 17.3% (19/110) of combings. Transfers to males (23.6%, or 13/55) were more prevalent than transfers to females (10.9%, or 6/55). Only once were transfers observed simultaneously in both the male and the female. A total of 28 exogenous pubic hairs were identified. Subjects reported intercourse durations of 2-25 min, intervening intervals of 1-240 h, pre-coital bathing intervals of 0.25-24 h, and predominantly the missionary position (76%). No clear relationship among these other survey variables was observed. The prevalence of female-to-male pubic hair transfers suggests the importance of collecting pubic hair combings from male suspects as well as from female victims, provided the time interval is not extreme. Even under these optimum collection conditions, pubic hair transfers were observed only 17.3% of the time.
Petersen, Christian C; Mistlberger, Ralph E
2017-08-01
The mechanisms that enable mammals to time events that recur at 24-h intervals (circadian timing) and at arbitrary intervals in the seconds-to-minutes range (interval timing) are thought to be distinct at the computational and neurobiological levels. Recent evidence that disruption of circadian rhythmicity by constant light (LL) abolishes interval timing in mice challenges this assumption and suggests a critical role for circadian clocks in short interval timing. We sought to confirm and extend this finding by examining interval timing in rats in which circadian rhythmicity was disrupted by long-term exposure to LL or by chronic intake of 25% D2O. Adult, male Sprague-Dawley rats were housed in a light-dark (LD) cycle or in LL until free-running circadian rhythmicity was markedly disrupted or abolished. The rats were then trained and tested on 15- and 30-sec peak-interval procedures, with water restriction used to motivate task performance. Interval timing was found to be unimpaired in LL rats, but a weak circadian activity rhythm was apparently rescued by the training procedure, possibly due to binge feeding that occurred during the 15-min water access period that followed training each day. A second group of rats in LL were therefore restricted to 6 daily meals scheduled at 4-h intervals. Despite a complete absence of circadian rhythmicity in this group, interval timing was again unaffected. To eliminate all possible temporal cues, we tested a third group of rats in LL by using a pseudo-randomized schedule. Again, interval timing remained accurate. Finally, rats tested in LD received 25% D2O in place of drinking water. This markedly lengthened the circadian period and caused a failure of LD entrainment but did not disrupt interval timing. These results indicate that interval timing in rats is resistant to disruption by manipulations of circadian timekeeping previously shown to impair interval timing in mice.
Sample entropy applied to the analysis of synthetic time series and tachograms
NASA Astrophysics Data System (ADS)
Muñoz-Diosdado, A.; Gálvez-Coyt, G. G.; Solís-Montufar, E.
2017-01-01
Entropy is a method of non-linear analysis that allows an estimate of the irregularity of a system. However, there are different types of computational entropy; several were considered and tested in order to find one that gives an index of signal complexity while taking into account the length of the analysed time series, the computational resources demanded by the method, and the accuracy of the calculation. An algorithm for the generation of fractal time series with a given spectral exponent β was used to characterize the different entropy algorithms. We found a significant dependence on series size for most of the algorithms, which could be counterproductive for the study of real signals of different lengths. The chosen method was sample entropy, which shows great independence from series size. With this method, time series of heart interbeat intervals (tachograms) of healthy subjects and patients with congestive heart failure were analysed. Sample entropy was calculated for 24-hour tachograms and for 6-hour subseries corresponding to sleep and wakefulness. The comparison between the two populations shows a significant difference that is accentuated when the patient is sleeping.
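Sample entropy, the method chosen in this abstract, can be sketched in a minimal form; the template length m = 2 and tolerance r = 0.2·SD below are common defaults, not values stated in the abstract:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series.

    Counts pairs of template vectors of length m (B) and m+1 (A) whose
    Chebyshev distance is below r, excluding self-matches, and returns
    -ln(A/B).  r defaults to 0.2 * std(x), a common choice.
    """
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)

    def count_matches(k):
        templates = np.array([x[i:i + k] for i in range(n - k)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d < r))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b)

periodic = np.tile([0.0, 1.0], 100)          # perfectly regular series
noisy = np.random.default_rng(0).random(200)  # irregular series
# A regular series scores lower (more self-similar) than noise:
print(sample_entropy(periodic) < sample_entropy(noisy))  # True
```

The relative insensitivity to series length noted in the abstract comes from the ratio A/B, which normalizes out the number of templates.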
NASA Technical Reports Server (NTRS)
Ryabenkii, V. S.; Turchaninov, V. I.; Tsynkov, S. V.
1999-01-01
We propose a family of algorithms for solving numerically a Cauchy problem for the three-dimensional wave equation. The sources that drive the equation (i.e., the right-hand side) are compactly supported in space for any given time; they may, however, move in space with a subsonic speed. The solution is calculated inside a finite domain (e.g., a sphere) that also moves with a subsonic speed and always contains the support of the right-hand side. The algorithms employ a standard consistent and stable explicit finite-difference scheme for the wave equation. They allow one to calculate the solution for arbitrarily long time intervals without error accumulation and with a fixed, non-growing amount of CPU time and memory required for advancing one time step. The algorithms are inherently three-dimensional; they rely on the presence of lacunae in the solutions of the wave equation in odd-dimensional spaces. The methodology presented in the paper is, in fact, a building block for constructing the nonlocal highly accurate unsteady artificial boundary conditions to be used for the numerical simulation of waves propagating with finite speed over unbounded domains.
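A 1-D analogue of the "standard consistent and stable explicit finite-difference scheme" mentioned above can be sketched as follows; the grid, CFL number, and initial pulse are illustrative choices, and the paper's lacunae-based machinery is not reproduced here:

```python
import numpy as np

# Leapfrog scheme for u_tt = c^2 u_xx: second-order centered differences
# in space and time, with fixed (Dirichlet) ends.
c, L, nx, nt = 1.0, 1.0, 101, 200
dx = L / (nx - 1)
dt = 0.5 * dx / c          # CFL number 0.5 < 1, so the scheme is stable
x = np.linspace(0.0, L, nx)

u_prev = np.exp(-100.0 * (x - 0.5) ** 2)   # initial displacement (pulse)
u = u_prev.copy()                          # zero initial velocity (1st order start)
for _ in range(nt):
    u_next = np.empty_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0
    u_prev, u = u, u_next

# With a stable CFL number the solution stays bounded for long runs:
print(float(np.max(np.abs(u))))
```

The paper's contribution is precisely what this naive sketch lacks: a way to run such a scheme for arbitrarily long times on a moving finite domain without error accumulation.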
A simple method to calculate first-passage time densities with arbitrary initial conditions
NASA Astrophysics Data System (ADS)
Nyberg, Markus; Ambjörnsson, Tobias; Lizana, Ludvig
2016-06-01
Numerous applications all the way from biology and physics to economics depend on the density of first crossings over a boundary. Motivated by the lack of general purpose analytical tools for computing first-passage time densities (FPTDs) for complex problems, we propose a new simple method based on the independent interval approximation (IIA). We generalise previous formulations of the IIA to include arbitrary initial conditions as well as to deal with discrete time and non-smooth continuous time processes. We derive a closed form expression for the FPTD in z and Laplace-transform space to a boundary in one dimension. Two classes of problems are analysed in detail: discrete time symmetric random walks (Markovian) and continuous time Gaussian stationary processes (Markovian and non-Markovian). Our results are in good agreement with Langevin dynamics simulations.
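For the simplest case analysed above (a discrete-time symmetric random walk), the first-passage time density can be estimated empirically; this brute-force simulation is only a reference point against which IIA-style approximations can be checked, with the boundary and sample sizes chosen arbitrarily:

```python
import random

def first_passage_times(boundary=3, n_walks=10000, max_steps=1000, seed=1):
    """Empirical first-passage times of a symmetric +/-1 random walk
    started at 0, absorbed at `boundary`.  Walks that have not crossed
    by max_steps are discarded (the FPTD has a heavy tail)."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_walks):
        pos = 0
        for step in range(1, max_steps + 1):
            pos += 1 if rng.random() < 0.5 else -1
            if pos >= boundary:
                times.append(step)
                break
    return times

times = first_passage_times()
# The earliest possible crossing of boundary 3 takes exactly 3 steps, and
# all first-passage times share the parity of the boundary (3, 5, 7, ...).
print(min(times), all((t - 3) % 2 == 0 for t in times))
```

A histogram of `times` approximates the FPTD that the paper's closed-form z-transform expression describes analytically.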
Brand, Andrew; Bradley, Michael T
2016-02-01
Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving the precision of effect size estimation.
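A rough sketch of how such CI widths arise: the large-sample normal approximation below is a common textbook formula, not necessarily the one used in the surveys (exact CIs for d use the noncentral t distribution):

```python
import math

def cohens_d_ci_width(d, n1, n2):
    """Approximate 95% CI width for Cohen's d between two independent
    groups, using the large-sample normal approximation to the standard
    error of d."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
    z = 1.959963984540054  # 97.5th percentile of the standard normal
    return 2 * z * se

# With typical psychology cell sizes (n = 20 per group), the width easily
# exceeds 1, consistent with the surveys' median finding:
print(round(cohens_d_ci_width(0.5, 20, 20), 2))
```

The formula makes the remedy obvious: since the width shrinks roughly as 1/sqrt(n), halving a CI width of 1.26 requires about four times the sample size.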
Double dynamic scaling in human communication dynamics
NASA Astrophysics Data System (ADS)
Wang, Shengfeng; Feng, Xin; Wu, Ye; Xiao, Jinhua
2017-05-01
In recent decades, human behavior has become much better understood owing to the huge quantities of behavioral data available for study. The main finding in human dynamics is that temporal processes consist of high-activity bursty intervals alternating with long low-activity periods. A model assuming that the initiation of bursts follows a Poisson process is widely used in the modeling of human behavior. Here, we provide further evidence for the hypothesis that different bursty intervals are independent. Furthermore, we introduce a special threshold to quantitatively distinguish the time scales of complex dynamics based on this hypothesis. Our results suggest that human communication behavior is a composite process of double dynamics with midrange memory length. The method for calculating memory length could enhance the performance of many sequence-dependent systems, such as server operation and topic identification.
Sato, Takako; Zaitsu, Kei; Tsuboi, Kento; Nomura, Masakatsu; Kusano, Maiko; Shima, Noriaki; Abe, Shuntaro; Ishii, Akira; Tsuchihashi, Hitoshi; Suzuki, Koichi
2015-05-01
Estimation of the postmortem interval (PMI) is an important goal in judicial autopsy. Although many approaches can estimate PMI through physical findings and biochemical tests, accurate PMI calculation by these conventional methods remains difficult because PMI is readily affected by surrounding conditions, such as ambient temperature and humidity. In this study, Sprague-Dawley (SD) rats (10 weeks) were sacrificed by suffocation, and blood was collected by dissection at various time intervals (0, 3, 6, 12, 24, and 48 h; n = 6) after death. A total of 70 endogenous metabolites were detected in plasma by gas chromatography-tandem mass spectrometry (GC-MS/MS). Each time group was separated from the others on the principal component analysis (PCA) score plot, suggesting that the various endogenous metabolites changed with time after death. To build a model for predicting PMI, a partial least squares (or projection to latent structures, PLS) regression model was constructed using the levels of significantly different metabolites determined by variable importance in the projection (VIP) score and the Kruskal-Wallis test (P < 0.05). The constructed PLS regression model successfully predicted each PMI and was further validated with an independent validation set (n = 3). In conclusion, plasma metabolic profiling demonstrated its ability to successfully estimate PMI under a defined condition. This result can be considered a first step toward using metabolomics methods in future forensic casework.
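The prediction step can be illustrated with synthetic data; for brevity, ordinary least squares stands in for PLS below (PLS adds latent-variable compression, which matters when metabolites outnumber samples), and all numbers are invented:

```python
import numpy as np

# Synthetic "metabolite" data: 18 plasma samples, 5 metabolites, with the
# study's sampling times (0-48 h after death, n = 3 per time-point).
rng = np.random.default_rng(42)
n_samples, n_metabolites = 18, 5
pmi_h = np.repeat([0.0, 3.0, 6.0, 12.0, 24.0, 48.0], 3)

X = rng.normal(size=(n_samples, n_metabolites))
X[:, 0] += 0.3 * pmi_h            # one metabolite drifts strongly with PMI
y = pmi_h

# Fit y ~ X @ beta + intercept by least squares (stand-in for PLS).
A = np.column_stack([X, np.ones(n_samples)])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ beta
print(round(float(np.corrcoef(pred, y)[0, 1]), 2))
```

The regression recovers the time-dependent metabolite and predicts PMI well in-sample; the study's point is that this works on real plasma profiles, validated on held-out animals.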
Anagnostakis, Ioannis; Papassavas, Andreas C; Michalopoulos, Efstathios; Chatzistamatiou, Theofanis; Andriopoulou, Sofia; Tsakris, Athanassios; Stavropoulos-Giokas, Catherine
2014-01-01
Cord blood (CB) units are stored from weeks to years in liquid- or vapor-phase nitrogen until they are used for transplantation. We examined the effects of cryostorage in a mechanical freezer at -150°C on critical quality control variables of CB collections to investigate the possible use of mechanical freezers at -150°C as an alternative to storage in liquid- (or vapor-) phase nitrogen. A total of 105 CB units were thawed and washed at different time intervals (6, 12, 24, and 36 months). For every thawed CB unit, samples were removed and cell enumeration (total nucleated cells [TNCs], mononuclear cells [MNCs], CD34+, CD133+) was performed. In addition, viability was obtained with the use of flow cytometry, and recoveries were calculated. Also, total absolute colony-forming unit counts were performed and progenitor cell recoveries were studied by clonogenic assays. Significant differences (p < 0.05) were observed in certain variables (TNCs, MNC numbers, viability) when they were examined in relation with time intervals, while others (CD34+, CD133+) were relatively insensitive (p = NS) to the duration of time interval the CB units were kept in cryostorage condition. The data presented suggest that cryopreservation of CB units in a mechanical freezer at -150°C may represent an alternative cryostorage condition for CB cryopreservation. © 2013 American Association of Blood Banks.
User's manual for SEDCALC, a computer program for computation of suspended-sediment discharge
Koltun, G.F.; Gray, John R.; McElhone, T.J.
1994-01-01
Sediment-Record Calculations (SEDCALC), a menu-driven set of interactive computer programs, was developed to facilitate computation of suspended-sediment records. The programs comprising SEDCALC were developed independently in several District offices of the U.S. Geological Survey (USGS) to minimize the intensive labor associated with various aspects of sediment-record computations. SEDCALC operates on suspended-sediment-concentration data stored in American Standard Code for Information Interchange (ASCII) files in a predefined card-image format. Program options within SEDCALC can be used to assist in creating and editing the card-image files, as well as to reformat card-image files to and from formats used by the USGS Water-Quality System. SEDCALC provides options for creating card-image files containing time series of equal-interval suspended-sediment concentrations from 1. digitized suspended-sediment-concentration traces, 2. linear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals, and 3. nonlinear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals. Suspended-sediment discharge can be computed from the streamflow and suspended-sediment-concentration data or by application of transport relations derived by regressing log-transformed instantaneous streamflows on log-transformed instantaneous suspended-sediment concentrations or discharges. The computed suspended-sediment discharge data are stored in card-image files that can be either directly imported to the USGS Automated Data Processing System or used to generate plots by means of other SEDCALC options.
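SEDCALC's second interpolation option (linear interpolation between log-transformed instantaneous concentrations stored at unequal time intervals) can be sketched as:

```python
import math

def log_interp_concentration(t, t0, c0, t1, c1):
    """Linear interpolation between log-transformed suspended-sediment
    concentrations sampled at unequal times t0 < t < t1.  Times in any
    consistent unit, concentrations in mg/L."""
    frac = (t - t0) / (t1 - t0)
    return math.exp(math.log(c0) + frac * (math.log(c1) - math.log(c0)))

# Concentration halfway (in time) between 100 mg/L and 400 mg/L samples
# is their geometric mean, 200 mg/L:
print(log_interp_concentration(t=1.5, t0=1.0, c0=100.0, t1=2.0, c1=400.0))
```

Interpolating in log space keeps concentrations positive and better matches the roughly log-normal behavior of sediment concentrations than straight linear interpolation.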
Napier, Bruce A; Eslinger, Paul W; Tolstykh, Evgenia I; Vorobiova, Marina I; Tokareva, Elena E; Akhramenko, Boris N; Krivoschapov, Victor A; Degteva, Marina O
2017-11-01
Time-dependent thyroid doses were reconstructed for over 29,000 Techa River Cohort members living near the Mayak production facilities from 131I released to the atmosphere for all relevant exposure pathways. The calculational approach uses four general steps: 1) construct estimates of releases of 131I to the air from production facilities; 2) model the transport of 131I in the air and subsequent deposition on the ground and vegetation; 3) model the accumulation of 131I in environmental media; and 4) calculate individualized doses. The dose calculations are implemented in a Monte Carlo framework that produces best estimates and confidence intervals of dose time-histories. Other radionuclide contributors to thyroid dose were evaluated. The 131I contribution was 75-99% of the thyroid dose. The mean total thyroid dose for cohort members was 193 mGy and the median was 53 mGy. Thyroid doses for about 3% of cohort members were larger than 1 Gy. About 7% of children born in 1940-1950 had doses larger than 1 Gy. The uncertainty in the 131I dose estimates is low enough for this approach to be used in regional epidemiological studies. Copyright © 2017. Published by Elsevier Ltd.
Specific agreement on dichotomous outcomes can be calculated for more than two raters.
de Vet, Henrica C W; Dikmans, Rieky E; Eekhout, Iris
2017-03-01
For assessing interrater agreement, the concepts of observed agreement and specific agreement have been proposed. The situation of two raters and dichotomous outcomes has been described, whereas often multiple raters are involved. We aim to extend these concepts to more than two raters and examine how to calculate agreement estimates and 95% confidence intervals (CIs). As an illustration, we used a reliability study that includes the scores of four plastic surgeons classifying photographs of breasts of 50 women after breast reconstruction into "satisfied" or "not satisfied." In a simulation study, we checked the hypothesized sample size for calculation of 95% CIs. For m raters, all pairwise tables [i.e., m(m − 1)/2] were summed. Then, the discordant cells were averaged before observed and specific agreements were calculated. The total number (N) in the summed table is m(m − 1)/2 times larger than the number of subjects (n); in the example, N = 300 compared to n = 50 subjects times m = 4 raters. A correction of n√(m − 1) was appropriate to find 95% CIs comparable to bootstrapped CIs. The concept of observed agreement and specific agreement can be extended to more than two raters with a valid estimation of the 95% CIs. Copyright © 2017 Elsevier Inc. All rights reserved.
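The pairwise-table construction can be sketched as follows; the scores are hypothetical, and the CI machinery with the n√(m − 1) correction is omitted:

```python
from itertools import combinations

def agreement(scores):
    """Observed and specific (positive/negative) agreement for m raters
    scoring n subjects dichotomously (1 = "satisfied", 0 = "not satisfied").
    All m(m-1)/2 pairwise 2x2 tables are summed and the discordant cells
    averaged, following the approach described in the abstract.
    `scores` is a list of per-subject tuples, one entry per rater."""
    a = b = c = d = 0.0
    for subject in scores:
        for r1, r2 in combinations(range(len(subject)), 2):
            s1, s2 = subject[r1], subject[r2]
            if s1 == 1 and s2 == 1:
                a += 1                     # both positive
            elif s1 == 0 and s2 == 0:
                d += 1                     # both negative
            elif s1 == 1 and s2 == 0:
                b += 1                     # discordant
            else:
                c += 1                     # discordant
    b = c = (b + c) / 2.0                  # average the discordant cells
    n_pairs = a + b + c + d
    observed = (a + d) / n_pairs
    pos_specific = 2 * a / (2 * a + b + c)
    neg_specific = 2 * d / (2 * d + b + c)
    return observed, pos_specific, neg_specific

# Four hypothetical raters, three subjects:
obs, pos, neg = agreement([(1, 1, 1, 1), (1, 1, 0, 0), (0, 0, 0, 0)])
print(round(obs, 3), round(pos, 3), round(neg, 3))
```

With m = 2 this reduces to the usual single 2×2 table, so the extension is backward compatible.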
DOE Office of Scientific and Technical Information (OSTI.GOV)
Napier, Bruce A.; Eslinger, Paul W.; Tolstykh, Evgenia I.
Time-dependent thyroid doses were reconstructed for Techa River Cohort members living near the Mayak production facilities from 131I released to the atmosphere for all relevant exposure pathways. The calculational approach uses four general steps: 1) construct estimates of releases of 131I to the air from production facilities; 2) model the transport of 131I in the air and subsequent deposition on the ground and vegetation; 3) model the accumulation of 131I in soil, water, and food products (environmental media); and 4) calculate individual doses by matching appropriate lifestyle and consumption data for the individual to concentrations of 131I in environmental media. The dose calculations are implemented in a Monte Carlo framework that produces best estimates and confidence intervals of dose time-histories. The 131I contribution was 75-99% of the thyroid dose. The mean total thyroid dose for cohort members was 193 mGy and the median was 53 mGy. Thyroid doses for about 3% of cohort members were larger than 1 Gy. About 7% of children born in 1940-1950 had doses larger than 1 Gy. The uncertainty in the 131I dose estimates is low enough for this approach to be used in regional epidemiological studies.
Flood of May 2006 in York County, Maine
Stewart, Gregory J.; Kempf, Joshua P.
2008-01-01
A stalled low-pressure system over coastal New England on Mother's Day weekend, May 13-15, 2006, released rainfall in excess of 15 inches. This flood (sometimes referred to as the 'Mother's Day flood') caused widespread damage to homes, businesses, roads, and structures in southern Maine. The damage to public property in York County was estimated to be $7.5 million. As a result of these damages, a presidential disaster declaration was enacted on May 25, 2006, for York County, Maine. Peak-flow recurrence intervals for eight of the nine streams studied were calculated to be greater than 500 years. The peak-flow recurrence interval of the remaining stream was calculated to be between a 100-year and a 500-year interval. This report provides a detailed description of the May 2006 flood in York County, Maine. Information is presented on peak streamflows and peak-flow recurrence intervals on nine streams, peak water-surface elevations for 80 high-water marks at 25 sites, hydrologic conditions before and after the flood, and comparisons with published Flood Insurance Studies, and the flood is placed in context with historical floods in York County. At sites on several streams, differences were observed between peak flows published in the Flood Insurance Studies and those calculated for this study. The differences in the peak flows from the published Flood Insurance Studies and the flows calculated for this report are within an acceptable range for flows calculated at ungaged locations, with the exception of those for the Great Works River and Merriland River. For sites on the Mousam River, Blacksmith Brook, Ogunquit River, and Cape Neddick River, water-surface elevations from Flood Insurance Studies differed from documented water-surface elevations from the 2006 flood.
Taneja, Sonali; Mishra, Neha; Malik, Shubhra
2014-01-01
Introduction: Irrigation plays an indispensable role in the removal of tissue remnants and debris from the complicated root canal system. This study compared human pulp tissue dissolution by different concentrations of chlorine dioxide, calcium hypochlorite and sodium hypochlorite. Materials and Methods: Pulp tissue was standardized to a weight of 9 mg for each sample. In all, 60 samples were obtained and divided into 6 groups according to the irrigating solution used: 2.5% sodium hypochlorite (NaOCl), 5.25% NaOCl, 5% calcium hypochlorite (Ca(OCl)2), 10% Ca(OCl)2, 5% chlorine dioxide (ClO2) and 13% ClO2. Pulp tissue was placed in each test tube carrying irrigant of measured volume (5 ml) according to the specified subgroup time interval: 30 minutes (Subgroup A) and 60 minutes (Subgroup B). The solution from each sample test tube was filtered and left to dry overnight. The residual weight was calculated by the filtration method. Results: Mean tissue dissolution increased with increasing time period. Results showed 5.25% NaOCl to be most effective at both time intervals, followed by 2.5% NaOCl at 60 minutes, then 10% Ca(OCl)2 and 13% ClO2 at 60 minutes. The least tissue-dissolving ability was demonstrated by 5% Ca(OCl)2 and 5% ClO2 at 30 minutes. Distilled water showed no pulp tissue dissolution. Conclusion: Within the limitations of the study, NaOCl most efficiently dissolved the pulp tissue at both concentrations and at both time intervals. Mean tissue dissolution by Ca(OCl)2 and ClO2 gradually increased with time and with increase in concentration. PMID:25506141
Park, Ji Hyun; Hwang, Gyu-Sam
2016-08-01
A blood pressure (BP) waveform contains various pieces of information related to respiratory variation. Systolic time interval (STI) reflects myocardial performance, and diastolic time interval (DTI) represents diastolic filling. This study examined whether respiratory variations of STI and DTI within the radial arterial waveform are comparable to dynamic indices. During liver transplantation, digitally recorded BP waveforms and stroke volume variation (SVV) were retrospectively analyzed. Beat-to-beat STI and DTI were extracted within each BP waveform, separated by the dicrotic notch. Systolic time variation (STV) was calculated as the average over 3 consecutive respiratory cycles of (STImax − STImin)/STImean. A similar formula was used for diastolic time variation (DTV) and pulse pressure variation (PPV). Receiver operating characteristic analysis with area under the curve (AUC) was used to assess thresholds predictive of SVV ≥12% and PPV ≥12%. STV and DTV showed significant correlations with SVV (r = 0.78 and r = 0.67, respectively) and PPV (r = 0.69 and r = 0.69, respectively). Receiver operating characteristic curves demonstrated that STV ≥11% predicted SVV ≥12% with 85.7% sensitivity and 89.3% specificity (AUC = 0.935; P < .001). DTV ≥11% predicted SVV ≥12% with 71.4% sensitivity and 85.7% specificity (AUC = 0.829; P < .001). STV ≥12% and DTV ≥11% predicted PPV ≥12% with AUCs of 0.881 and 0.885, respectively. Respiratory variations of STI and DTI derived from the radial arterial contour have the potential to predict hemodynamic response as a surrogate for SVV or PPV. Copyright © 2016 Elsevier Inc. All rights reserved.
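The variation index defined above, averaged over 3 consecutive respiratory cycles, can be sketched with hypothetical beat-to-beat values:

```python
def variation_index(values):
    """Respiratory variation index used for STV, DTV and PPV:
    (max - min) / mean over one respiratory cycle, in percent."""
    return 100.0 * (max(values) - min(values)) / (sum(values) / len(values))

def mean_variation(cycles):
    """Average the index over 3 consecutive respiratory cycles, as in the
    abstract.  `cycles` is a list of per-cycle beat-to-beat value lists."""
    return sum(variation_index(c) for c in cycles) / len(cycles)

# Hypothetical beat-to-beat systolic time intervals (s) for 3 cycles:
cycles = [[0.30, 0.32, 0.34, 0.31],
          [0.29, 0.33, 0.31, 0.30],
          [0.30, 0.31, 0.33, 0.32]]
print(round(mean_variation(cycles), 1))   # STV in percent
```

The same two functions apply to DTI values (giving DTV) and beat-to-beat pulse pressures (giving PPV); only the input series changes.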
Magnetic Resonance Fingerprinting with short relaxation intervals.
Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter
2017-09-01
The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: the largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially resolved MRF. Copyright © 2017 Elsevier Inc. All rights reserved.
Scheurer, Eva; Ith, Michael; Dietrich, Daniel; Kreis, Roland; Hüsler, Jürg; Dirnhofer, Richard; Boesch, Chris
2005-05-01
Knowledge of the time interval from death (post-mortem interval, PMI) has an enormous legal, criminological and psychological impact. Aiming to find an objective method for the determination of PMIs in forensic medicine, 1H-MR spectroscopy (1H-MRS) was used in a sheep head model to follow changes in brain metabolite concentrations after death. Following the characterization of newly observed metabolites (Ith et al., Magn. Reson. Med. 2002; 5: 915-920), the full set of acquired spectra was analyzed statistically to provide a quantitative estimation of PMIs with their respective confidence limits. In a first step, analytical mathematical functions are proposed to describe the time courses of 10 metabolites in the decomposing brain up to 3 weeks post-mortem. Subsequently, the inverted functions are used to predict PMIs based on the measured metabolite concentrations. Individual PMIs calculated from five different metabolites are then pooled, being weighted by their inverse variances. The predicted PMIs from all individual examinations in the sheep model are compared with known true times. In addition, four human cases with forensically estimated PMIs are compared with predictions based on single in situ MRS measurements. Interpretation of the individual sheep examinations gave a good correlation up to 250 h post-mortem, demonstrating that the predicted PMIs are consistent with the data used to generate the model. Comparison of the estimated PMIs with the forensically determined PMIs in the four human cases shows an adequate correlation. Current PMI estimations based on forensic methods typically suffer from uncertainties in the order of days to weeks without mathematically defined confidence information. In turn, a single 1H-MRS measurement of brain tissue in situ results in PMIs with defined and favorable confidence intervals in the range of hours, thus offering a quantitative and objective method for the determination of PMIs. Copyright 2004 John Wiley & Sons, Ltd.
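The inverse-variance pooling of per-metabolite PMI estimates described above can be sketched as follows (all numbers are hypothetical):

```python
def pool_inverse_variance(estimates, variances):
    """Pool individual PMI estimates weighted by their inverse variances,
    as described in the abstract; returns the pooled estimate and its
    variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Five hypothetical metabolite-based PMI estimates (h) and their variances:
pmi, var = pool_inverse_variance([40.0, 44.0, 38.0, 42.0, 41.0],
                                 [4.0, 9.0, 16.0, 4.0, 25.0])
print(round(pmi, 2), round(var, 2))
```

Precise metabolites (small variance) dominate the pooled estimate, and the pooled variance is smaller than any individual variance, which is what narrows the confidence interval from days to hours.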
de Hoon, Jan; Van Hecken, Anne; Vandermeulen, Corinne; Herbots, Marissa; Kubo, Yumi; Lee, Ed; Eisele, Osa; Vargas, Gabriel; Gabriel, Kristin
2018-01-01
Objectives The aim of this study was to assess the effects of concomitant administration of erenumab and sumatriptan on resting blood pressure, pharmacokinetics, safety, and tolerability in healthy subjects. Methods In this phase 1, parallel-group, one-way crossover, double-blind, placebo-controlled study, healthy adult subjects were randomized (1:2) to receive either intravenous placebo and subcutaneous sumatriptan 12 mg (i.e. two 6-mg injections separated by 1 hour) or intravenous erenumab 140 mg and subcutaneous sumatriptan 12 mg. Blood pressure was measured pre-dose and at prespecified times post-dose. The primary endpoint was individual time-weighted averages of mean arterial pressure, measured from 0 hours to 2.5 hours after the first dose of sumatriptan. Pharmacokinetic parameters for sumatriptan were evaluated by calculating geometric mean ratios (erenumab and sumatriptan/placebo and sumatriptan). Adverse events and anti-erenumab antibodies were also evaluated. Results A total of 34 subjects were randomized and included in the analysis. Least squares mean (standard error) time-weighted averages of mean arterial pressure were 87.4 (1.0) mmHg for the placebo and sumatriptan group and 87.4 (1.2) mmHg for the erenumab and sumatriptan group. Mean difference in mean arterial pressure between groups was -0.04 mmHg (90% confidence interval: -2.2, 2.1). Geometric mean ratio estimates for maximum plasma concentration of sumatriptan was 0.95 (90% confidence interval: 0.82, 1.09), area under the plasma concentration-time curve (AUC) from time 0 to 6 hours was 0.98 (90% confidence interval: 0.93, 1.03), and AUC from time 0 to infinity was 1.00 (90% confidence interval: 0.96, 1.05). No clinically relevant safety findings for co-administration of sumatriptan and erenumab were identified. Conclusion Co-administration of erenumab and sumatriptan had no additional effect on resting blood pressure or on pharmacokinetics of sumatriptan. ClinicalTrials.gov, NCT02741310.
Inactivation of Salmonella during cocoa roasting and chocolate conching.
Nascimento, Maristela da Silva do; Brum, Daniela Merlo; Pena, Pamela Oliveira; Berto, Maria Isabel; Efraim, Priscilla
2012-10-15
The high heat resistance of Salmonella in foods with low water activity raises particular issues for food safety, especially in chocolate, where outbreak investigations indicate that few colony-forming units are necessary to cause salmonellosis. This study evaluated the efficiency of cocoa roasting and milk chocolate conching in the inactivation of a Salmonella 5-strain suspension. Thermal resistance of Salmonella was greater in nibs than in cocoa beans upon exposure at 110 to 130°C. The D-values in nibs were 1.8, 2.2 and 1.5-fold higher than those calculated for cocoa beans at 110, 120 and 130°C, respectively. There was no significant difference (p>0.05) between the matrices only at 140°C. In the conching of milk chocolate, the inactivation curves showed rapid death in the first 180 min followed by a lower inactivation rate, so two D-values were calculated. For the first time interval (0-180 min) the D-values were 216.87, 102.27 and 50.99 min at 50, 60 and 70°C, respectively. The other D-values were determined from the second time interval (180-1440 min): 1076.76 min at 50°C, 481.94 min at 60°C and 702.23 min at 70°C. The results demonstrated that the type of matrix, the process temperature and the initial count influenced Salmonella resistance. Copyright © 2012 Elsevier B.V. All rights reserved.
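D-values of the kind reported above are conventionally obtained from the slope of log10 survivor counts versus time; a minimal sketch with invented data:

```python
def d_value(times_min, log10_counts):
    """D-value (minutes for a 1-log10 reduction) from survivor counts:
    least-squares slope of log10(N) versus time, D = -1/slope."""
    n = len(times_min)
    t_mean = sum(times_min) / n
    y_mean = sum(log10_counts) / n
    slope = (sum((t - t_mean) * (y - y_mean)
                 for t, y in zip(times_min, log10_counts))
             / sum((t - t_mean) ** 2 for t in times_min))
    return -1.0 / slope

# Hypothetical survivor counts losing one log10 every 100 min -> D = 100 min:
print(round(d_value([0, 60, 120, 180], [6.0, 5.4, 4.8, 4.2]), 1))
```

Fitting the two conching time intervals (0-180 min and 180-1440 min) separately, as the study did, yields one D-value per regime of the biphasic inactivation curve.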
Hamilton, Matthew T; Kupar, Caitlin A; Kelley, Meghan D; Finger, John W; Tuberville, Tracey D
2016-07-01
American alligators (Alligator mississippiensis) are one of the most studied crocodilian species in the world, yet blood and plasma biochemistry information is limited for juvenile alligators in their northern range, where individuals may be exposed to extreme abiotic and biotic stressors. We collected blood samples over a 2-yr period from 37 juvenile alligators in May, June, and July to establish reference intervals for 22 blood and plasma analytes. We observed no effect of either sex or blood collection time on any analyte investigated. However, our results indicate a significant correlation between a calculated body condition index and aspartate aminotransferase and creatine kinase. Glucose, total protein, and potassium varied significantly between sampling sessions. In addition, glucose and potassium were highly correlated between the two point-of-care devices used, although they were significantly lower with the i-STAT 1 CG8+ cartridge than with the Vetscan VS2 Avian/Reptile Rotor. The reference intervals presented herein should provide baseline data for evaluating wild juvenile alligators in the northern portion of their range.
NASA Technical Reports Server (NTRS)
Fichtl, G. H.
1971-01-01
Statistical estimates of wind shear in the planetary boundary layer are important in the design of V/STOL aircraft, and for the design of the Space Shuttle. The data analyzed in this study consist of eleven sets of longitudinal turbulent velocity fluctuation time histories digitized at 0.2 sec intervals with approximately 18,000 data points per time history. The longitudinal velocity fluctuations were calculated with horizontal wind and direction data collected at the 18-, 30-, 60-, 90-, 120-, and 150-m levels. The data obtained confirm the result that Eulerian time spectra transformed to wave-number spectra with Taylor's frozen eddy hypothesis possess inertial-like behavior at wave-numbers well out of the inertial subrange.
Kozacıoğlu, Zafer; Ceylan, Yasin; Aydoğdu, Özgü; Bolat, Deniz; Günlüsoy, Bülent; Minareci, Süleyman
2017-03-01
We updated our data on penile fractures and investigated the effect of the time interval from the fracture until the operation on erectile function and long-term complications. Between January 2001 and June 2014, 64 patients were operated on with a preoperative diagnosis of penile fracture. We could evaluate 54 of these patients. The patients were classified into 3 groups according to the time interval from the time of fracture until surgery. The validated Turkish version of the erectile components of the International Index of Erectile Function (IIEF) was answered by every patient 3 times after the surgery: for before the fracture, at the first postoperative year, and at the time of the study (IIEF-5 and question #15 were used). The complications were noted and an erectile function index score was calculated for every patient. Mean follow-up period was 44.9 (±2.8) months for all patients. There was no statistically significant difference between the 3 groups in terms of the erectile components of IIEF questionnaire scores for the time periods and for individual patients in each separate group. Complications for all groups were also similar. In consideration of long-term results, neither serious deformities nor erectile dysfunction occurred as a consequence of a delay in surgery performed within the first 24 hours in patients without urethral involvement.
Minimax rational approximation of the Fermi-Dirac distribution.
Moussa, Jonathan E
2016-10-28
Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ϵ⁻¹)) poles to achieve an error tolerance ϵ at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_occ, the occupied energy interval. This is particularly beneficial when Δ ≫ Δ_occ, such as in electronic structure calculations that use a large basis set.
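For context (this is not the paper's minimax construction), the simplest pole expansion of the Fermi-Dirac function is the truncated Matsubara sum, an exact identity in the infinite-pole limit whose slow ~1/N convergence is what optimized pole placements improve on:

```python
import math

def fermi_exact(x):
    """Fermi-Dirac occupation f(x) = 1 / (1 + e^x), with x = beta * (E - mu)."""
    return 1.0 / (1.0 + math.exp(x))

def fermi_matsubara(x, n_poles):
    """Truncated Matsubara pole expansion (exact as n_poles -> infinity):
    f(x) = 1/2 - 2x * sum_{n=0}^{N-1} 1 / (x^2 + ((2n+1)*pi)^2).
    Convergence is only ~1/N, which is why optimized rational approximations
    (such as the minimax fit in the abstract) need far fewer poles."""
    s = sum(1.0 / (x * x + ((2 * n + 1) * math.pi) ** 2) for n in range(n_poles))
    return 0.5 - 2.0 * x * s
```

Thousands of Matsubara poles are needed for modest accuracy here, versus the logarithmic pole counts quoted in the abstract.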
Minimax rational approximation of the Fermi-Dirac distribution
NASA Astrophysics Data System (ADS)
Moussa, Jonathan E.
2016-10-01
Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ɛ⁻¹)) poles to achieve an error tolerance ɛ at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δocc, the occupied energy interval. This is particularly beneficial when Δ ≫ Δocc, such as in electronic structure calculations that use a large basis set.
Critical evaluation of measured line positions of 14N16O in the X²Π state
NASA Astrophysics Data System (ADS)
Sulakshina, O. N.; Borkov, Yu. G.
2018-04-01
All available line positions for unresolved and resolved Λ-doublets of the 14N16O molecule in the X²Π state were collected from the literature and tested using the RITZ computer code. These data have been critically analysed and used to obtain the most complete set of 1789 experimental energy levels of unresolved Λ-doublets covering the 0-35,866 cm⁻¹ interval. A set of 425 experimental energy levels of resolved Λ-doublets covering the 0-5957 cm⁻¹ interval for the two states ²Π1/2 and ²Π3/2 has also been obtained. These levels, together with the calculated correlation matrix, can be used to generate a precise list of transitions with confidence intervals. Comparisons with the HITRAN database as well as with Amiot's calculations are discussed. A systematic shift between experimental energy levels of unresolved Λ-doublets and those calculated by Amiot for the ²Π3/2 state was found. The same systematic shift in transition frequencies of unresolved Λ-doublets in the forbidden subbands ²Π1/2 ↔ ²Π3/2 is also established in the HITRAN database. A comparison of the RITZ energy levels with the energy levels calculated by Wong et al. was also made. It was found that the experimental RITZ energy levels for resolved Λ-doublets of 14N16O coincide with those calculated by Wong et al. within experimental uncertainties.
Quantitative analysis of eosinophil chemotaxis tracked using a novel optical device -- TAXIScan.
Nitta, Nao; Tsuchiya, Tomoko; Yamauchi, Akira; Tamatani, Takuya; Kanegasaki, Shiro
2007-03-30
We have reported previously the development of an optically accessible, horizontal chemotaxis apparatus, in which migration of cells in the channel from a start line can be traced at time-lapse intervals using a CCD camera (JIM 282, 1-11, 2003). To obtain statistical data on migrating cells, we have developed quantitative methods to calculate various parameters in the process of chemotaxis, employing human eosinophils and CXCL12 as a model cell and a model chemoattractant, respectively. Median values of the velocity and directionality of each cell within an experimental period could be calculated from the migratory pathway data obtained from time-lapse images, and the data were expressed as a Velocity-Directionality (VD) plot. This plot is useful for quantitatively analyzing multiple migrating cells exposed to a given chemoattractant, and can distinguish chemotaxis from random migration. Moreover, precise observation of cell migration revealed that each cell had a different lag period before starting chemotaxis, indicating variation in cell sensitivity to the chemoattractant. Thus, the lag time of each cell before migration, and the time course of the increment of the migrating-cell ratio at the early stages, could be calculated. We also graphed the decrement of the still-moving-cell ratio at the later stages by calculating the duration of cell migration for each cell. These graphs could distinguish different motion patterns of chemotaxis of eosinophils in response to a range of chemoattractants: PGD(2), fMLP, CCL3, CCL5 and CXCL12. Finally, we compared parameters of eosinophils from normal volunteers, allergy patients and asthma patients and found a significant difference in the response to PGD(2). The quantitative methods described here are applicable to image data obtained with any combination of cells and chemoattractants and are useful not only for basic studies of chemotaxis but also for diagnosis and drug screening.
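A sketch of the kind of per-cell velocity and directionality statistics described (the track data and the assumption that the chemoattractant gradient lies along +x are illustrative, not the paper's definitions):

```python
import math

def velocity_directionality(track, dt_s):
    """track: (x, y) positions sampled every dt_s seconds; the chemoattractant
    gradient is assumed to lie along +x (an assumption of this sketch).
    velocity = path length / elapsed time;
    directionality = net +x displacement / path length
    (1 = straight up-gradient, ~0 = random migration)."""
    path = sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(track, track[1:]))
    elapsed = dt_s * (len(track) - 1)
    net_x = track[-1][0] - track[0][0]
    return path / elapsed, (net_x / path if path > 0 else 0.0)
```

Plotting one (velocity, directionality) point per cell yields a VD plot in which chemotactic populations cluster at high directionality.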
Pentikis, Helen S; Simmons, Roy D; Benedict, Michael F; Hatch, Simon J
2002-04-01
To determine the single-dose bioavailability of 20-mg Metadate CD (methylphenidate HCl, USP) Extended-Release Capsules sprinkled onto 1 level tablespoon (15 mL) of applesauce relative to an intact capsule under fasted conditions in healthy adults. This was a single-center, open-label, single-dose, randomized, two-way crossover study with a 6-day washout period between doses, in healthy male and female subjects (N = 26), aged 21-40 years. Plasma concentration-time data for methylphenidate were used to calculate the pharmacokinetic parameters for each treatment. The pharmacokinetic profile for Metadate CD exhibited biphasic release characteristics with a sharp initial slope and a second rising portion. For Cmax (maximum observed concentration), AUC(0-infinity) (area under the plasma concentration curve from time 0 to infinity) and AUC(0-t) (area under the plasma concentration curve from time 0 to the last measurable time point), the geometric least squares mean ratios and 90% confidence intervals were within the 80% to 125% limits for bioequivalence. Adverse events were similar to those reported for methylphenidate. The bioavailability of methylphenidate was not altered when Metadate CD capsules were administered by sprinkling their contents onto a small amount of applesauce.
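Bioequivalence conclusions of this kind rest on geometric mean ratios with 90% confidence intervals computed on log-transformed PK parameters. A simplified sketch (paired log-differences with a normal quantile standing in for Student's t, and made-up values rather than study data):

```python
import math
from statistics import NormalDist, mean, stdev

def gmr_90ci(test_vals, ref_vals):
    """Geometric mean ratio and 90% CI from paired PK parameters in a
    crossover: work on log-differences, then exponentiate.  Simplification:
    a normal quantile replaces Student's t (slightly narrow for small n).
    Bioequivalence requires the CI to fall within 0.80-1.25."""
    diffs = [math.log(t) - math.log(r) for t, r in zip(test_vals, ref_vals)]
    m = mean(diffs)
    se = stdev(diffs) / math.sqrt(len(diffs))
    z = NormalDist().inv_cdf(0.95)  # two-sided 90% CI
    return math.exp(m), math.exp(m - z * se), math.exp(m + z * se)
```

The study's geometric least-squares means additionally adjust for sequence and period effects in the crossover model, which this paired sketch omits.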
Apparatus and method for closed-loop control of reactor power in minimum time
Bernard, Jr., John A.
1988-11-01
A closed-loop control law for altering the power level of nuclear reactors safely, without overshoot, and in minimum time. Apparatus is provided for moving a fast-acting control element, such as a control rod or a control drum, for altering the nuclear reactor power level. A computer computes at short time intervals either the function dρ/dt = (β − ρ)ω − λe'ρ − Σ βi(λi − λe') + l*(dω/dt) + l*[ω² + λe'ω] or the function dρ/dt = (β − ρ)ω − λe ρ − ((dλe/dt)/λe)(β − ρ) + l*(dω/dt) + l*[ω² + λe ω − ((dλe/dt)/λe)ω], where ρ is the reactivity, ω the inverse reactor period, β and βi the total and group delayed-neutron fractions, λi the group decay constants, λe and λe' effective decay constants, and l* the prompt neutron lifetime (transliterated here from the patent's encoded notation). These functions each specify the rate of change of reactivity that is necessary to achieve a specified rate of change of reactor power. The direction and speed of motion of the control element is altered so as to provide the rate of reactivity change calculated using either or both of these functions, thereby resulting in the attainment of a new power level without overshoot and in minimum time. These functions are computed at intervals of approximately 0.01-1.0 seconds depending on the specific application.
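A sketch of evaluating the first of the two control-law functions at one time step (all parameter values used with this sketch are illustrative placeholders, not the patent's, and the encoded symbols are interpreted as in standard point-kinetics notation):

```python
def reactivity_rate(rho, omega, omega_dot, beta, beta_i, lam_i, lam_e, l_star):
    """First control-law function from the abstract: the required rate of
    reactivity change drho/dt for a desired inverse period omega and its
    time derivative omega_dot.  beta_i/lam_i are the delayed-neutron group
    fractions and decay constants; lam_e is the effective decay constant;
    l_star is the prompt neutron lifetime.  Values passed in are placeholders."""
    return ((beta - rho) * omega
            - lam_e * rho
            - sum(b * (lam - lam_e) for b, lam in zip(beta_i, lam_i))
            + l_star * omega_dot
            + l_star * (omega ** 2 + lam_e * omega))
```

In the patent's scheme this quantity, recomputed every 0.01-1.0 s, sets the direction and speed of control-element motion.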
Paci, Eugenio; Miccinesi, Guido; Puliti, Donella; Baldazzi, Paola; De Lisi, Vincenzo; Falcini, Fabio; Cirilli, Claudia; Ferretti, Stefano; Mangone, Lucia; Finarelli, Alba Carola; Rosso, Stefano; Segnan, Nereo; Stracci, Fabrizio; Traina, Adele; Tumino, Rosario; Zorzi, Manuel
2006-01-01
Introduction Excess of incidence rates is the expected consequence of service screening. The aim of this paper is to estimate the quota attributable to overdiagnosis in the breast cancer screening programmes in Northern and Central Italy. Methods All patients with breast cancer diagnosed between 50 and 74 years who were resident in screening areas in the six years before and five years after the start of the screening programme were included. We calculated a corrected-for-lead-time number of observed cases for each calendar year. The number of observed incident cases was reduced by the number of screen-detected cases in that year and incremented by the estimated number of screen-detected cases that would have arisen clinically in that year. Results In total we included 13,519 and 13,999 breast cancer cases diagnosed in the pre-screening and screening years, respectively. In total, the excess ratio of observed to predicted in situ and invasive cases was 36.2%. After correction for lead time the excess ratio was 4.6% (95% confidence interval 2 to 7%) and for invasive cases only it was 3.2% (95% confidence interval 1 to 6%). Conclusion The remaining excess of cancers after individual correction for lead time was lower than 5%. PMID:17147789
Step scaling and the Yang-Mills gradient flow
NASA Astrophysics Data System (ADS)
Lüscher, Martin
2014-06-01
The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0, T] and all fields satisfy Dirichlet boundary conditions at times 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.
NASA Astrophysics Data System (ADS)
Lundberg, J.; Conrad, J.; Rolke, W.; Lopez, A.
2010-03-01
A C++ class was written for the calculation of frequentist confidence intervals using the profile likelihood method. Seven combinations of Binomial, Gaussian, Poissonian and Binomial uncertainties are implemented. The package provides routines for the calculation of upper and lower limits, sensitivity and related properties. It also supports hypothesis tests which take uncertainties into account. It can be used in compiled C++ code, in Python or interactively via the ROOT analysis framework. Program summary: Program title: TRolke version 2.0. Catalogue identifier: AEFT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: MIT license. No. of lines in distributed program, including test data, etc.: 3431. No. of bytes in distributed program, including test data, etc.: 21,789. Distribution format: tar.gz. Programming language: ISO C++. Computer: Unix, GNU/Linux, Mac. Operating system: Linux 2.6 (Scientific Linux 4 and 5, Ubuntu 8.10), Darwin 9.0 (Mac OS X 10.5.8). RAM: ~20 MB. Classification: 14.13. External routines: ROOT (http://root.cern.ch/drupal/). Nature of problem: The problem is to calculate a frequentist confidence interval on the parameter of a Poisson process with statistical or systematic uncertainties in signal efficiency or background. Solution method: Profile likelihood method, analytical. Running time: <10 seconds per extracted limit.
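TRolke itself is a C++/ROOT class; as a language-neutral illustration of the underlying idea (and not a port of TRolke), here is a likelihood-ratio interval for a Poisson observation with exactly known background, i.e. the profile likelihood method in the special case of no nuisance parameters, scanned on a grid:

```python
import math

def likelihood_interval(n_obs, bkg, delta_2lnl=2.706):
    """Likelihood-ratio interval for a Poisson signal s >= 0 with exactly
    known background: {s : 2*[lnL(s_hat) - lnL(s)] <= delta_2lnl}.
    delta_2lnl = 2.706 gives an approximate 90% CL interval."""
    def logl(s):
        mu = s + bkg
        if mu <= 0.0:
            return 0.0 if n_obs == 0 else float("-inf")
        return n_obs * math.log(mu) - mu - math.lgamma(n_obs + 1)
    s_hat = max(0.0, n_obs - bkg)              # MLE clipped to the physical bound
    grid = [0.001 * i for i in range(100001)]  # scan s over [0, 100]
    inside = [s for s in grid if 2.0 * (logl(s_hat) - logl(s)) <= delta_2lnl]
    return min(inside), max(inside)
```

With uncertain efficiency or background, the full method would maximize the likelihood over those nuisance parameters at each trial s, which is what the TRolke class implements for its seven uncertainty models.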
High-intensity interval training has positive effects on performance in ice hockey players.
Naimo, M A; de Souza, E O; Wilson, J M; Carpenter, A L; Gilchrist, P; Lowery, R P; Averbuch, B; White, T M; Joy, J
2015-01-01
In spite of the well-known benefits that have been shown, few studies have looked at the practical applications of high-intensity interval training (HIIT) on athletic performance. This study investigated the effects of a HIIT program compared to traditional continuous endurance exercise training. 24 hockey players were randomly assigned to either a continuous or high-intensity interval group during a 4-week training program. The interval group (IG) was involved in a periodized HIIT program. The continuous group (CG) performed moderate-intensity cycling for 45-60 min at an intensity that was 65% of their calculated heart rate reserve. Body composition, muscle thickness, anaerobic power, and on-ice measures were assessed pre- and post-training. Muscle thickness was significantly greater in IG (p=0.01) when compared to CG. The IG had greater values for both ∆ peak power (p<0.003) and ∆ mean power (p<0.02). Additionally, IG demonstrated a faster ∆ sprint (p<0.02) and a trend (p=0.08) toward a faster ∆ endurance test time to completion. These results indicate that hockey players may utilize short-term HIIT to elicit positive effects in muscle thickness, power and on-ice performance. © Georg Thieme Verlag KG Stuttgart · New York.
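Prescribing intensity as a percentage of heart rate reserve, as the continuous group's protocol does, corresponds to the Karvonen formula; a minimal sketch (the heart rates below are illustrative, not subjects' data):

```python
def karvonen_target_hr(hr_max, hr_rest, intensity):
    """Heart-rate-reserve (Karvonen) method:
    reserve = hr_max - hr_rest; target = hr_rest + intensity * reserve."""
    return hr_rest + intensity * (hr_max - hr_rest)

# e.g. 65% of heart rate reserve for illustrative resting/maximal heart rates
target = karvonen_target_hr(hr_max=195, hr_rest=60, intensity=0.65)
```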
NASA Astrophysics Data System (ADS)
Cande, S. C.; Stock, J. M.
2010-12-01
Motion between East and West Antarctica in the Late Cretaceous and Cenozoic is derived by summing the plate circuit(s) linking East Antarctica to Australia to the Lord Howe Rise to the Pacific plate to West Antarctica (the Aus-Pac plate circuit). We discuss this motion in two parts: motion before and after 42 Ma. For the younger time interval, motion is directly constrained by magnetic anomalies in the Adare Basin, which opened by ultraslow seafloor spreading between 42 and 26 Ma (anomalies 18 to 9). The Adare Basin magnetic anomaly constraints can be combined with magnetic anomaly and fracture zone data from the SEIR (Aus-East Ant to the west of the Balleny FZ and Aus-West Ant to the east) to set up an Aus-East Ant-West Ant three-plate problem. The original solution of this three-plate configuration (Cande et al., 2000) only had data from a very short section of the Adare Basin and obtained an answer with very large uncertainties on the East-West Ant rotation. Better estimates of the East-West Ant rotation have been calculated by adding constraints based on seismically controlled estimates of extension in the Victoria Land Basin (Davey et al., 2006) and constraints from Damaske et al.'s (2007) detailed aeromagnetic survey of the Adare Basin and adjacent Northern Basin (Granot et al., 2010). Currently, we are working on improving the accuracy of rotations for the post-42 Ma time interval by taking advantage of an unusual plate geometry that enables us to solve a five-boundary, four-plate configuration. Specifically, motion between the four plates (East Ant, West Ant, Aus and Pac) encompasses two related triple junction systems with five spreading ridge segments (Aus-East Ant, Aus-West Ant, Aus-Pac, Pac-West Ant and East Ant-West Ant) which can be combined and solved simultaneously.
For the older, pre-42 Ma time interval, the only way to calculate motion between East and West Antarctica is via the long Aus-Pac plate circuit (although it is possible that magnetic anomalies formed by direct spreading between East and West Antarctica, akin to the Adare Basin anomalies, may exist in the poorly mapped Central Basin between the Hallett Ridge and the Iselin Bank). The weakest link in this time interval is the Aus-East Ant boundary; for the time interval from 84 to 42 Ma there are distinctly different results depending on how the tectonic constraints are prioritized (Royer and Rollett, 1997; Tikku and Cande, 1999; Whittaker et al., 2007). The disagreement over the pre-42 Ma motion between Australia and East Antarctica leads to large differences in the predicted motion in the Western Ross Sea and near Ellsworth Land. Another weak link in this circuit is the pattern of sea floor spreading in the Tasman Sea, which is difficult to unravel because of the complex history of motion between Australia, the Lord Howe Rise, and Tasmania (Gaina et al., 1999). Resolution of these issues is required before a well-constrained history of East-West Antarctic motion back to the Late Cretaceous is obtained.
Preliminary experience using dynamic MRI at 3.0 Tesla for evaluation of soft tissue tumors.
Park, Michael Yong; Jee, Won-Hee; Kim, Sun Ki; Lee, So-Yeon; Jung, Joon-Yong
2013-01-01
We aimed to evaluate the use of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) at 3.0 T for differentiating benign from malignant soft tissue tumors. We also aimed to assess whether shorter DCE-MRI protocols are adequate, and to evaluate the effect of temporal resolution. DCE-MRI at 3.0 T with a 1-second temporal resolution in 13 patients with pathologically confirmed soft tissue tumors was analyzed. Visual assessment of time-signal curves, subtraction images, maximal relative enhancement at the first (maximal peak enhancement [Emax]/1) and second (Emax/2) minutes, Emax, the steepest slope calculated using various time intervals (5, 30, 60 seconds), and the start of dynamic enhancement were analyzed. The 13 tumors comprised seven benign and six malignant soft tissue neoplasms. Washout on time-signal curves was seen in three (50%) malignant tumors and one (14%) benign tumor. The most discriminating DCE-MRI parameter was the steepest slope calculated using 5-second intervals, followed by Emax/1 and Emax/2. All of the steepest slope values occurred within 2 minutes of the dynamic study. The start of dynamic enhancement did not show a significant difference, but no malignant tumor had a value greater than 14 seconds. The steepest slope and early relative enhancement have potential for differentiating benign from malignant soft tissue tumors. A short rather than long DCE-MRI protocol may be adequate for this purpose. The steepest slope parameters require a short temporal resolution, while the maximal peak enhancement parameter may be better suited to a longer temporal resolution.
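The steepest-slope parameter can be read off a time-signal curve as the largest enhancement change over a fixed window; a sketch with an invented curve (1-second sampling and a 5-second window, matching the most discriminating setting above, but not the study's exact definition):

```python
def steepest_slope(signal, dt_s=1.0, window_s=5.0):
    """Largest enhancement rate over a fixed window:
    max over i of (S[i + w] - S[i]) / window_s, with w = window_s / dt_s
    samples.  Longer windows smooth over, and can miss, a sharp upslope."""
    w = int(round(window_s / dt_s))
    return max((signal[i + w] - signal[i]) / window_s
               for i in range(len(signal) - w))

# invented time-signal curve sampled at 1-s resolution (arbitrary units)
curve = [0, 0, 0, 0, 0, 10, 20, 20, 20, 20, 20]
```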
Radiological Risk Assessment of Capstone Depleted Uranium Aerosols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, Fletcher; Roszell, Laurie E.; Daxon, Eric G.
2009-02-26
Assessment of the health risk from exposure to aerosols of depleted uranium (DU) is an important outcome of the Capstone aerosol studies, which established exposure ranges for personnel in armored combat vehicles perforated by DU munitions. Although the radiation exposure from DU is low, there is concern that DU deposited in the body may increase cancer rates. Radiation doses to various organs of the body resulting from the inhalation of DU aerosols measured in the Capstone studies were calculated using International Commission on Radiological Protection (ICRP) models. The organs and tissues with the highest calculated 50-yr committed equivalent doses were the lung and extrathoracic tissues (nose and nasal passages, pharynx, larynx, mouth and thoracic lymph nodes). Doses to the bone surface and kidney were about 5 to 10% of the doses to the extrathoracic tissues. The methodologies of the Interagency Steering Committee on Radiation Standards (ISCORS) were used for determining the whole-body cancer risk. Organ-specific risks were estimated using ICRP and U.S. Environmental Protection Agency (EPA) methodologies. Risks for crewmembers and first responders were determined for selected scenarios based on the time interval of exposure and on vehicle and armor type. The lung was the organ with the highest cancer mortality risk, accounting for about 97% of the risks summed over all organs. The highest mean lifetime risk of lung cancer for the scenario with the longest exposure time interval (2 h) was 0.42%. This risk is low compared with the natural or background risk of 7.35%. These risks can be significantly reduced by using an existing ventilation system (if operable) and by reducing personnel time in the vehicle immediately after perforation.
Giroud, Marie; Delpont, Benoit; Daubail, Benoit; Blanc, Christelle; Durier, Jérôme; Giroud, Maurice; Béjot, Yannick
2017-04-01
We evaluated temporal trends in stroke incidence between men and women to determine whether changes in the distribution of vascular risk factors have influenced sex differences in stroke epidemiology. Patients with first-ever stroke including ischemic stroke, spontaneous intracerebral hemorrhage, subarachnoid hemorrhage, and undetermined stroke between 1987 and 2012 were identified through the population-based registry of Dijon, France. Incidence rates were calculated for age groups, sex, and stroke subtypes. Sex differences and temporal trends (according to 5-year time periods) were evaluated by calculating incidence rate ratios (IRRs) with Poisson regression. Four thousand six hundred and fourteen patients with a first-ever stroke (53.1% women) were recorded. Incidence was lower in women than in men (112 versus 166 per 100 000/y; IRR, 0.68; P <0.001), especially in age group 45 to 84 years, and for both ischemic stroke and intracerebral hemorrhage. From 1987 to 2012, the lower incidence of overall stroke in women was stable (IRR ranging between 0.63 and 0.72 according to study periods). When considering stroke subtype, a slight increase in the incidence of ischemic stroke was observed in both men (IRR, 1.011; 95% confidence interval, 1.005-1.016; P =0.001) and women (IRR, 1.013; 95% confidence interval, 1.007-1.018; P =0.001). The sex gap in incidence remained unchanged in ischemic stroke and intracerebral hemorrhage. Conversely, the lower subarachnoid hemorrhage incidence in women vanished with time because of an increasing incidence. The sex gap in stroke incidence did not change with time except for subarachnoid hemorrhage. Despite lower rates, more women than men experience an incident stroke each year because of a longer life expectancy. © 2017 American Heart Association, Inc.
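The study's IRRs come from Poisson regression; for a single pair of rates, a crude Wald approximation on the log scale (a textbook simplification, not the paper's model) is:

```python
import math
from statistics import NormalDist

def incidence_rate_ratio(cases_a, py_a, cases_b, py_b, cl=0.95):
    """Crude incidence rate ratio with a Wald CI on the log scale:
    IRR = (cases_a/py_a) / (cases_b/py_b),
    se(ln IRR) = sqrt(1/cases_a + 1/cases_b).
    py_* are person-years at risk in each group."""
    irr = (cases_a / py_a) / (cases_b / py_b)
    se = math.sqrt(1.0 / cases_a + 1.0 / cases_b)
    z = NormalDist().inv_cdf(0.5 + cl / 2.0)
    return irr, irr * math.exp(-z * se), irr * math.exp(z * se)
```

Poisson regression generalizes this to several covariates (age group, study period) while reducing to the same crude ratio in the two-group case.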
NASA Astrophysics Data System (ADS)
Castro, J.; Martin-Rojas, I.; Medina-Cascales, I.; García-Tortosa, F. J.; Alfaro, P.; Insua-Arévalo, J. M.
2018-06-01
This paper on the Baza Fault provides the first palaeoseismic data from trenches in the central sector of the Betic Cordillera (S Spain), one of the most tectonically active areas of the Iberian Peninsula. With the palaeoseismological data we constructed time-stratigraphic OxCal models that yield probability density functions (PDFs) of individual palaeoseismic event timing. We analysed PDF overlap to quantitatively correlate the walls and site events into a single earthquake chronology. We assembled a surface-rupturing history of the Baza Fault for the last ca. 45,000 years, postulating six alternative surface-rupturing histories including 8-9 fault-wide earthquakes. We calculated fault-wide earthquake recurrence intervals using Monte Carlo simulation; this analysis yielded a 4750-5150 yr recurrence interval. Finally, we compared our results with the results from empirical relationships. Our results provide a basis for future analyses of other active normal faults in this region and will be essential for improving earthquake-probability assessments in Spain, where palaeoseismic data are scarce.
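A minimal sketch of the Monte Carlo recurrence-interval idea (the event-timing distributions below are uniform-window stand-ins, NOT the study's OxCal PDFs):

```python
import random

def recurrence_interval_mc(event_samplers, n_iter=20000, seed=1):
    """Mean recurrence interval by Monte Carlo: each iteration draws one date
    from every event's timing distribution, sorts the dates, and averages the
    inter-event gaps; the result is the mean over all iterations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_iter):
        dates = sorted(s(rng) for s in event_samplers)
        gaps = [b - a for a, b in zip(dates, dates[1:])]
        total += sum(gaps) / len(gaps)
    return total / n_iter

# stand-in timing distributions (uniform windows in years BP; NOT OxCal PDFs)
events = [lambda r, lo=lo, hi=hi: r.uniform(lo, hi)
          for lo, hi in [(43000, 45000), (32000, 35000), (26000, 28000),
                         (20000, 22000), (14000, 16000), (9000, 11000),
                         (5000, 7000), (1000, 3000)]]
mean_ri = recurrence_interval_mc(events, n_iter=2000)
```

Sampling the full distribution of gaps, rather than just their mean, would also yield the spread that underlies a range such as 4750-5150 yr.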
Lee, Mi Jung; Park, Jung Tak; Park, Kyoung Sook; Kwon, Young Eun; Oh, Hyung Jung; Yoo, Tae-Hyun; Kim, Yong-Lim; Kim, Yon Su; Yang, Chul Woo; Kim, Nam-Ho; Kang, Shin-Wook; Han, Seung Hyeok
2017-03-07
Residual kidney function can be assessed by simply measuring urine volume, calculating GFR using 24-hour urine collection, or estimating GFR using the proposed equation (eGFR). We aimed to investigate the relative prognostic value of these residual kidney function parameters in patients on dialysis. Using the database from a nationwide prospective cohort study, we compared the differential implications of the residual kidney function indices in 1946 patients on dialysis at 36 dialysis centers in Korea between August 1, 2008 and December 31, 2014. Residual GFR calculated using 24-hour urine collection was determined as the average of renal urea and creatinine clearance on the basis of 24-hour urine collection. eGFR-urea/creatinine and eGFR-β2-microglobulin were calculated from equations using serum urea and creatinine and β2-microglobulin, respectively. The primary outcome was all-cause death. During a mean follow-up of 42 months, 385 (19.8%) patients died. In multivariable Cox analyses, residual urine volume (hazard ratio, 0.96 per 0.1-L/d higher volume; 95% confidence interval, 0.94 to 0.98) and GFR calculated using 24-hour urine collection (hazard ratio, 0.98; 95% confidence interval, 0.95 to 0.99) were independently associated with all-cause mortality. In 1640 patients who had eGFR-β2-microglobulin data, eGFR-β2-microglobulin (hazard ratio, 0.98; 95% confidence interval, 0.96 to 0.99) was also significantly associated with all-cause mortality, as were residual urine volume (hazard ratio, 0.96 per 0.1-L/d higher volume; 95% confidence interval, 0.94 to 0.98) and GFR calculated using 24-hour urine collection (hazard ratio, 0.97; 95% confidence interval, 0.95 to 0.99). When each residual kidney function index was added to the base model, only urine volume improved the predictability for all-cause mortality (net reclassification index = 0.11, P = 0.01; integrated discrimination improvement = 0.01, P = 0.01).
Higher residual urine volume was significantly associated with a lower risk of death and exhibited a stronger association with mortality than GFR calculated using 24-hour urine collection and eGFR-urea/creatinine. These results suggest that determining residual urine volume may be beneficial for predicting survival in patients on dialysis. Copyright © 2017 by the American Society of Nephrology.
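The 24-hour-urine residual GFR described above is the mean of urea and creatinine clearances, each given by the standard clearance formula Cl = (U × V)/(P × t). A sketch with the unit bookkeeping made explicit (concentration and volume values below are illustrative):

```python
def renal_clearance_ml_min(urine_conc, urine_vol_ml, plasma_conc, hours=24.0):
    """Clearance = (U x V) / (P x t): urine and plasma concentrations of the
    same solute in the same units, urine volume in mL, t in minutes."""
    return (urine_conc * urine_vol_ml) / (plasma_conc * hours * 60.0)

def residual_gfr(cl_urea_ml_min, cl_creatinine_ml_min):
    """Residual GFR from a 24-h collection: the average of renal urea and
    creatinine clearances, as in the study above."""
    return (cl_urea_ml_min + cl_creatinine_ml_min) / 2.0
```

Averaging the two clearances is conventional because tubular secretion inflates creatinine clearance while reabsorption deflates urea clearance, so the errors partly cancel.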
NASA Astrophysics Data System (ADS)
Wright, Robyn; Thornberg, Steven M.
SEDIDAT is a series of compiled IBM-BASIC (version 2.0) programs that direct the collection, statistical calculation, and graphic presentation of particle settling velocity and equivalent spherical diameter for samples analyzed using the settling tube technique. The programs follow a menu-driven format that is easily understood by students and scientists with little previous computer experience. Settling velocity is measured directly (cm/sec) and also converted into Chi units. Equivalent spherical diameter (reported in Phi units) is calculated using a modified Gibbs equation for different particle densities. Input parameters, such as water temperature, settling distance, particle density, run time, and Phi/Chi interval, are changed easily at operator discretion. Optional output to a dot-matrix printer includes a summary of moment and graphic statistical parameters, a tabulation of individual and cumulative weight percents, a listing of major distribution modes, and cumulative and histogram plots of raw time, settling velocity, Chi, and Phi data.
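SEDIDAT's velocity-to-diameter conversion uses a modified Gibbs equation; as a simpler stand-in (not the program's equation), the sketch below uses Stokes' law, valid only for fine, slowly settling grains, together with the Phi transform:

```python
import math

def stokes_diameter_mm(w_cm_s, rho_s=2.65, rho_f=1.0, visc_poise=0.01, g=981.0):
    """Stokes' law in CGS units: w = g d^2 (rho_s - rho_f) / (18 eta), so
    d = sqrt(18 eta w / (g (rho_s - rho_f))).  Returns diameter in mm.
    Defaults: quartz density, water at ~20 C.  A stand-in for the modified
    Gibbs equation, which stays accurate for coarser grains."""
    d_cm = math.sqrt(18.0 * visc_poise * w_cm_s / (g * (rho_s - rho_f)))
    return 10.0 * d_cm

def phi_units(d_mm):
    """Phi scale used for grain size: Phi = -log2(diameter in mm)."""
    return -math.log2(d_mm)
```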
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, N.M.; Johnson, J.A.; Larsen, E.D.
1992-01-01
In-process ultrasonic sensing of welding allows detection of weld defects in real time. A noncontacting ultrasonic system is being developed to operate in a production environment. The principal components are a pulsed laser for ultrasound generation and an electromagnetic acoustic transducer (EMAT) for ultrasound reception. A PC-based data acquisition system determines the quality of the weld on a pass-by-pass basis. The laser/EMAT system interrogates the area in the weld volume where defects are most likely to occur. This area of interest is identified by computer calculations on a pass-by-pass basis using weld planning information provided by the off-line programmer. The absence of a signal above the threshold level in the computer-calculated time interval indicates a disruption of the sound path by a defect. The ultrasonic sensor system then provides an input signal to the weld controller about the defect condition. 8 refs.
Sartori, Elena; Ruzzi, Marco; Lawler, Ronald G; Turro, Nicholas J
2008-09-24
The kinetics of para-ortho conversion and nuclear spin relaxation of H2 in chloroform-d1 were investigated in the presence of nitroxides as paramagnetic catalysts. The back conversion from para-hydrogen (p-H2) to ortho-hydrogen (o-H2) was followed by NMR by recording the increase in the intensity of the signal of o-H2 at regular intervals of time. The nitroxides proved to be hundreds of times more effective at inducing relaxation among the spin levels of o-H2 than they are in bringing about transitions between p-H2 and the levels of o-H2. The value of the encounter distance d between H2 and the paramagnetic molecule, calculated from the experimental bimolecular conversion rate constant k0 using the Wigner theory of para-ortho conversion, agrees perfectly with that calculated from the experimental relaxivity R1 using the force-free diffusion theory of spin-lattice relaxation.
Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.
Murai, Yuki; Yotsumoto, Yuko
2016-01-01
When individuals are asked to reproduce intervals of stimuli that are intermixedly presented at various times, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
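The "optimal encoding" account of the central tendency is commonly modeled as a Gaussian Bayesian observer whose estimate is a precision-weighted average of the noisy measurement and the prior mean, so a noisier channel (e.g. sub-second vision) is pulled harder toward the mean. A sketch with invented numbers, not fitted to the study's data:

```python
def reproduced_interval(stimulus, prior_mean, sigma_sensory, sigma_prior):
    """Gaussian Bayesian observer: the estimate is a precision-weighted
    average of the noisy measurement and the prior mean, so noisier
    channels regress more strongly toward the mean of the distribution."""
    w = sigma_sensory ** 2 / (sigma_sensory ** 2 + sigma_prior ** 2)
    return w * prior_mean + (1 - w) * stimulus

# Illustrative numbers: a 0.9 s stimulus against a 0.6 s prior mean
noisy = reproduced_interval(0.9, 0.6, sigma_sensory=0.2, sigma_prior=0.1)
precise = reproduced_interval(0.9, 0.6, sigma_sensory=0.05, sigma_prior=0.1)
# Long intervals are underestimated, more so for the noisier channel
assert 0.6 < noisy < precise < 0.9
```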
Rukuni, Ruramayi; Bhattacharya, Sohinee; Murphy, Michael F; Roberts, David; Stanworth, Simon J; Knight, Marian
2016-05-01
Antenatal anemia is a major public health problem in the UK, yet there is limited high quality evidence for associated poor clinical outcomes. The objectives of this study were to estimate the incidence and clinical outcomes of antenatal anemia in a Scottish population. A retrospective cohort study of 80 422 singleton pregnancies was conducted using data from the Aberdeen Maternal and Neonatal Databank between 1995 and 2012. Antenatal anemia was defined as haemoglobin ≤ 10 g/dl during pregnancy. Incidence was calculated with 95% confidence intervals and compared over time using a chi-squared test for trend. Multivariable logistic regression was used to adjust for confounding variables. Results are presented as adjusted odds ratios with 95% confidence interval. The overall incidence of antenatal anemia was 9.3 cases/100 singleton pregnancies (95% confidence interval 9.1-9.5), decreasing from 16.9/100 to 4.1/100 singleton pregnancies between 1995 and 2012 (p < 0.001). Maternal anemia was associated with antepartum hemorrhage (adjusted odds ratio 1.26, 95% confidence interval 1.17-1.36), postpartum infection (adjusted odds ratio 1.89, 95% confidence interval 1.39-2.57), transfusion (adjusted odds ratio 1.87, 95% confidence interval 1.65-2.13) and stillbirth (adjusted odds ratio 1.42, 95% confidence interval 1.04-1.94), reduced odds of postpartum hemorrhage (adjusted odds ratio 0.92, 95% confidence interval 0.86-0.98) and low birthweight (adjusted odds ratio 0.77, 95% confidence interval 0.69-0.86). No other outcomes were statistically significant. This study shows the incidence of antenatal anemia is decreasing steadily within this Scottish population. However, given that anemia is a readily correctable risk factor for major causes of morbidity and mortality in the UK, further work is required to investigate appropriate preventive measures. © 2016 Nordic Federation of Societies of Obstetrics and Gynecology.
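The incidence figure with its 95% confidence interval can be reproduced with a normal approximation to the binomial; the case count below is back-calculated from the reported 9.3/100 rate and is an assumption, not a figure from the paper:

```python
import math

def incidence_per_100(cases, total):
    """Incidence per 100 pregnancies with a 95% normal-approximation CI."""
    p = cases / total
    se = math.sqrt(p * (1 - p) / total)
    return 100 * p, (100 * (p - 1.96 * se), 100 * (p + 1.96 * se))

# 7,479 cases is back-calculated from the reported 9.3/100 of 80,422
rate, (lo, hi) = incidence_per_100(7479, 80422)
assert 9.2 < rate < 9.4
assert 9.0 < lo < rate < hi < 9.6   # close to the reported 9.1-9.5
```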
Responses of heart rate and blood pressure to KC-135 hyper-gravity
NASA Technical Reports Server (NTRS)
Satake, Hirotaka; Matsunami, Ken'ichi; Reschke, Millard F.
1992-01-01
Many investigators have clarified the effects of hyper gravitational-inertial forces (G) on the cardiovascular system using centrifugal apparatus with a short rotating radius. We investigated the cardiovascular responses to KC-135 hyper-G flight with negligibly small angular velocity. Six normal, healthy subjects 29 to 40 years old (5 males and 1 female) took part in this experiment. Hyper gravitational-inertial force was generated by KC-135 hyper-G flight flown in a spiral path with a very long radius of 1.5 miles. Hyper-G of 1.8 +Gz was sustained for 3 minutes in each session, and every subject, seated in a chair, was exposed 5 times. The preliminary results for blood pressure and R-R interval are discussed. Exposure to 1.8 +Gz stress resulted in a remarkable increase of systolic and diastolic blood pressure, while the pulse pressure did not change and remained at the control level regardless of hyper-G exposure. These blood pressure results indicate an increase of resistance in the peripheral vessels when hyper-G was applied. The R-R interval was calculated from the ECG. It changed in all subjects, though not systematically, and became obviously shorter during the hyper-G period than during the 1 +Gz control period, although it varied widely in some cases. The coefficient of variation of the R-R interval was estimated to assess autonomic nerve activity, but no significant change was detectable.
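The coefficient of variation of the R-R interval used here as an autonomic index is a one-line calculation; the interval values below are invented for illustration:

```python
from statistics import mean, stdev

def rr_coefficient_of_variation(rr_ms):
    """Coefficient of variation (%) of R-R intervals, a simple summary
    index sometimes used to gauge autonomic nerve activity."""
    return 100.0 * stdev(rr_ms) / mean(rr_ms)

# Illustrative beat-to-beat intervals (ms): hyper-G shortens R-R intervals
control = [820, 850, 810, 870, 840]       # 1 +Gz control period
hyper_g = [640, 655, 645, 660, 650]       # 1.8 +Gz period
assert mean(hyper_g) < mean(control)      # higher heart rate under hyper-G
assert rr_coefficient_of_variation(hyper_g) < rr_coefficient_of_variation(control)
```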
Estimating fluvial wood discharge from timelapse photography with varying sampling intervals
NASA Astrophysics Data System (ADS)
Anderson, N. K.
2013-12-01
There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been devoted to monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased equal variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m³ for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. Proportions and variance were compared across sample intervals using bootstrap sampling to achieve equal n: each trial sampled n = 100 observations, was repeated 10,000 times and averaged, and all trials were then averaged to obtain an estimate for each sample interval.
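The equal-n bootstrap comparison described above can be sketched as follows (the counts are hypothetical; the study used n = 100 with 10,000 resamples, reduced here for speed):

```python
import random

def bootstrap_mean(sample, n=100, trials=1000, seed=1):
    """Resample each dataset to a common n so that estimates from the
    1, 5, 10 and 15 minute sampling schemes are directly comparable.
    (The study used 10,000 resamples; fewer are used here for speed.)"""
    rng = random.Random(seed)
    means = [sum(rng.choices(sample, k=n)) / n for _ in range(trials)]
    return sum(means) / trials

# Hypothetical wood counts per observation under two sampling schemes
one_min = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4]
fifteen_min = [3, 4, 2, 5, 3, 4]
est_1 = bootstrap_mean(one_min)
est_15 = bootstrap_mean(fifteen_min)
assert abs(est_1 - 3.8) < 0.2    # converges to the sample mean
assert abs(est_15 - 3.5) < 0.2
```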
High resolution data acquisition
Thornton, G.W.; Fuller, K.R.
1993-04-06
A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided by a clock pulse train and analog circuitry that generates a triangular wave synchronously with the pulse train (as shown in the patent diagram). The triangular wave has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter counts the clock pulse train during the interval to form a gross event interval time. A computer then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
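One plausible reading of the patent's combination step, sketched in code: the triangle wave's amplitude and slope sign pin down the phase within a clock period at the start and end of the event, and those fractions correct the gross counter value. The names, the 100 MHz clock, and the exact phase-recovery mapping are assumptions for illustration, not taken from the patent:

```python
CLOCK_PERIOD_NS = 10.0  # hypothetical 100 MHz clock

def phase_within_period(amplitude, slope, peak=1.0):
    """Recover the elapsed fraction of a clock period from the triangular
    wave: amplitude rises 0->peak in the first half-period and falls back
    in the second, so amplitude plus the slope sign fixes the phase."""
    if slope > 0:
        return 0.5 * amplitude / peak              # rising half: 0 .. 0.5
    return 0.5 + 0.5 * (peak - amplitude) / peak   # falling half: 0.5 .. 1

def event_interval(gross_counts, amp_start, slope_start, amp_end, slope_end):
    """Combine the counter's gross time with the two fine phase readings."""
    frac_start = phase_within_period(amp_start, slope_start)
    frac_end = phase_within_period(amp_end, slope_end)
    return (gross_counts + frac_end - frac_start) * CLOCK_PERIOD_NS

# Event spanning 42 full periods, starting 1/4 and ending 3/4 into a period
t_ns = event_interval(42, amp_start=0.5, slope_start=1, amp_end=0.5, slope_end=-1)
assert abs(t_ns - 42.5 * CLOCK_PERIOD_NS) < 1e-9
```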
12 CFR 1.4 - Calculation of limits.
Code of Federal Regulations, 2010 CFR
2010-01-01
... separately to the Type III and Type V securities held by a bank. (e) Limit on investment company holdings—(1... investment limits at that interval until further notice. (d) Calculation of Type III and Type V securities holdings—(1) General. In calculating the amount of its investment in Type III or Type V securities issued...
McNicholas, Colleen; Swor, Erin; Wan, Leping; Peipert, Jeffrey F
2017-06-01
The subdermal contraceptive implant and the 52-mg levonorgestrel intrauterine device are currently Food and Drug Administration approved for 3 and 5 years of use, respectively. Limited available data suggested both of these methods are effective beyond that time. Demonstration of prolonged effectiveness will improve the cost-effectiveness of the device, and potentially patient continuation and satisfaction. We sought to evaluate the effectiveness of the contraceptive implant and the 52-mg hormonal intrauterine device in women using the method for 2 years beyond the current Food and Drug Administration-approved duration. We initiated this ongoing prospective cohort study in January 2012. We are enrolling women using the contraceptive implant or 52-mg levonorgestrel intrauterine device for a minimum of 3 and 5 years, respectively (started intrauterine device in ≥2007 or implant in ≥2009). Demographic and reproductive health histories, as well as objective body mass index, were collected. Implant users were offered periodic venipuncture for analysis of serum etonogestrel levels. The primary outcome, unintended pregnancy rate, was calculated per 100 woman-years. We analyzed baseline demographic characteristics using the χ² test and Fisher exact test, and compared serum etonogestrel levels stratified by body mass index using the Kruskal-Wallis test. Implant users (n = 291) have contributed 444.0 woman-years of follow-up. There have been no documented pregnancies in implant users during the 2 years of postexpiration follow-up. Failure rates in the fourth and fifth years for the implant are calculated as 0 (1-sided 97.5% confidence interval, 0-1.48) per 100 woman-years at 4 years and 0 (1-sided 97.5% confidence interval, 0-2.65) per 100 woman-years at 5 years. Among 496 levonorgestrel intrauterine device users, 696.9 woman-years of follow-up have been completed. Two pregnancies have been reported. 
The failure rate in the sixth year of use of the levonorgestrel intrauterine device is calculated as 0.25 (95% confidence interval, 0.04-1.42) per 100 woman-years; the failure rate during the seventh year is 0.43 (95% confidence interval, 0.08-2.39) per 100 woman-years. Among implant users with serum etonogestrel results, the median etonogestrel level was 207.7 pg/mL (range 63.8-802.6 pg/mL) at the time of method expiration, 166.1 pg/mL (range 25.0-470.5 pg/mL) at the end of the fourth year, and 153.0 pg/mL (range 72.1-538.8 pg/mL) at the end of the fifth year. Median etonogestrel levels were compared by body mass index at each time point, and a statistical difference was noted at the end of 4 years of use, with overweight women having the highest serum etonogestrel level (195.9; range 25.0-450.5 pg/mL) when compared to normal-weight (178.9; range 87.0-463.7 pg/mL) and obese (137.9; range 66.0-470.5 pg/mL) women (P = .04). This study indicates that the contraceptive implant and 52-mg hormonal intrauterine device continue to be highly effective for at least 2 additional years of use. Serum etonogestrel evaluation demonstrates that median levels remain above the ovulation threshold of 90 pg/mL for women in all body mass index classes. Copyright © 2017 Elsevier Inc. All rights reserved.
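The zero-event confidence bounds quoted for the implant follow from the exact Poisson result that a one-sided 97.5% upper limit with no observed events is -ln(0.025) ≈ 3.69 events per observed exposure. A sketch (the exposure figures are back-calculated guesses, not reported values, and this need not be the study's exact method):

```python
import math

def rate_per_100_wy(events, woman_years):
    """Failure rate per 100 woman-years."""
    return 100.0 * events / woman_years

def zero_event_upper_97_5(woman_years):
    """One-sided 97.5% exact Poisson upper bound when no events occur:
    lambda_hi solves exp(-lambda_hi) = 0.025, scaled per 100 woman-years."""
    return 100.0 * -math.log(0.025) / woman_years

# Exposure values below are back-calculated guesses, not reported figures
assert abs(rate_per_100_wy(1, 400) - 0.25) < 1e-9       # ~sixth-year IUD rate
assert abs(zero_event_upper_97_5(249) - 1.48) < 0.01    # implant bound at 4 y
```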
Cucheb: A GPU implementation of the filtered Lanczos procedure
NASA Astrophysics Data System (ADS)
Aurentz, Jared L.; Kalantzis, Vassilis; Saad, Yousef
2017-11-01
This paper describes the software package Cucheb, a GPU implementation of the filtered Lanczos procedure for the solution of large sparse symmetric eigenvalue problems. The filtered Lanczos procedure uses a carefully chosen polynomial spectral transformation to accelerate convergence of the Lanczos method when computing eigenvalues within a desired interval. This method has proven particularly effective for eigenvalue problems that arise in electronic structure calculations and density functional theory. We compare our implementation against an equivalent CPU implementation and show that using the GPU can reduce the computation time by more than a factor of 10. Program Summary Program title: Cucheb Program Files doi:http://dx.doi.org/10.17632/rjr9tzchmh.1 Licensing provisions: MIT Programming language: CUDA C/C++ Nature of problem: Electronic structure calculations require the computation of all eigenvalue-eigenvector pairs of a symmetric matrix that lie inside a user-defined real interval. Solution method: To compute all the eigenvalues within a given interval a polynomial spectral transformation is constructed that maps the desired eigenvalues of the original matrix to the exterior of the spectrum of the transformed matrix. The Lanczos method is then used to compute the desired eigenvectors of the transformed matrix, which are then used to recover the desired eigenvalues of the original matrix. The bulk of the operations are executed in parallel using a graphics processing unit (GPU). Runtime: Variable, depending on the number of eigenvalues sought and the size and sparsity of the matrix. Additional comments: Cucheb is compatible with CUDA Toolkit v7.0 or greater.
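The spectral transformation the summary describes (mapping wanted eigenvalues to the exterior of the transformed spectrum so that a Krylov method converges to them first) can be illustrated on a toy diagonal matrix with a Chebyshev filter. This NumPy sketch is a didactic stand-in, not Cucheb's CUDA implementation:

```python
import numpy as np

def cheb_filtered_vector(A, v, c, a, b, degree):
    """Apply T_degree(q(A)) v via the three-term Chebyshev recurrence,
    where q(x) = a*(x - c)**2 + b maps unwanted eigenvalues into [-1, 1]
    and the target interval outside it, so T_degree grows only on the
    target eigenspace."""
    def q_apply(w):
        u = A @ w - c * w                  # (A - cI) w
        return a * (A @ u - c * u) + b * w # a (A - cI)^2 w + b w
    t_prev, t_cur = v, q_apply(v)          # T_0 v, T_1 v
    for _ in range(degree - 1):
        t_prev, t_cur = t_cur, 2.0 * q_apply(t_cur) - t_prev
    return t_cur

# Toy problem: eigenvalues 1..10, target interval [4.5, 6.5] around c = 5.5.
# (x - c)^2 lies in [2.25, 20.25] on the unwanted eigenvalues; map to [-1, 1].
A = np.diag(np.arange(1.0, 11.0))
a, b = 2.0 / 18.0, -1.25
w = cheb_filtered_vector(A, np.ones(10), 5.5, a, b, degree=10)
w /= np.linalg.norm(w)
outside = np.delete(w, [4, 5])      # components off eigenvalues 5 and 6
assert float(np.sum(outside ** 2)) < 1e-3   # filter isolates the target span
```

A Lanczos run on the filtered operator would then recover the eigenvectors for eigenvalues 5 and 6 first, which is the acceleration the paper exploits on the GPU.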
New advances in the partial-reflection-drifts experiment using microprocessors
NASA Technical Reports Server (NTRS)
Ruggerio, R. L.; Bowhill, S. A.
1982-01-01
Improvements to the partial reflection drifts experiment are completed. The results of the improvements include real time processing and simultaneous measurements of the D region with coherent scatter. Preliminary results indicate a positive correlation between drift velocities calculated by both methods during a two day interval. The possibility now exists for extended observations between partial reflection and coherent scatter. In addition, preliminary measurements could be performed between partial reflection and meteor radar to complete a comparison of methods used to determine velocities in the D region.
Dynamic baseline detection method for power data network service
NASA Astrophysics Data System (ADS)
Chen, Wei
2017-08-01
This paper proposes a dynamic-baseline traffic detection method based on historical traffic data for the power data network. The method uses Cisco's NetFlow acquisition tool to collect original historical traffic data from network elements at fixed intervals, using three dimensions of information: communication port, time, and traffic (number of bytes or number of packets). By filtering, removing deviating values, calculating the dynamic baseline value, and comparing the actual value with the baseline value, the method can detect whether the current network traffic is abnormal.
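The filter/remove-outliers/compare pipeline might look like the following sketch (the 1.8-sigma cutoff and the function names are assumptions for illustration, not from the paper):

```python
from statistics import mean, stdev

def dynamic_baseline(history, k=1.8):
    """Drop historical samples beyond k sigma, then return the baseline
    (mean of the remaining samples) and a tolerance band around it."""
    m, s = mean(history), stdev(history)
    kept = [x for x in history if abs(x - m) <= k * s]
    base = mean(kept)
    band = k * stdev(kept) if len(kept) > 1 else 0.0
    return base, band

def is_abnormal(value, base, band):
    """Flag traffic that falls outside the baseline tolerance band."""
    return abs(value - base) > band

history = [100, 102, 98, 101, 99, 300]  # bytes/s on one port; 300 is a spike
base, band = dynamic_baseline(history)
assert 95 < base < 105                  # the spike is excluded from the baseline
assert is_abnormal(300, base, band)
assert not is_abnormal(101, base, band)
```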
IBM system/360 assembly language interval arithmetic software
NASA Technical Reports Server (NTRS)
Phillips, E. J.
1972-01-01
Computer software designed to perform interval arithmetic is described. An interval is defined as the set of all real numbers between two given numbers, including or excluding one or both endpoints. Interval arithmetic consists of the various elementary arithmetic operations defined on the set of all intervals, such as interval addition, subtraction, union, etc. One of the main applications of interval arithmetic is in the area of error analysis of computer calculations. For example, it has been used successfully to compute bounds on rounding errors in the solution of linear algebraic systems, error bounds in numerical solutions of ordinary differential equations, as well as integral equations and boundary value problems. The described software enables users to implement algorithms of the type described in the references efficiently on the IBM System/360.
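Interval arithmetic as defined here is straightforward to sketch in a few lines; directed endpoint rounding, which a careful error-analysis implementation needs, is omitted:

```python
class Interval:
    """Closed interval [lo, hi]; each operation propagates the endpoints so
    the result encloses every possible value of the exact operation."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

a, b = Interval(1, 2), Interval(-3, 4)
assert ((a + b).lo, (a + b).hi) == (-2, 6)
assert ((a - b).lo, (a - b).hi) == (-3, 5)
assert ((a * b).lo, (a * b).hi) == (-6, 8)
```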
A cross-country Exchange Market Pressure (EMP) dataset.
Desai, Mohit; Patnaik, Ila; Felman, Joshua; Shah, Ajay
2017-06-01
The data presented in this article are related to the research article titled "An exchange market pressure measure for cross country analysis" (Patnaik et al. [1]). In this article, we present the dataset of Exchange Market Pressure (EMP) values for 139 countries along with their conversion factors, ρ (rho). Exchange Market Pressure, expressed as a percentage change in the exchange rate, measures the change in the exchange rate that would have taken place had the central bank not intervened. The conversion factor ρ can be interpreted as the change in the exchange rate associated with $1 billion of intervention. Estimates of the conversion factor ρ allow us to calculate a monthly time series of EMP for 139 countries. Additionally, the dataset contains the 68% confidence interval (high and low values) for the point estimates of ρ. Using the standard errors of the estimates of ρ, we obtain one-sigma intervals around the mean estimates of EMP values. These values are also reported in the dataset.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatt, M.H.; Snow, B.J.; Martin, W.R.
1991-06-01
The authors performed sequential positron emission tomography scans with 6-(18F)fluoro-L-dopa in 9 patients with idiopathic parkinsonism and 7 age-matched normal control subjects to compare changes in the nigrostriatal dopaminergic pathway over time. The mean interval between the scans was 3.3 years for the group with idiopathic parkinsonism and 3.9 years for the control subjects. The scans were analyzed by calculating the ratio of striatal to background radioactivity. Both groups showed statistically significant reductions of striatal uptake over the interval. The rate of decrease was almost identical in each group (p = 0.6). They infer that the usual rate of loss of integrity of the dopaminergic nigrostriatal pathway in patients with idiopathic parkinsonism is slow and the rate of change between the two groups was comparable.
Measuring the EMS patient access time interval and the impact of responding to high-rise buildings.
Morrison, Laurie J; Angelini, Mark P; Vermeulen, Marian J; Schwartz, Brian
2005-01-01
To measure the patient access time interval and characterize its contribution to the total emergency medical services (EMS) response time interval; to compare the patient access time intervals for patients located three or more floors above ground with those less than three floors above or below ground, and specifically in the apartment subgroup; and to identify barriers that significantly impede EMS access to patients in high-rise apartments. An observational study of all patients treated by an emergency medical technician-paramedic (EMT-P) crew was conducted using a trained independent observer to collect time intervals and identify potential barriers to access. Of 118 observed calls, 25 (21%) originated from patients three or more floors above ground. The overall median and 90th percentile (95% confidence interval) patient access time intervals were 1.61 (1.27, 1.91) and 3.47 (3.08, 4.05) minutes, respectively. The median interval was 2.73 (2.22, 3.03) minutes among calls from patients located three or more stories above ground, compared with 1.25 (1.07, 1.55) minutes among those at lower levels. The patient access time interval represented 23.5% of the total EMS response time interval among calls originating less than three floors above or below ground and 32.2% among those located three or more stories above ground. The most frequently encountered barriers to access included security code entry requirements, lack of directional signs, and inability to fit the stretcher into the elevator. The patient access time interval is significantly long and represents a substantial component of the total EMS response time interval, especially among ambulance calls originating three or more floors above ground. A number of barriers appear to contribute to delayed paramedic access.
Likelihood-based confidence intervals for estimating floods with given return periods
NASA Astrophysics Data System (ADS)
Martins, Eduardo Sávio P. R.; Clarke, Robin T.
1993-06-01
This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the likelihood-based confidence limits were wider than those from the central limit theorem, and the central-limit-theorem limits failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
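As a simple stand-in for the paper's profile-likelihood intervals, the sketch below estimates a T-year Gumbel flood quantile by the method of moments and attaches a percentile-bootstrap confidence interval; the method and all numbers are illustrative, not the authors':

```python
import math
import random

def gumbel_quantile(mu, beta, T):
    """T-year flood: the (1 - 1/T) quantile of the Gumbel distribution."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

def fit_gumbel_moments(xs):
    """Method-of-moments Gumbel fit (simpler than maximum likelihood)."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    beta = s * math.sqrt(6.0) / math.pi
    return m - 0.5772156649 * beta, beta

def bootstrap_ci(xs, T, trials=500, alpha=0.05, seed=7):
    """Percentile-bootstrap CI for the T-year quantile."""
    rng = random.Random(seed)
    qs = sorted(
        gumbel_quantile(*fit_gumbel_moments(rng.choices(xs, k=len(xs))), T)
        for _ in range(trials))
    return qs[int(trials * alpha / 2)], qs[int(trials * (1 - alpha / 2)) - 1]

# Standard Gumbel: the 100-year quantile is -ln(-ln(0.99)) ~ 4.600
assert abs(gumbel_quantile(0.0, 1.0, 100) - 4.6001) < 1e-3
rng = random.Random(1)
sample = [-math.log(-math.log(rng.random())) for _ in range(200)]
lo, hi = bootstrap_ci(sample, 100)
assert lo < gumbel_quantile(*fit_gumbel_moments(sample), 100) < hi
```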
Nocturnal Oviposition Behavior of Forensically Important Diptera in Central England.
Barnes, Kate M; Grace, Karon A; Bulling, Mark T
2015-11-01
Timing of oviposition on a corpse is a key factor in entomologically based minimum postmortem interval (mPMI) calculations. However, there is considerable variation in the nocturnal oviposition behavior of blow flies reported in the research literature. This study investigated nocturnal oviposition in central England for the first time, over 25 trials from 2011 to 2013. Liver-baited traps were placed in an urban location during control (diurnal) and nocturnal periods, and environmental conditions were recorded during each 5-h trial. No nocturnal activity or oviposition was observed during the course of the study, indicating that nocturnal oviposition is highly unlikely in central England. © 2015 American Academy of Forensic Sciences.
[Forensic entomology exemplified by a homicide. A combined stain and postmortem time analysis].
Benecke, M; Seifert, B
1999-01-01
The combined analysis of both ant and blow fly evidence recovered from a corpse, and from the boot of a suspect, suggested that an assumed scenario in a high profile murder case was likely to be true. The ants (Lasius fuliginosus) were used as classical crime scene stains that linked the suspect to the scene. Blow fly maggots (Calliphora sp.) helped to determine the postmortem interval (PMI), with the calculated PMI overlapping with the assumed time of the killing. In the trial, the results of the medico-legal analysis of the insects were understood to be crucial scientific evidence, and the suspect was sentenced to 8 years in prison.
Measuring the RC time constant with Arduino
NASA Astrophysics Data System (ADS)
Pereira, N. S. A.
2016-11-01
In this work we use the Arduino UNO R3 open source hardware platform to assemble an experimental apparatus for the measurement of the time constant of an RC circuit. With adequate programming, the Arduino is used as a signal generator, a data acquisition system and a basic signal visualisation tool. Theoretical calculations are compared with direct observations from an analogue oscilloscope. Data processing and curve fitting are performed on a spreadsheet. The results obtained for the six RC test circuits are within the expected interval of values defined by the tolerance of the components. The hardware and software prove to be adequate for the proposed measurements and therefore adaptable to a laboratory teaching and learning context.
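The spreadsheet analysis can be mimicked by reading the time constant off the 63.2% charge point in Arduino-style samples; a simulated sketch (component values and sampling step are illustrative, not the paper's):

```python
import math

def rc_charge(t, v0, tau):
    """Charging capacitor: v(t) = V0 * (1 - exp(-t / tau))."""
    return v0 * (1 - math.exp(-t / tau))

def tau_from_63_percent(times, voltages, v0):
    """Estimate tau as the first sample time where v reaches 63.2% of V0,
    the usual textbook read-off applied to sampled data."""
    target = v0 * (1 - math.exp(-1))
    for t, v in zip(times, voltages):
        if v >= target:
            return t
    return None

tau_true = 0.010                       # 10 ms circuit (illustrative)
ts = [i * 0.0005 for i in range(100)]  # 0.5 ms sampling step
vs = [rc_charge(t, 5.0, tau_true) for t in ts]
tau_est = tau_from_63_percent(ts, vs, 5.0)
assert abs(tau_est - tau_true) <= 0.0005   # within one sampling step
```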
Modulation of human time processing by subthalamic deep brain stimulation.
Wojtecki, Lars; Elben, Saskia; Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons
2011-01-01
Timing in the range of seconds, referred to as interval timing, is crucial for cognitive operations and conscious time processing. According to recent models of interval timing, basal ganglia (BG) oscillatory loops are involved in time interval recognition. Parkinson's disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions depending on stimulation frequency by affecting subcortical-cortical oscillatory loops. Thus, for the understanding of BG involvement in interval timing it is of interest whether STN-DBS can modulate timing in a frequency-dependent manner by interference with oscillatory time recognition processes. We examined production and reproduction of 5 and 15 second intervals and millisecond timing in a double-blind, randomised, within-subject repeated-measures design of 12 PD patients applying no, 10-Hz- and ≥ 130-Hz-STN-DBS compared to healthy controls. We found under(re-)production of the 15-second interval and a significant enhancement of this under(re-)production by 10-Hz-stimulation compared to no stimulation, ≥ 130-Hz-STN-DBS and controls. Millisecond timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of the BG in general and of the STN in particular in the cognitive representation of time intervals in the range of multiple seconds.
Sood, Anshuman; Hakim, David N; Hakim, Nadey S
2016-04-01
The prevalence of obesity is increasing rapidly and globally, yet systematic reviews on this topic are scarce. Our meta-analysis and systematic review aimed to assess how obesity affects 5 postoperative outcomes: biopsy-proven acute rejection, patient death, allograft loss, type 2 diabetes mellitus after transplant, and delayed graft function. We evaluated peer-reviewed literature from 22 medical databases. Studies were included if they were conducted in accordance with the Meta-analysis of Observational Studies in Epidemiology criteria, only examined postoperative outcomes in adult patients, only examined the relation between recipient obesity at time of transplant and our 5 postoperative outcomes, and had a minimum score of > 5 stars on the Newcastle-Ottawa scale for nonrandomized studies. Reliable conclusions were ensured by having our studies examined against 2 internationally known scoring systems. Obesity was defined in accordance with the World Health Organization as having a body mass index of > 30 kg/m². All obese recipients were compared versus "healthy" recipients (body mass index of 18.5-24.9 kg/m²). Hazard ratios were calculated for biopsy-proven acute rejection, patient death, allograft loss, and type 2 diabetes mellitus after transplant. An odds ratio was calculated for delayed graft function. We assessed 21 retrospective observational studies in our meta-analysis (N = 241 381 patients). In obese transplant recipients, hazard ratios were 1.51 (95% confidence interval, 1.24-1.78) for presence of biopsy-proven acute rejection, 1.19 (95% confidence interval, 1.10-1.31) for patient death, 1.54 (95% confidence interval, 1.38-1.68) for allograft loss, and 1.01 (95% confidence interval, 0.98-1.07) for development of type 2 diabetes mellitus. The odds ratio for delayed graft function was 1.81 (95% confidence interval, 1.51-2.13). 
Our meta-analysis clearly demonstrated greater risks for obese renal transplant recipients and poorer postoperative outcomes with obesity. We confidently recommend renal transplant candidates seek medically supervised weight loss before transplant.
New precession expressions, valid for long time intervals
NASA Astrophysics Data System (ADS)
Vondrák, J.; Capitaine, N.; Wallace, P.
2011-10-01
Context. The present IAU model of precession, like its predecessors, is given as a set of polynomial approximations of various precession parameters intended for high-accuracy applications over a limited time span. Earlier comparisons with numerical integrations have shown that this model is valid only for a few centuries around the basic epoch, J2000.0, while for more distant epochs it rapidly diverges from the numerical solution. In our preceding studies we also obtained preliminary developments for the precessional contribution to the motion of the equator: coordinates X,Y of the precessing pole and precession parameters ψA,ωA, suitable for use over long time intervals. Aims: The goal of the present paper is to obtain upgraded developments for various sets of precession angles that would fit modern observations near J2000.0 and at the same time fit numerical integration of the motions of solar system bodies on scales of several thousand centuries. Methods: We used the IAU 2006 solutions to represent the precession of the ecliptic and of the equator close to J2000.0 and, for more distant epochs, a numerical integration using the Mercury 6 package and solutions by Laskar et al. (1993, A&A, 270, 522) with upgraded initial conditions and constants to represent the ecliptic, and general precession and obliquity, respectively. From them, different precession parameters were calculated in the interval ± 200 millennia from J2000.0, and analytical expressions are found that provide a good fit for the whole interval. Results: Series for the various precessional parameters, comprising a cubic polynomial plus from 8 to 14 periodic terms, are derived that allow precession to be computed with an accuracy comparable to IAU 2006 around the central epoch J2000.0, a few arcseconds throughout the historical period, and a few tenths of a degree at the ends of the ± 200 millennia time span. 
Computer algorithms are provided that compute the ecliptic and mean equator poles and the precession matrix. The Appendix containing the computer code is available in electronic form at http://www.aanda.org
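The series form described above, a cubic polynomial plus 8 to 14 periodic terms, can be sketched as follows. This is a generic evaluator of that functional form; the coefficients in the usage example are placeholders for illustration, not the published values from the paper's Appendix.

```python
import math

def precession_series(t, poly, periodic):
    """Evaluate one precession parameter in the form used by the paper:
    a cubic polynomial in time plus a sum of periodic terms.

    t        -- time from J2000.0 (e.g., in centuries)
    poly     -- (c0, c1, c2, c3) polynomial coefficients
    periodic -- iterable of (a, b, P): adds a*cos(2*pi*t/P) + b*sin(2*pi*t/P)
    """
    value = poly[0] + poly[1] * t + poly[2] * t**2 + poly[3] * t**3
    for a, b, period in periodic:
        w = 2.0 * math.pi * t / period
        value += a * math.cos(w) + b * math.sin(w)
    return value
```

At t = 0 the periodic terms reduce to their cosine amplitudes, so the value is the polynomial constant plus the sum of the a-coefficients.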
NASA Technical Reports Server (NTRS)
Brabbs, T. A.; Robertson, T. F.
1986-01-01
Ignition delay data were recorded for three methane-oxygen-argon mixtures (phi = 0.5, 1.0, 2.0) for the temperature range 1500 to 1920 K. Quiet pressure traces enabled us to obtain delay times for the start of the experimental pressure rise. These times were in good agreement with those obtained from the flame band emission at 3700 A. The data correlated well with the oxygen and methane dependence of Lifshitz, but showed a much stronger temperature dependence (phi = 0.5, delta E = 51.9; phi = 1.0, delta E = 58.8; phi = 2.0, delta E = 58.7 kcal). The effect of probe location on the delay time measurement was studied. It appears that the probe located 83 mm from the reflecting surface measured delay times which may not be related to the initial temperature and pressure. It was estimated that for a probe located 7 mm from the reflecting surface, the measured delay time would be about 10 microseconds too short, and it was suggested that delay times less than 100 microseconds should not be used. The ignition period was defined as the time interval between the start of the experimental pressure rise and 50 percent of the ignition pressure. This time interval was measured for three gas mixtures and found to be similar (40 to 60 microseconds) for phi = 1.0 and 0.5 but much longer (100 to 120 microseconds) for phi = 2.0. It was suggested that the ignition period would be very useful to the kinetic modeler in judging the agreement between experimental and calculated delay times.
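The reported activation energies imply an Arrhenius-type temperature dependence of the ignition delay, tau proportional to A*exp(E_a/(R*T)). The sketch below shows only how such a correlation scales between two temperatures for a fixed mixture; the full Lifshitz-style correlation also carries methane and oxygen concentration dependences, which are omitted here.

```python
import math

R_KCAL = 1.987e-3  # gas constant in kcal/(mol*K)

def delay_ratio(t1_k, t2_k, e_a_kcal):
    """Ratio tau(T1)/tau(T2) for an Arrhenius-type ignition-delay
    correlation tau ~ A*exp(E_a/(R*T)); the pre-exponential and
    concentration terms cancel for a fixed mixture."""
    return math.exp(e_a_kcal / R_KCAL * (1.0 / t1_k - 1.0 / t2_k))
```

With E_a = 58.8 kcal (the phi = 1.0 value above), the delay at 1500 K comes out orders of magnitude longer than at 1920 K, which is the strong temperature dependence the abstract reports.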
NASA Astrophysics Data System (ADS)
Lokoshchenko, A.; Teraud, W.
2018-04-01
This work describes an experimental study of the creep of cylindrical tensile test specimens made of aluminum alloy D16T at a constant temperature of 400°C. The study examined necking at different values of the initial tensile stress. A purpose-developed noncontacting measuring system allowed us to observe variations in the specimen shape and to estimate the true stress at various times. Based on the experimental data obtained, several criteria were proposed for identifying the point in time at which necking occurs (the necking point). Calculations were carried out for various values of the parameters in these criteria. The relative interval of deformation time over which the test specimen stretches uniformly was also determined.
The Prediction of the Motion of Atens, Apollos and Amors over Long Intervals of Time
NASA Astrophysics Data System (ADS)
Wlodarczyk, I.
2002-01-01
The equations of motion of 930 Atens, Apollos and Amors (AAA) were integrated 300,000 years forward using the RA15 method of Everhart (Everhart, 1974). The Osterwinter model of the Solar System was used (Osterwinter and Cohen, 1972). The differences in mean anomaly between unchanged and changed orbits were calculated. The changed orbits were constructed by adding to, or subtracting from, the starting orbital elements, one after the other, the errors of determination of the orbital elements. Computations were stopped when the differences in mean anomaly exceeded 360 deg. In almost all cases, after about 1000 years of forward or backward integration, the differences in mean anomaly between neighboring orbits grow rapidly. This means that it is impossible to predict the behavior of the asteroids beyond this time, which I have called the time of stability.
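The stopping criterion described above (halt once the mean-anomaly difference between the nominal and perturbed orbit exceeds 360 degrees) can be sketched as a simple scan over the integrated divergence history; the function and argument names are illustrative.

```python
def time_of_stability(times, delta_mean_anomaly, threshold_deg=360.0):
    """Return the first epoch at which the mean-anomaly difference
    between an unchanged and a changed orbit exceeds the threshold,
    or None if it never does within the integrated span."""
    for t, dm in zip(times, delta_mean_anomaly):
        if abs(dm) > threshold_deg:
            return t
    return None
```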
On reliable control system designs. Ph.D. Thesis; [actuators
NASA Technical Reports Server (NTRS)
Birdwell, J. D.
1978-01-01
A mathematical model for use in the design of reliable multivariable control systems is discussed with special emphasis on actuator failures and necessary actuator redundancy levels. The model consists of a linear time invariant discrete time dynamical system. Configuration changes in the system dynamics are governed by a Markov chain that includes transition probabilities from one configuration state to another. The performance index is a standard quadratic cost functional, over an infinite time interval. The actual system configuration can be deduced with a one step delay. The calculation of the optimal control law requires the solution of a set of highly coupled Riccati-like matrix difference equations. Results can be used for off-line studies relating the open loop dynamics, required performance, actuator mean time to failure, and functional or identical actuator redundancy, with and without feedback gain reconfiguration strategies.
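The "highly coupled Riccati-like matrix difference equations" mentioned above can be illustrated, in a much-simplified setting, by their scalar fixed-point iteration. This is a sketch under assumed dynamics: the system here is scalar, the Markov mode is taken as directly observed (the paper assumes a one-step observation delay), and the numbers in the test model a healthy mode versus a failed-actuator mode.

```python
def coupled_riccati(a, b, p, q, r, iters=500):
    """Fixed-point iteration of the coupled Riccati equations for a
    scalar jump-linear system x+ = a[i]*x + b[i]*u with Markov
    transition matrix p[i][j] and stage cost q*x^2 + r*u^2.
    Returns per-mode cost coefficients P[i] and feedback gains K[i]."""
    n = len(a)
    P = [q] * n
    for _ in range(iters):
        # Expected next-step cost, averaged over mode transitions.
        E = [sum(p[i][j] * P[j] for j in range(n)) for i in range(n)]
        P = [q + a[i]**2 * E[i]
             - (a[i] * b[i] * E[i])**2 / (r + b[i]**2 * E[i])
             for i in range(n)]
    K = [a[i] * b[i] * E[i] / (r + b[i]**2 * E[i]) for i in range(n)]
    return P, K
```

In the failed-actuator mode (b = 0) the gain is necessarily zero, and the cost coefficient settles at the fixed point of the uncontrolled recursion, mirroring the paper's link between actuator redundancy and achievable performance.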
Department of Defense Precise Time and Time Interval program improvement plan
NASA Technical Reports Server (NTRS)
Bowser, J. R.
1981-01-01
The United States Naval Observatory is responsible for ensuring uniformity in precise time and time interval (PTTI) operations, including measurements, for establishing overall DOD requirements for time and time interval, and for accomplishing objectives that require precise time and time interval at minimum cost. An overview of the objectives, the approach to the problem, the schedule, and a status report are presented, including significant findings on organizational relationships, current directives, principal PTTI users, and future requirements as currently identified by the users.
On the Time Course of Vocal Emotion Recognition
Pell, Marc D.; Kotz, Sonja A.
2011-01-01
How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically-anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses looked at how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing. PMID:22087275
Walker, J J; Brewster, D H; Colhoun, H M; Fischbacher, C M; Lindsay, R S; Wild, S H
2013-07-01
The objective of this study was to use Scottish national data to assess the influence of type 2 diabetes on (1) survival (overall and cause-specific) in multiple time intervals after diagnosis of colorectal cancer and (2) cause of death. Data from the Scottish Cancer Registry were linked to data from a population-based national diabetes register. All people in Scotland diagnosed with non-metastatic cancer of the colon or rectum in 2000-2007 were included. The effect of pre-existing type 2 diabetes on survival over four discrete time intervals (<1, 1-2, 3-5 and >5 years) after cancer diagnosis was assessed by Cox regression. Cumulative incidence functions were calculated representing the respective probabilities of death from the competing causes of colorectal cancer, cardiovascular disease, other cancers and any other cause. Data were available for 19,505 people with colon or rectal cancer (1,957 with pre-existing diabetes). Cause-specific mortality analyses identified a stronger association between diabetes and cardiovascular disease mortality than that between diabetes and cancer mortality. Beyond 5 years after colon cancer diagnosis, diabetes was associated with a detrimental effect on all-cause mortality after adjustment for age, socioeconomic status and cancer stage (HR [95% CI]: 1.57 [1.19, 2.06] in men; 1.84 [1.36, 2.50] in women). For patients with rectal cancer, diabetes was not associated with differential survival in any time interval. Poorer survival observed for colon cancer associated with type 2 diabetes in Scotland may be explained by higher mortality from causes other than cancer.
Minimax rational approximation of the Fermi-Dirac distribution
Moussa, Jonathan E.
2016-10-27
Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ϵ^-1)) poles to achieve an error tolerance ϵ at temperature β^-1 over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and to replace Δ with Δ_occ, the occupied energy interval. This is particularly beneficial when Δ ≫ Δ_occ, such as in electronic structure calculations that use a large basis set.
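The idea of representing the Fermi-Dirac function by a sum over poles can be illustrated with the classical (and slowly converging) Matsubara expansion, f(x) = 1/2 − 2x Σ_k 1/(x² + ((2k−1)π)²). This is only an illustration of the pole-expansion idea; the paper's minimax construction reaches a given tolerance with orders of magnitude fewer poles.

```python
import math

def fermi_exact(x):
    """Fermi-Dirac occupation as a function of (E - mu)/kT."""
    return 1.0 / (1.0 + math.exp(x))

def fermi_pole_sum(x, n_poles):
    """Truncated Matsubara pole expansion of the Fermi function:
    f(x) ~ 1/2 - 2x * sum_{k=1}^{N} 1/(x^2 + ((2k-1)*pi)^2).
    Convergence is only O(1/N), which is why minimax-optimized
    pole placements are so much more economical."""
    s = sum(1.0 / (x * x + ((2 * k - 1) * math.pi) ** 2)
            for k in range(1, n_poles + 1))
    return 0.5 - 2.0 * x * s
```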
Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan
2016-01-01
Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms with the premise that the zero velocity interval (ZVI) should be detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The novel algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing the function relationships between the thresholds and the gait frequency; then, the adaptive adjustment of thresholds with gait frequency is realized and improves the ZVI detection precision. To put it into practice, a ZVI detection experiment is carried out; the result shows that compared with the traditional fixed threshold ZVI detection method, the adaptive ZVI detection algorithm can effectively reduce the false and missed detection rate of ZVI; this indicates that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds are carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that the ZVI detected by the adaptive ZVI detection algorithm for pedestrian trajectory calculation can achieve better performance. PMID:27669266
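A fixed-threshold ZVI detector of the kind the paper improves on can be sketched as a moving-variance test on the accelerometer magnitude. This is only the baseline: the paper's contribution, adapting the threshold to the gait frequency extracted by SPWVD-RMFI, is not reproduced here, and the window and threshold values below are illustrative.

```python
import numpy as np

def detect_zvi(accel_norm, window=15, threshold=0.5):
    """Flag zero-velocity samples where the moving variance of the
    accelerometer magnitude falls below a fixed threshold."""
    n = len(accel_norm)
    zvi = np.zeros(n, dtype=bool)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        zvi[i] = np.var(accel_norm[lo:hi]) < threshold
    return zvi
```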
Quality assessment of butter cookies applying multispectral imaging
Andresen, Mette S; Dissing, Bjørn S; Løje, Hanne
2013-01-01
A method for characterizing butter cookie quality by assessing surface browning and water content from multispectral images is presented. Based on evaluations of the browning of butter cookies, cookies were manually divided into groups. From this categorization, reference values were calculated for a statistical prediction model correlating multispectral images with a browning score. The browning score is calculated as a function of oven temperature and baking time and is presented as a quadratic response surface. The investigated process window was 4-16 min and 160-200°C in a forced-convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis showed that the most significant wavelengths for browning prediction were in the interval 400-700 nm, while the wavelengths significant for water prediction were primarily located in the near-infrared spectrum. The water prediction model was found to estimate the average water content with an absolute error of 0.22%. From the images it was also possible to follow the browning and drying propagation from the cookie edge toward the center. PMID:24804036
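A quadratic response surface in oven temperature T and baking time t can be fit by ordinary least squares on the six monomials of the quadratic form. This is a generic sketch of that functional form, not the paper's fitted coefficients.

```python
import numpy as np

def fit_quadratic_surface(temp, time, score):
    """Least-squares fit of a quadratic response surface
    B(T, t) = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t,
    the functional form used for the browning score.
    Returns the six coefficients."""
    X = np.column_stack([np.ones_like(temp), temp, time,
                         temp**2, time**2, temp * time])
    coef, *_ = np.linalg.lstsq(X, score, rcond=None)
    return coef

def predict_browning(coef, temp, time):
    """Evaluate the fitted surface at (temp, time)."""
    return (coef[0] + coef[1] * temp + coef[2] * time
            + coef[3] * temp**2 + coef[4] * time**2
            + coef[5] * temp * time)
```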
Weatherbee, Courtney R.; Pechal, Jennifer L.; Stamper, Trevor; Benbow, M. Eric
2017-01-01
Common forensic entomology practice has been to collect the largest Diptera larvae from a scene and use published developmental data, with temperature data from the nearest weather station, to estimate larval development time and post-colonization intervals (PCIs). To evaluate the accuracy of PCI estimates among Calliphoridae species and spatially distinct temperature sources, larval communities and ambient air temperature were collected at replicate swine carcasses (N = 6) throughout decomposition. Expected accumulated degree hours (ADH) associated with Cochliomyia macellaria and Phormia regina third instars (presence and length) were calculated using published developmental data sets. Actual ADH ranges were calculated using temperatures recorded from multiple sources at varying distances (0.90 m–7.61 km) from the study carcasses: individual temperature loggers at each carcass, a local weather station, and a regional weather station. Third instars greatly varied in length and abundance. The expected ADH range for each species successfully encompassed the average actual ADH for each temperature source, but overall under-represented the range. For both calliphorid species, weather station data were associated with more accurate PCI estimates than temperature loggers associated with each carcass. These results provide an important step towards improving entomological evidence collection and analysis techniques, and developing forensic error rates. PMID:28375172
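Accumulated degree hours (ADH), the quantity at the heart of the comparison above, are the hourly temperature excess over a developmental threshold, summed across the interval. A minimal sketch; the 10 °C base temperature is illustrative, since the threshold is species-specific and not given in the abstract.

```python
def accumulated_degree_hours(temps_c, base_temp=10.0):
    """Sum of (T - base) over hourly temperature readings, counting
    only hours in which T exceeds the developmental base temperature."""
    return sum(max(0.0, T - base_temp) for T in temps_c)
```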
NASA Astrophysics Data System (ADS)
Fatkullin, M. N.; Solodovnikov, G. K.; Trubitsyn, V. M.
2004-01-01
The results of developing an empirical model of the parameters of radio signals propagating in the inhomogeneous ionosphere at middle and high latitudes are presented. As the initial data we took the homogeneous data obtained from observations carried out at the Antarctic ``Molodezhnaya'' station by the method of continuous transmission probing of the ionosphere with signals of the satellite radionavigation ``Transit'' system at coherent frequencies of 150 and 400 MHz. The data relate to the summer season in the Southern hemisphere of the Earth in 1988-1989, during high (F > 160) solar activity. The behavior of the following statistical characteristics of the radio signal parameters was analyzed: (a) the correlation interval of the amplitude fluctuations at a frequency of 150 MHz (τ_kA); (b) the correlation interval of the difference-phase fluctuations (τ_kϕ); and (c) the parameters characterizing the frequency spectra of the amplitude (P_A) and phase (P_ϕ) fluctuations. A third-degree polynomial was used for modeling the propagation parameters. For all the propagation parameters indicated above, the coefficients of the third-degree polynomial were calculated as functions of local time and magnetic activity. The results of the calculations are tabulated.
Zilg, B; Bernard, S; Alkass, K; Berg, S; Druid, H
2015-09-01
Analysis of potassium concentration in the vitreous fluid of the eye is frequently used by forensic pathologists to estimate the postmortem interval (PMI), particularly when other methods commonly used in the early phase of an investigation can no longer be applied. The postmortem rise in vitreous potassium has been recognized for several decades and is readily explained by a diffusion of potassium from surrounding cells into the vitreous fluid. However, there is no consensus regarding the mathematical equation that best describes this increase. The existing models assume a linear increase, but different slopes and starting points have been proposed. In this study, vitreous potassium levels, and a number of factors that may influence these levels, were examined in 462 cases with known postmortem intervals that ranged from 2h to 17 days. We found that the postmortem rise in potassium followed a non-linear curve and that decedent age and ambient temperature influenced the variability by 16% and 5%, respectively. A long duration of agony and a high alcohol level at the time of death contributed less than 1% variability, and evaluation of additional possible factors revealed no detectable impact on the rise of vitreous potassium. Two equations were subsequently generated, one that represents the best fit of the potassium concentrations alone, and a second that represents potassium concentrations with correction for decedent age and/or ambient temperature. The former was associated with narrow confidence intervals in the early postmortem phase, but the intervals gradually increased with longer PMIs. For the latter equation, the confidence intervals were reduced at all PMIs. Therefore, the model that best describes the observed postmortem rise in vitreous potassium levels includes potassium concentration, decedent age, and ambient temperature. 
Furthermore, the precision of these equations, particularly for long PMIs, is expected to gradually improve by adjusting the constants as more reference data are added over time. A web application that facilitates this calculation process and allows for such future modifications has been developed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
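Estimating PMI from a fitted potassium-rise model amounts to solving the fitted relation for time. The sketch below does this for the simple linear case with hypothetical slope and intercept values; the paper argues the true rise is non-linear and that its preferred equations also correct for decedent age and ambient temperature, which this sketch omits.

```python
def estimate_pmi(k_mmol_l, slope, intercept):
    """Invert a fitted vitreous-potassium relation of the simple
    linear form K = intercept + slope * PMI to obtain PMI.
    slope and intercept are hypothetical fitted coefficients."""
    return (k_mmol_l - intercept) / slope
```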
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. By contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed here prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction; this evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation will fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
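The flavor of an interval predictor can be conveyed by a deliberately simplified construction: a least-squares center curve shifted up and down just enough to enclose every observation. The paper's IPMs instead optimize a hyper-rectangular parameter set directly and attach a formal reliability bound, neither of which this sketch provides.

```python
import numpy as np

def fit_interval_predictor(x, y, degree=1):
    """Toy interval predictor: polynomial least-squares center line
    plus the tightest constant band that covers all observations.
    Returns (coefficients, lower_offset, upper_offset)."""
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    return coef, resid.min(), resid.max()

def predict_interval(model, x_new):
    """Interval-valued prediction [lower, upper] at x_new."""
    coef, lo, hi = model
    center = np.polyval(coef, x_new)
    return center + lo, center + hi
```

By construction every training observation lies inside the predicted band; bounding the probability that a *future* observation does so is exactly the reliability question the paper answers.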
NASA Technical Reports Server (NTRS)
Sadeh, D.; Shannon, D. C.; Abboud, S.; Akselrod, S.; Cohen, R. J.
1987-01-01
The ability of the autonomic nervous system to alter the QT interval in response to heart rate changes is essential to cardiovascular control. An accurate way to determine the relation between QT intervals and their corresponding RR intervals is described. A computer algorithm measures the RR intervals by digitally filtering and cross-correlating the QRS sections of consecutive waveforms. The QT interval is calculated by choosing a section of the ECG that includes the T wave and cross-correlating it with all the consecutive T waves. At least 4000 pairs of QT-RR intervals are computed for each subject, and a best-fit correlation function determines the relation between the QT and RR intervals. This technique makes it possible to establish a precise correlation between RR and QT in order to distinguish between control and SIDS infants.
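The core operation of such an algorithm, aligning a template waveform (a QRS complex or T wave) against successive beats by cross-correlation, can be sketched as follows; the function names are illustrative, and a real implementation would add the digital filtering the abstract mentions.

```python
import numpy as np

def align_by_xcorr(template, segment):
    """Lag (in samples) that best aligns a template waveform with a
    longer segment, by maximizing the normalized cross-correlation.
    Applied beat by beat, the lags yield the RR (or QT) timing."""
    t = (template - template.mean()) / template.std()
    best_lag, best_val = 0, -np.inf
    for lag in range(len(segment) - len(template) + 1):
        w = segment[lag:lag + len(template)]
        v = float(np.dot(t, (w - w.mean()) / (w.std() + 1e-12)))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag
```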
Effects of a temperature-dependent rheology on large scale continental extension
NASA Technical Reports Server (NTRS)
Sonder, Leslie J.; England, Philip C.
1988-01-01
The effects of a temperature-dependent rheology on large-scale continental extension are investigated using a thin viscous sheet model. A vertically-averaged rheology is used that is consistent with laboratory experiments on power-law creep of olivine and that depends exponentially on temperature. Results of the calculations depend principally on two parameters: the Peclet number, which describes the relative rates of advection and diffusion of heat, and a dimensionless activation energy, which controls the temperature dependence of the rheology. At short times following the beginning of extension, deformation occurs with negligible change in temperature, so that only small changes in lithospheric strength occur due to attenuation of the lithosphere. However, after a certain critical time interval, thermal diffusion lowers temperatures in the lithosphere, strongly increasing lithospheric strength and slowing the rate of extension. This critical time depends principally on the Peclet number and is short compared with the thermal time constant of the lithosphere. The strength changes cause the locus of high extensional strain rates to shift with time from regions of high strain to regions of low strain. Results of the calculations are compared with observations from the Aegean, where maximum extensional strains are found in the south, near Crete, but maximum present-day strain rates are largest about 300 km further north.
Generation time and effective population size in Polar Eskimos.
Matsumura, Shuichi; Forster, Peter
2008-07-07
North Greenland Polar Eskimos are the only hunter-gatherer population, to our knowledge, who can offer precise genealogical records spanning several generations. This is the first report from Eskimos on two key parameters in population genetics, namely, generation time (T) and effective population size (Ne). The average mother-daughter and father-son intervals were 27 and 32 years, respectively, roughly similar to the previously published generation times obtained from recent agricultural societies across the world. To gain an insight for the generation time in our distant ancestors, we calculated maternal generation time for two wild chimpanzee populations. We also provide the first comparison among three distinct approaches (genealogy, variance and life table methods) for calculating Ne, which resulted in slightly differing values for the Eskimos. The ratio of the effective to the census population size is estimated as 0.6-0.7 for autosomal and X-chromosomal DNA, 0.7-0.9 for mitochondrial DNA and 0.5 for Y-chromosomal DNA. A simulation of alleles along the genealogy suggested that Y-chromosomal DNA may drift a little faster than mitochondrial DNA in this population, in contrast to agricultural Icelanders. Our values will be useful not only in prehistoric population inference but also in understanding the shaping of our genome today.
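Of the three Ne approaches compared above, the variance method is the easiest to sketch: for a demographically stable diploid population (mean offspring number near 2), a classic approximation is Ne = (4N − 2)/(Vk + 2), where Vk is the variance in offspring number. A minimal sketch under those textbook assumptions, which are stronger than anything the genealogy-based analysis in the paper requires:

```python
import statistics

def ne_variance_method(n_parents, offspring_counts):
    """Variance effective population size for a stable diploid
    population with mean offspring number ~2:
    Ne = (4N - 2) / (Vk + 2), Vk = variance in offspring number."""
    vk = statistics.pvariance(offspring_counts)
    return (4 * n_parents - 2) / (vk + 2)
```

When every parent leaves exactly two offspring (Vk = 0), Ne exceeds the census size; greater reproductive variance pushes Ne below it, which is the usual situation and the reason Ne/N ratios like the 0.5-0.9 reported above are below one.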
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
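The full-memory Grünwald-Letnikov derivative that the adaptive scheme accelerates can be sketched directly from its definition: binomial weights w_k = (−1)^k C(α, k) applied to the entire history. The paper's contribution, sampling the older history at progressively longer strides, is not reproduced here.

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k),
    computed by the standard recurrence w_k = w_{k-1}*(1-(alpha+1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f_hist, h, alpha):
    """Full-memory GL fractional derivative of order alpha at the most
    recent time point, given equally spaced history samples f_hist
    (oldest first) with step h. Every past point contributes, which is
    the non-locality that motivates the adaptive-memory method."""
    n = len(f_hist) - 1
    w = gl_weights(alpha, n)
    return sum(w[k] * f_hist[n - k] for k in range(n + 1)) / h**alpha
```

For alpha = 1 the weights collapse to (1, −1, 0, 0, ...), recovering the ordinary backward difference, a quick sanity check on the recurrence.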
Heart rate variability in the individual fetus.
Van Leeuwen, Peter; Cysarz, Dirk; Edelhäuser, Friedrich; Grönemeyer, Dietrich
2013-11-01
The change in fetal heart rate and its variability (HRV) during the course of gestation has been documented by numerous studies. The overall drop in heart rate and increase in fetal HRV is associated with fetal growth in general and with the increase in neural integration in particular. The increased complexity of the demands on the cardiovascular system leads to more variation in the temporal course of the heart rate. Most studies that document and interpret these changes are based on data acquired in groups of fetuses. The aim of this work was to investigate HRV within single fetuses. We acquired 213 5min fetal magnetocardiograms in 11 fetuses during the second and third trimesters (at least 10 data sets per fetus, median 17). From the magnetocardiograms we determined the fetal RR interval time series and calculated the standard deviation (SDNN), root mean square of successive differences (RMSSD), approximate entropy (ApEn) and temporal asymmetry (Irrev). For each subject and HRV measure, we performed regression analysis with respect to gestational age, alone and in combination with RR interval. The coefficient of determination R(2) was used to estimate goodness-of-fit. The coefficient of quartile dispersion (CQD) was used to compare the regression parameters for each HRV measure. Overall, the HRV measures increased with age and RR interval. The consistency of the HRV measures within the individual fetuses was greater than in the data pooled over all fetuses. The individual R(2) for the model including age and RR interval was best for ApEn (.79, .59-.94; median, 90% CI), followed by RMSSD (.71, .25-.88), SDNN (.55, .18-.90) and Irrev (.16, .01-.39). These values, except for Irrev, were higher than those calculated over all 213 data sets (R(2)=.65, .63, .35, .28, respectively). The slopes of the regressions of each individual's data were most consistent over all subjects for ApEn, followed by RMSSD and SDNN and Irrev. 
Interindividually, the time domain measures showed discrepancies and the within-fetus courses were more consistent than the course over all fetuses. On the other hand, the course of ApEn during gestation was not only very consistent within each fetus but also very similar between most of subjects. Complexity measures such as ApEn may thus more consistently reflect prenatal developmental factors influencing cardiovascular regulation. Copyright © 2013 Elsevier B.V. All rights reserved.
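The two time-domain measures used above have compact definitions over an RR-interval series. A minimal sketch (SDNN is computed here as the population standard deviation; some conventions use the sample standard deviation instead):

```python
import math

def sdnn(rr_ms):
    """Standard deviation of the RR (normal-to-normal) intervals."""
    m = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - m) ** 2 for r in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```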
Singer, Daniel E; Hellkamp, Anne S; Yuan, Zhong; Lokhnygina, Yuliya; Patel, Manesh R; Piccini, Jonathan P; Hankey, Graeme J; Breithardt, Günter; Halperin, Jonathan L; Becker, Richard C; Hacke, Werner; Nessel, Christopher C; Mahaffey, Kenneth W; Fox, Keith A A; Califf, Robert M
2015-03-03
In the ROCKET AF (Rivaroxaban-Once-daily, oral, direct Factor Xa inhibition Compared with vitamin K antagonism for prevention of stroke and Embolism Trial in Atrial Fibrillation) trial, marked regional differences in control of warfarin anticoagulation, measured as the average individual patient time in the therapeutic range (iTTR) of the international normalized ratio (INR), were associated with longer inter-INR test intervals. The standard Rosendaal approach can produce biased low estimates of TTR after an appropriate dose change if the follow-up INR test interval is prolonged. We explored the effect of alternative calculations of TTR that more immediately account for dose changes on regional differences in mean iTTR in the ROCKET AF trial. We used an INR imputation method that accounts for dose change. We compared group mean iTTR values between our dose change-based method with the standard Rosendaal method and determined that the differences between approaches depended on the balance of dose changes that produced in-range INRs ("corrections") versus INRs that were out of range in the opposite direction ("overshoots"). In ROCKET AF, the overall mean iTTR of 55.2% (Rosendaal) increased up to 3.1% by using the dose change-based approach, depending on assumptions. However, large inter-regional differences in anticoagulation control persisted. TTR, the standard measure of control of warfarin anticoagulation, depends on imputing daily INR values for the vast majority of follow-up days. Our TTR calculation method may better reflect the impact of warfarin dose changes than the Rosendaal approach. In the ROCKET AF trial, this dose change-based approach led to a modest increase in overall mean iTTR but did not materially affect the large inter-regional differences previously reported. URL: ClinicalTrials.gov. Unique identifier: NCT00403767. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
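The standard Rosendaal approach that the authors modify imputes a daily INR by linear interpolation between consecutive tests and counts the fraction of imputed days falling inside the target range. A minimal sketch of that baseline (the dose-change-based imputation proposed in the paper is not reproduced here):

```python
def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """Percent time in therapeutic range by Rosendaal linear
    interpolation between consecutive INR tests on the given days."""
    in_range = total = 0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        for d in range(span):
            inr = i0 + (i1 - i0) * d / span
            total += 1
            in_range += low <= inr <= high
    return 100.0 * in_range / total
```

The bias the paper targets is visible in this scheme: after a dose change, interpolation drags imputed values through out-of-range territory for the whole inter-test interval, understating TTR when test intervals are long.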
Symbol interval optimization for molecular communication with drift.
Kim, Na-Rae; Eckford, Andrew W; Chae, Chan-Byoung
2014-09-01
In this paper, we propose a symbol interval optimization algorithm for molecular communication with drift. Proper symbol intervals are important in practical communication systems, since information needs to be sent as fast as possible with low error rates. There is a trade-off, however, between symbol intervals and inter-symbol interference (ISI) from Brownian motion. We therefore find proper symbol interval values that account for the ISI inside two kinds of blood vessels, and also propose an ISI-free system for strong drift models. Finally, an isomer-based molecule shift keying (IMoSK) scheme is applied to calculate achievable data transmission rates (achievable rates, hereafter). Normalized achievable rates are also obtained and compared for the one-symbol ISI and ISI-free systems.
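For one-dimensional diffusion with drift, the first-arrival time of a molecule released at distance d follows an inverse Gaussian distribution with mean d/v and shape d²/(2D), so the fraction of molecules arriving within one symbol interval, the quantity behind the ISI trade-off above, is its CDF. A sketch under these standard assumptions (the paper's vessel-specific parameters are not reproduced):

```python
import math

def _phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def arrival_fraction(t, d, v, D):
    """Inverse-Gaussian CDF: probability that a molecule released at
    distance d, with drift velocity v and diffusion coefficient D,
    has arrived by time t. Uses mu = d/v and lam = d^2/(2D)."""
    if t <= 0.0:
        return 0.0
    mu = d / v
    lam = d * d / (2.0 * D)
    a = math.sqrt(lam / t)
    return (_phi(a * (t / mu - 1.0))
            + math.exp(2.0 * lam / mu) * _phi(-a * (t / mu + 1.0)))
```

A longer symbol interval captures more of this mass (fewer stragglers spill into the next symbol), at the cost of a lower symbol rate, which is exactly the optimization the paper performs.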
Ratio-based lengths of intervals to improve fuzzy time series forecasting.
Huarng, Kunhuang; Yu, Tiffany Hui-Kuang
2006-04-01
The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
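The core idea — interval lengths that scale with the magnitude of the observations rather than being equal — can be sketched as follows. This is a hedged illustration: the function name and the fixed-percentage growth rule are our assumptions, not the authors' exact algorithm.

```python
def ratio_intervals(lo, hi, ratio=0.1):
    """Partition [lo, hi] into intervals whose lengths grow geometrically:
    each interval's length is a fixed fraction (`ratio`) of its lower
    bound, so small observations get fine intervals and large observations
    get coarse ones. Requires lo > 0.  Illustrative sketch only."""
    bounds = [float(lo)]
    while bounds[-1] < hi:
        bounds.append(bounds[-1] * (1.0 + ratio))
    return bounds
```

For example, `ratio_intervals(100, 200, 0.1)` yields boundaries 100, 110, 121, 133.1, ... — each interval 10% of its lower bound — whereas equal-length partitioning would give intervals of identical width regardless of scale.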
NASA Astrophysics Data System (ADS)
Suzuki, W.; Aoi, S.; Maeda, T.; Sekiguchi, H.; Kunugi, T.
2013-12-01
Source inversion analysis using near-source strong-motion records with an assumption of 1-D underground structure models has revealed the overall characteristics of the rupture process of the 2011 Tohoku-Oki mega-thrust earthquake. This assumption for the structure model is acceptable because the seismic waves radiated during the Tohoku-Oki event were rich in very-low-frequency content below 0.05 Hz, which is less affected by small-scale heterogeneous structure. Analysis using Green's functions that are more reliable even in the higher-frequency range, by considering the complex structure of the subduction zone, will illuminate a more detailed rupture process in space and time and the transition of the frequency dependence of the wave radiation for the Tohoku-Oki earthquake. In this study, we calculate the near-source Green's functions using a 3-D underground structure model and perform the source inversion analysis using them. The 3-D underground structure model used in this study is the Japan Integrated Velocity Structure Model (Headquarters for Earthquake Research Promotion, 2012). A curved fault model on the Pacific plate interface is discretized into 287 subfaults at ~20 km intervals. The Green's functions are calculated using GMS (Aoi et al., 2004), a simulation program package for the seismic wave field based on the finite-difference method with discontinuous grids (Aoi and Fujiwara, 1999). The computational region is 136-146.2E in longitude, 34-41.6N in latitude, and 0-100 km in depth. The horizontal and vertical grid intervals are 200 m and 100 m, respectively, for the shallower region, and are tripled for the deeper region. The total number of grid points is 2.1 billion. We derive 300-s records by calculating 36,000 steps with a time interval of 0.0083 s (120 Hz sampling). It takes nearly one hour to compute one case using 48 Graphics Processing Units (GPUs) on the TSUBAME2.0 supercomputer owned by the Tokyo Institute of Technology.
In total, 574 cases are calculated to derive the Green's functions for two basis slip angles from each subfault. A preliminary inversion result using the same frequency band and strong-motion stations as our previous studies (Suzuki et al., 2011; 2013) shows that, in addition to large slip in the shallower area, slip in the deeper part off Miyagi is relatively larger than in the previous result. Characteristics of the temporal rupture progression are consistent with the previous studies. Our further study will consider rupture propagation inside each subfault, which would more appropriately evaluate the slip amount of the deeper area related to the rupture propagating landward. Improvement of waveform alignment is also necessary to reduce the influence of the discrepancy and uncertainty of the underground structure models used for hypocenter determination and Green's function calculation. Acknowledgements: This research is partially supported by the "Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures" and "High Performance Computing Infrastructure" in Japan.
Calip, Gregory S; Adimadhyam, Sruthi; Xing, Shan; Rincon, Julian C; Lee, Wan-Ju; Anguiano, Rebekah H
2017-10-01
Self-injectable TNF inhibitors are increasingly used early in the chronic treatment of moderate to severe rheumatologic conditions. We estimated medication adherence/persistence over time following initiation in young adult and older adult patients with rheumatoid arthritis, ankylosing spondylitis or psoriatic arthritis. We conducted a retrospective cohort study of patients aged 18+ years newly initiating etanercept, adalimumab, certolizumab pegol, or golimumab using the Truven Health MarketScan Database between 2009 and 2013. Pharmacy dispensing data were used to calculate 12-month medication possession ratios (MPR) and determine adherence (MPR ≥ 0.80) for up to 3 years after starting therapy. Persistence over each 12-month interval was defined as not having a ≥92-day treatment gap. Multivariable generalized estimating equation models were used to calculate odds ratios (OR) and robust 95% confidence intervals (CI) for associations between patient characteristics and repeated adherence/persistence measures over time. Among 53,477 new users, 14% were young adults (18-34 years), 49% middle-aged (35-54 years), and 37% older adults (55+ years). Overall, 37% of patients were adherent and 83% were persistent in the first year of therapy. The lowest adherence (17%) and persistence (70%) were observed among young adult patients by Year +3. Compared to older adults, middle-aged (OR = 0.73, 95% CI: 0.71-0.76) and young adults (OR = 0.50, 95% CI: 0.47-0.53) were less likely to be adherent. Higher Charlson comorbidity scores, hospitalizations, and emergency department visits were associated with non-adherence/non-persistence. We observed low adherence to self-administered TNF inhibitors but most patients remained persistent over time. Further efforts to improve adherence in young adults and patients with greater comorbidity are needed. Copyright © 2017 Elsevier Inc. All rights reserved.
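The adherence and persistence definitions above — MPR ≥ 0.80 over 12 months, and persistence as no gap of ≥ 92 days — can be sketched from dispensing records roughly as follows. This is a simplified illustration; the data layout and window handling are assumptions, not the study's code.

```python
from datetime import date, timedelta

def mpr(dispensings, start, days=365):
    """12-month medication possession ratio: total days' supply dispensed
    within the window divided by the window length, capped at 1.0.
    dispensings: iterable of (fill_date, days_supply) pairs."""
    end = start + timedelta(days=days)
    covered = sum(min(supply, (end - d).days)
                  for d, supply in dispensings
                  if start <= d < end)
    return min(covered / days, 1.0)

def persistent(dispensings, start, days=365, max_gap=92):
    """Persistence over the window: no gap of >= max_gap days between the
    end of one fill's supply and the next fill (or the window's end)."""
    window_end = start + timedelta(days=days)
    fills = sorted((d, s) for d, s in dispensings if start <= d < window_end)
    covered_until = start
    for d, supply in fills:
        if (d - covered_until).days >= max_gap:
            return False
        covered_until = max(covered_until, d + timedelta(days=supply))
    return (window_end - covered_until).days < max_gap
```

A patient refilling a 30-day supply every 30 days is both adherent (MPR ≈ 0.99) and persistent, while a four-month gap between fills breaks persistence even if later fills resume.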
Extensions of the MCNP5 and TRIPOLI4 Monte Carlo Codes for Transient Reactor Analysis
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Sjenitzer, Bart L.
2014-06-01
To simulate reactor transients for safety analysis with the Monte Carlo method, the generation and decay of delayed neutron precursors is implemented in the MCNP5 and TRIPOLI4 general-purpose Monte Carlo codes. Important new variance reduction techniques, like forced decay of precursors in each time interval and the branchless collision method, are included to obtain reasonable statistics for the power production per time interval. For simulation of practical reactor transients, the feedback effect from the thermal-hydraulics must also be included. This requires coupling the Monte Carlo code with a thermal-hydraulics (TH) code, which provides the temperature distribution in the reactor, affecting the neutron transport via the cross-section data. The TH code also provides the coolant density distribution in the reactor, directly influencing the neutron transport. Different techniques for this coupling are discussed. As a demonstration, a 3x3 mini fuel assembly with a moving control rod is considered for MCNP5, and a mini core consisting of 3x3 PWR fuel assemblies with control rods and burnable poisons for TRIPOLI4. Results are shown for reactor transients due to control rod movement or withdrawal. The TRIPOLI4 transient calculation is started at low power and includes thermal-hydraulic feedback. The power rises by about 10 decades and finally stabilises at a much higher level than the initial one. The examples demonstrate that the modified Monte Carlo codes are capable of performing correct transient calculations, taking into account all geometrical and cross-section detail.
Automatic Blood Pressure Measurements During Exercise
NASA Technical Reports Server (NTRS)
Weaver, Charles S.
1985-01-01
Microprocessor circuits and a computer algorithm for automatically measuring blood pressure during ambulatory monitoring and exercise stress testing have been under development at SRI International. A system that records ECG, Korotkov sound, and arm cuff pressure for off-line calculation of blood pressure has been delivered to NASA, and an LSLE physiological monitoring system that performs the algorithm calculations in real-time is being constructed. The algorithm measures the time between the R-wave peaks and the corresponding Korotkov sound onset (RK-interval). Since the curve of RK-interval versus cuff pressure during deflation is predictable and slowly varying, windows can be set around the curve to eliminate false Korotkov sound detections that result from noise. The slope of this curve, which will generally decrease during exercise, is the inverse of the systolic slope of the brachial artery pulse. In measurements taken during treadmill stress testing, the changes in slopes of subjects with coronary artery disease were markedly different from the changes in slopes of healthy subjects. Measurements of slope and O2 consumption were also made before and after ten days of bed rest during NASA/Ames Research Center bed rest studies. Typically, the maximum rate of O2 consumption during the post-bed rest test is less than the maximum rate during the pre-bed rest test. The post-bed rest slope changes differ from the pre-bed rest slope changes, and the differences are highly correlated with the drop in the maximum rate of O2 consumption. We speculate that the differences between pre- and post-bed rest slopes are due to a drop in heart contractility.
Sadler, Sean G; Hawke, Fiona E; Chuter, Vivienne H
2015-10-01
Evaluation of peripheral blood supply is fundamental to risk categorization and subsequent ongoing monitoring of patients with lower extremity peripheral arterial disease. Toe systolic blood pressure (TSBP) and the toe brachial index (TBI) are both valid and reliable vascular screening techniques that are commonly used in clinical practice. However, the effect of pretest rest duration on the magnitude of these measurements is unclear. Eighty individuals meeting current guidelines for lower extremity peripheral arterial disease screening volunteered to participate. The Systoe and MicroLife automated devices were used to measure toe and brachial systolic blood pressures, respectively, following 5, 10 and 15 min of rest in a horizontal supine position. A ratio of TSBP to brachial pressure was used to calculate the TBI and change in TBI at each time interval was investigated. A significant increase in TSBP [3.66 mmHg; 95% confidence interval (CI): 1.44-5.89; P≤0.001] and the TBI (0.03; 95% CI: 0.01-0.05; P≤0.001) occurred between 5 and 10 min. Between 10 and 15 min, there was a nonsignificant decrease in TSBP (-0.73 mmHg; 95% CI: -1.48 to 2.93; P=1.000) and the TBI (0.00; 95% CI: -0.02 to 0.02; P=1.000). Ten minutes of pretest rest is recommended for measurement of TSBP and for both pressure measurements used in the calculation of a TBI to ensure that stable pressures are measured.
Satellite-based drought monitoring in Kenya in an operational setting
NASA Astrophysics Data System (ADS)
Klisch, A.; Atzberger, C.; Luminari, L.
2015-04-01
The University of Natural Resources and Life Sciences (BOKU) in Vienna (Austria), in cooperation with the National Drought Management Authority (NDMA) in Nairobi (Kenya), has set up an operational processing chain for mapping drought occurrence and strength for the territory of Kenya using the Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI at 250 m ground resolution from 2000 onwards. The processing chain employs a modified Whittaker smoother providing consistent NDVI "Monday images" in near real-time (NRT) at a 7-day updating interval. The approach constrains temporally extrapolated NDVI values based on reasonable temporal NDVI paths. Contrary to other competing approaches, the processing chain provides a modelled uncertainty range for each pixel and time step. The uncertainties are calculated by a hindcast analysis of the NRT products against an "optimum" filtering. To detect droughts, the vegetation condition index (VCI) is calculated at pixel level and is spatially aggregated to administrative units. Starting from weekly temporal resolution, the indicator is also aggregated over 1- and 3-monthly intervals considering the available uncertainty information. Analysts at NDMA use the spatially/temporally aggregated VCI and basic image products for their monthly bulletins. Based on the provided bio-physical indicators as well as a number of socio-economic indicators, contingency funds are released by NDMA to support counties in drought conditions. The paper shows the successful application of the products within NDMA by providing a retrospective analysis applied to droughts in 2006, 2009 and 2011. Some comparisons with alternative products (e.g. FEWS NET, the Famine Early Warning Systems Network) highlight main differences.
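The abstract does not spell out the VCI formula; a common definition (Kogan's VCI, assumed here) scales the current NDVI between the per-pixel historical minimum and maximum for the same time of year:

```python
import numpy as np

def vci(ndvi, ndvi_hist):
    """Vegetation Condition Index (Kogan's definition, assumed):
    100 * (NDVI - NDVI_min) / (NDVI_max - NDVI_min), computed per pixel,
    where min/max are historical extremes for the same period.
    0 = worst vegetation conditions on record, 100 = best.

    ndvi      : 2-D array, current NDVI image
    ndvi_hist : 3-D array (years, rows, cols), historical NDVI for the
                same period in previous years
    """
    lo = ndvi_hist.min(axis=0)
    hi = ndvi_hist.max(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        v = 100.0 * (ndvi - lo) / (hi - lo)
    return np.clip(v, 0.0, 100.0)
```

A pixel whose current NDVI sits exactly halfway between its historical extremes scores 50; spatial aggregation to administrative units then amounts to averaging this field over each unit's mask.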
Baxter, Suzanne Domel; Hardin, James W; Guinn, Caroline H; Royer, Julie A; Mackelprang, Alyssa J; Smith, Albert F
2009-05-01
For a 24-hour dietary recall, two possible target periods are the prior 24 hours (24 hours immediately preceding the interview time) and previous day (midnight to midnight of the day before the interview), and three possible interview times are morning, afternoon, and evening. Target period and interview time determine the retention interval (elapsed time between to-be-reported meals and the interview), which, along with intervening meals, can influence reporting accuracy. The effects of target period and interview time on children's accuracy for reporting school meals during 24-hour dietary recalls were investigated. DESIGN AND SUBJECTS/SETTING: During the 2004-2005, 2005-2006, and 2006-2007 school years in Columbia, SC, each of 374 randomly selected fourth-grade children (96% African American) was observed eating two consecutive school meals (breakfast and lunch) and interviewed to obtain a 24-hour dietary recall using one of six conditions defined by crossing two target periods with three interview times. Each condition had 62 or 64 children (half boys). Accuracy for reporting school meals was quantified by calculating rates for omissions (food items observed eaten but unreported) and intrusions (food items reported eaten but unobserved); a measure of total inaccuracy combined errors for reporting food items and amounts. For each accuracy measure, analysis of variance was conducted with target period, interview time, their interaction, sex, interviewer, and school year in the model. There was a target-period effect and a target-period by interview-time interaction on omission rates, intrusion rates, and total inaccuracy (six P values <0.004). For prior-24-hour recalls compared to previous-day recalls, and for prior-24-hour recalls in the afternoon and evening compared to previous-day recalls in the afternoon and evening, omission rates were better by one third, intrusion rates were better by one half, and total inaccuracy was better by one third. 
To enhance children's dietary recall accuracy, target periods and interview times that minimize the retention interval should be chosen.
Kim, Tae Kyung; Kim, Hyung Wook; Kim, Su Jin; Ha, Jong Kun; Jang, Hyung Ha; Hong, Young Mi; Park, Su Bum; Choi, Cheol Woong; Kang, Dae Hwan
2014-01-01
Background/Aims The quality of bowel preparation (QBP) is an important factor in performing a successful colonoscopy. Several factors influencing QBP have been reported; however, some factors, such as the optimal preparation-to-colonoscopy time interval, remain controversial. This study aimed to determine the factors influencing QBP and the optimal time interval for full-dose polyethylene glycol (PEG) preparation. Methods A total of 165 patients who underwent colonoscopy from June 2012 to August 2012 were prospectively evaluated. The QBP was assessed using the Ottawa Bowel Preparation Scale (Ottawa) score, and several factors influencing the QBP were analyzed. Results Colonoscopies with a time interval of 5 to 6 hours had the best Ottawa score in all parts of the colon. Patients with time intervals of 6 hours or less had a better QBP than those with time intervals of more than 6 hours (p=0.046). In the multivariate analysis, the time interval (odds ratio, 1.897; 95% confidence interval, 1.006 to 3.577; p=0.048) was the only significant contributor to a satisfactory bowel preparation. Conclusions The optimal preparation-to-colonoscopy time interval was 5 to 6 hours for the full-dose PEG method, and the time interval was the only significant contributor to a satisfactory bowel preparation. PMID:25368750
Patel, Sanjay R.; Weng, Jia; Rueschman, Michael; Dudley, Katherine A.; Loredo, Jose S.; Mossavar-Rahmani, Yasmin; Ramirez, Maricelle; Ramos, Alberto R.; Reid, Kathryn; Seiger, Ashley N.; Sotres-Alvarez, Daniela; Zee, Phyllis C.; Wang, Rui
2015-01-01
Study Objectives: While actigraphy is considered objective, the process of setting rest intervals to calculate sleep variables is subjective. We sought to evaluate the reproducibility of actigraphy-derived measures of sleep using a standardized algorithm for setting rest intervals. Design: Observational study. Setting: Community-based. Participants: A random sample of 50 adults aged 18–64 years free of severe sleep apnea participating in the Sueño sleep ancillary study to the Hispanic Community Health Study/Study of Latinos. Interventions: N/A. Measurements and Results: Participants underwent 7 days of continuous wrist actigraphy and completed daily sleep diaries. Studies were scored twice by each of two scorers. Rest intervals were set using a standardized hierarchical approach based on event marker, diary, light, and activity data. Sleep/wake status was then determined for each 30-sec epoch using a validated algorithm, and this was used to generate 11 variables: mean nightly sleep duration, nap duration, 24-h sleep duration, sleep latency, sleep maintenance efficiency, sleep fragmentation index, sleep onset time, sleep offset time, sleep midpoint time, standard deviation of sleep duration, and standard deviation of sleep midpoint. Intra-scorer intraclass correlation coefficients (ICCs) were high, ranging from 0.911 to 0.995 across all 11 variables. Similarly, inter-scorer ICCs were high, also ranging from 0.911 to 0.995, and mean inter-scorer differences were small. Bland-Altman plots did not reveal any systematic disagreement in scoring. Conclusions: With use of a standardized algorithm to set rest intervals, scoring of actigraphy for the purpose of generating a wide array of sleep variables is highly reproducible. Citation: Patel SR, Weng J, Rueschman M, Dudley KA, Loredo JS, Mossavar-Rahmani Y, Ramirez M, Ramos AR, Reid K, Seiger AN, Sotres-Alvarez D, Zee PC, Wang R. 
Reproducibility of a standardized actigraphy scoring algorithm for sleep in a US Hispanic/Latino population. SLEEP 2015;38(9):1497–1503. PMID:25845697
Assessing the Impact of Different Measurement Time Intervals on Observed Long-Term Wind Speed Trends
NASA Astrophysics Data System (ADS)
Azorin-Molina, C.; Vicente-Serrano, S. M.; McVicar, T.; Jerez, S.; Revuelto, J.; López Moreno, J. I.
2014-12-01
During the last two decades climate studies have reported a tendency toward a decline in measured near-surface wind speed in some regions of Europe, North America, Asia and Australia. This weakening in observed wind speed has recently been termed "global stilling", showing a worldwide average trend of -0.140 m s⁻¹ dec⁻¹ during the last 50 years. The precise cause of the "global stilling" remains largely uncertain and has been hypothetically attributed to several factors, mainly related to: (i) increasing surface roughness (i.e. forest growth, land use changes, and urbanization); (ii) a slowdown in large-scale atmospheric circulation; (iii) instrumental drifts, technological improvements, maintenance, shifts in measurement sites, and calibration issues; (iv) sunlight dimming due to air pollution; and (v) astronomical changes. This study proposes a novel investigation aimed at analyzing how the different measurement time intervals used to calculate a wind speed series can affect the sign and magnitude of long-term wind speed trends. For instance, National Weather Services across the globe estimate daily average wind speed using different time intervals and formulae that may affect the trend results. Firstly, we carried out a comprehensive review of wind studies reporting the sign and magnitude of wind speed trend and the sampling intervals used. Secondly, we analyzed near-surface wind speed trends recorded at 59 land-based stations across Spain comparing monthly mean wind speed series obtained from: (a) daily mean wind speed data averaged from standard 10-min mean observations at 0000, 0700, 1300 and 1800 UTC; and (b) average wind speed of 24 hourly measurements (i.e., wind run measurements) from 0000 to 2400 UTC. Thirdly and finally, we quantified the impact of anemometer drift (i.e.
bearing malfunction) by presenting preliminary results (1 year of paired measurements) from a comparison of a new anemometer sensor against a malfunctioning anemometer sensor with worn bearings.
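The two averaging schemes compared in the study — (a) four fixed synoptic observations versus (b) 24 hourly measurements — can be sketched as follows to show why the sampling interval alone shifts the computed daily mean. The sinusoidal diurnal cycle below is invented toy data, not observations from the 59 Spanish stations.

```python
import math

def daily_mean_fixed(hourly, hours=(0, 7, 13, 18)):
    """Scheme (a): daily mean wind speed from the four standard
    observations at 0000, 0700, 1300 and 1800 UTC."""
    return sum(hourly[h] for h in hours) / len(hours)

def daily_mean_24h(hourly):
    """Scheme (b): daily mean from all 24 hourly measurements
    (wind-run style averaging)."""
    return sum(hourly) / 24.0

# Toy diurnal cycle (illustrative assumption): mean 5 m/s with a
# 3 m/s sinusoidal daily variation.
hourly = [5.0 + 3.0 * math.sin(2 * math.pi * h / 24) for h in range(24)]
```

For this cycle the 24-hour mean recovers 5 m/s exactly, while the four fixed hours sample the cycle asymmetrically and yield a lower value — so a change of observing practice alone can imprint a spurious step or trend on a long wind speed series.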
Steiner, Markus FC; Cezard, Genevieve; Bansal, Narinder; Fischbacher, Colin; Douglas, Anne; Bhopal, Raj; Sheikh, Aziz
2015-01-01
Objective There is evidence of substantial ethnic variations in asthma morbidity and the risk of hospitalisation, but the picture in relation to lower respiratory tract infections is unclear. We carried out an observational study to identify ethnic group differences for lower respiratory tract infections. Design A retrospective, cohort study. Setting Scotland. Participants 4.65 million people on whom information was available from the 2001 census, followed from May 2001 to April 2010. Main outcome measures Hospitalisations and deaths (any time following first hospitalisation) from lower respiratory tract infections; adjusted risk ratios and hazard ratios by ethnicity and sex were calculated. We multiplied ratios and confidence intervals by 100, so the reference Scottish White population’s risk ratio and hazard ratio were 100. Results Among men, adjusted risk ratios for lower respiratory tract infection hospitalisation were lower in Other White British (80, 95% confidence interval 73–86) and Chinese (69, 95% confidence interval 56–84) populations and higher in Pakistani groups (152, 95% confidence interval 136–169). In women, results were mostly similar to those in men (e.g. Chinese 68, 95% confidence interval 56–82), although higher adjusted risk ratios were found among women of the Other South Asian group (145, 95% confidence interval 120–175). Survival (adjusted hazard ratio) following lower respiratory tract infection for Pakistani men (54, 95% confidence interval 39–74) and women (31, 95% confidence interval 18–53) was better than in the reference population. Conclusions Substantial differences in the rates of lower respiratory tract infections amongst different ethnic groups in Scotland were found. Pakistani men and women had particularly high rates of lower respiratory tract infection hospitalisation. Investigation of the reasons behind the high rates of lower respiratory tract infection in the Pakistani community is now required. PMID:26152675
Effects of age and recovery duration on peak power output during repeated cycling sprints.
Ratel, S; Bedu, M; Hennegrave, A; Doré, E; Duché, P
2002-08-01
The aim of the present study was to investigate the effects of age and recovery duration on the time course of cycling peak power and blood lactate concentration ([La]) during repeated bouts of short-term high-intensity exercise. Eleven prepubescent boys (9.6 +/- 0.7 yr), nine pubescent boys (15.0 +/- 0.7 yr) and ten men (20.4 +/- 0.8 yr) performed ten consecutive 10 s cycling sprints separated by either 30 s (R30), 1 min (R1), or 5 min (R5) passive recovery intervals against a friction load corresponding to 50 % of their optimal force (50 % Ffopt). Peak power produced at 50 % Ffopt (PP50) was calculated at each sprint, including the flywheel inertia of the bicycle. Arterialized capillary blood samples were collected at rest and during the sprint exercises to measure the time course of [La]. In the prepubescent boys, whatever the recovery interval, PP50 remained unchanged during the ten 10 s sprint exercises. In the pubescent boys, PP50 decreased significantly by 18.5 % (p < 0.001) with R30 and by 15.3 % (p < 0.01) with R1 from the first to the tenth sprint but remained unchanged with R5. In the men, PP50 decreased by 28.5 % (p < 0.001) and 11.3 % (p < 0.01) with R30 and R1, respectively, and slightly diminished with R5. For each recovery interval, the increase in blood [La] over the ten sprints was significantly lower in the prepubescent boys compared with the pubescent boys and the men. To conclude, the prepubescent boys sustained their PP50 during the ten 10 s sprint exercises with only 30 s recovery intervals. In contrast, the pubescent boys and the men needed 5 min recovery intervals. It was suggested that the faster recovery of PP50 in the prepubescent boys was due to their lower muscle glycolytic activity and their higher muscle oxidative capacity, allowing a faster resynthesis of phosphocreatine.
Interresponse Time Structures in Variable-Ratio and Variable-Interval Schedules
ERIC Educational Resources Information Center
Bowers, Matthew T.; Hill, Jade; Palya, William L.
2008-01-01
The interresponse-time structures of pigeon key pecking were examined under variable-ratio, variable-interval, and variable-interval plus linear feedback schedules. Whereas the variable-ratio and variable-interval plus linear feedback schedules generally resulted in a distinct group of short interresponse times and a broad distribution of longer…
QT Adaptation and Intrinsic QT Variability in Congenital Long QT Syndrome.
Seethala, Srikanth; Singh, Prabhpreet; Shusterman, Vladimir; Ribe, Margareth; Haugaa, Kristina H; Němec, Jan
2015-12-16
Increased variability of QT interval (QTV) has been linked to arrhythmias in animal experiments and multiple clinical situations. Congenital long QT syndrome (LQTS), a pure repolarization disease, may provide important information on the relationship between delayed repolarization and QTV. Twenty-four-hour Holter monitor tracings from 78 genotyped congenital LQTS patients (52 females; 51 LQT1, 23 LQT2, 2 LQT5, 2 JLN, 27 symptomatic; age, 35.2±12.3 years) were evaluated with computer-assisted annotation of RR and QT intervals. Several models of RR-QT relationship were tested in all patients. A model assuming exponential decrease of past RR interval contributions to QT duration with 60-second time constant provided the best data fit. This model was used to calculate QTc and residual "intrinsic" QTV, which cannot be explained by heart rate change. The intrinsic QTV was higher in patients with long QTc (r=0.68; P<10⁻⁴), and in LQT2 than in LQT1/5 patients (5.65±1.28 vs 4.46±0.82; P<0.0002). Both QTc and intrinsic QTV were similar in symptomatic and asymptomatic patients (467±52 vs 459±53 ms and 5.10±1.19 vs 4.74±1.09, respectively). In LQTS patients, QT interval adaptation to heart rate changes occurs with time constant ≈60 seconds, similar to results reported in control subjects. Intrinsic QTV correlates with the degree of repolarization delay and might reflect action potential instability observed in animal models of LQTS. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
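One way to read the best-fitting model — past RR intervals contributing to QT with an exponential decay of 60-second time constant — is as a per-beat exponentially weighted average of RR history. The discrete update rule below is our assumption for illustration, not the authors' exact implementation; the QT at each beat would then be modelled as a function of this "effective" RR rather than the instantaneous one.

```python
import math

def effective_rr(rr_ms, tau_s=60.0):
    """Exponentially weighted 'effective' RR series: each past beat's
    contribution decays with time constant tau_s (here 60 s).  The decay
    factor per beat depends on the elapsed time, i.e. the beat's own RR.
    rr_ms: list of RR intervals in milliseconds.  Illustrative sketch."""
    eff = []
    weighted = rr_ms[0]
    for rr in rr_ms:
        dt = rr / 1000.0                # time elapsed since last beat, s
        a = math.exp(-dt / tau_s)       # how much old history survives
        weighted = a * weighted + (1 - a) * rr
        eff.append(weighted)
    return eff
```

At steady heart rate the effective RR equals the instantaneous RR; after a sudden rate change it relaxes toward the new RR over a few minutes, which is what makes the fitted QT "adapt" with a ≈60 s time constant.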
Uncertainty analysis for absorbed dose from a brain receptor imaging agent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aydogan, B.; Miller, L.F.; Sparks, R.B.
Absorbed dose estimates are known to contain uncertainties. A recent literature search indicates that prior to this study no rigorous investigation of uncertainty associated with absorbed dose has been undertaken. A method of uncertainty analysis for absorbed dose calculations has been developed and implemented for the brain receptor imaging agent ¹²³I-IPT. The two major sources of uncertainty considered were the uncertainty associated with the determination of residence time and that associated with the determination of the S values. There are many sources of uncertainty in the determination of the S values, but only the inter-patient organ mass variation was considered in this work. The absorbed dose uncertainties were determined for lung, liver, heart and brain. Ninety-five percent confidence intervals of the organ absorbed dose distributions for each patient and for a seven-patient population group were determined by the "Latin Hypercube Sampling" method. For an individual patient, the upper bound of the 95% confidence interval of the absorbed dose was found to be about 2.5 times larger than the estimated mean absorbed dose. For the seven-patient population the upper bound of the 95% confidence interval of the absorbed dose distribution was around 45% more than the estimated population mean. For example, the 95% confidence interval of the population liver dose distribution was found to be between 1.49E+07 Gy/MBq and 4.65E+07 Gy/MBq with a mean of 2.52E+07 Gy/MBq. This study concluded that patients in a population receiving ¹²³I-IPT could receive absorbed doses as much as twice as large as the standard estimated absorbed dose due to these uncertainties.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
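The bootstrap-plus-shortest-interval idea described above can be sketched as follows. This is illustrative only: the efficiency-gain estimator here is a simple ratio of mean timings, not the paper's exact definition, and the timing samples are invented.

```python
import random

def shortest_ci(samples, level=0.95):
    """Shortest interval containing `level` of the sampled values: slide
    a window of fixed coverage over the sorted bootstrap replicates and
    keep the narrowest one.  Unlike equal-tailed percentiles, this suits
    skewed (non-normal) distributions."""
    s = sorted(samples)
    k = int(round(level * len(s)))
    best = min(range(len(s) - k + 1), key=lambda i: s[i + k - 1] - s[i])
    return s[best], s[best + k - 1]

def bootstrap_gain(t_ref, t_cs, n_boot=2000, seed=0):
    """Bootstrap distribution of an efficiency-gain-like ratio (mean
    reference time / mean correlated-sampling time) from paired per-run
    timing samples.  Hypothetical estimator for illustration."""
    rng = random.Random(seed)
    n = len(t_ref)
    gains = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        gains.append(sum(t_ref[i] for i in idx) / sum(t_cs[i] for i in idx))
    return gains
```

Resampling the paired runs and recomputing the ratio each time yields an empirical distribution of the gain; `shortest_ci` then reports its narrowest 95% interval, which is the quantity whose relative width the study found to reach 37%.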
NASA Astrophysics Data System (ADS)
Lyubushin, Alexey
2016-04-01
The problem of estimating current seismic danger based on monitoring of seismic noise properties from the broadband seismic network F-net in Japan (84 stations) is considered. Variations of the following seismic noise parameters are analyzed: multifractal singularity spectrum support width, generalized Hurst exponent, minimum Hölder-Lipschitz exponent and minimum normalized entropy of squared orthogonal wavelet coefficients. These parameters are estimated within adjacent time windows of length 1 day for seismic noise waveforms from each station. Calculating daily median values of these parameters over all stations provides a 4-dimensional time series which describes integral properties of the seismic noise in the region covered by the network. Cluster analysis is applied to the sequence of clouds of 4-dimensional vectors within a moving time window of length 365 days with mutual shift 3 days, starting from the beginning of 1997 up to the current time. The purpose of the cluster analysis is to find the best number of clusters (BNC) from probe numbers varying from 1 up to a maximum value of 40. The BNC is found from the maximum of pseudo-F-statistics (PFS). A 2D map can be created which presents the dependence of PFS on the tested probe number of clusters and the right-hand end of the moving time window, rather similar to the usual spectral time-frequency diagrams. In the paper [1] it was shown that the BNC before the Tohoku mega-earthquake on March 11, 2011, exhibited a strongly chaotic regime, with jumps from minimum up to maximum values, in the time interval 1 year before the event, and this time interval was characterized by high PFS values. The PFS map is proposed as a method for extracting time intervals with high current seismic danger. The next dangerous time interval after the Tohoku mega-earthquake began at the end of 2012 and finished in the middle of 2013. Starting from the middle of 2015, high PFS values and the chaotic regime of BNC variations returned.
This could be interpreted as an increase in the danger of the next mega-earthquake in Japan in the region of the Nankai Trough [1] in the first half of 2016. References 1. Lyubushin, A., 2013. How soon would the next mega-earthquake occur in Japan? Natural Science, 5 (8A1), 1-7. http://dx.doi.org/10.4236/ns.2013.58A1001
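The pseudo-F-statistics used above to select the best number of clusters is the Calinski-Harabasz ratio of between-cluster to within-cluster variance. A minimal sketch in Python (the 4-D data and the two-cluster labeling are synthetic stand-ins, not the F-net noise parameters):

```python
import numpy as np

def pseudo_f(X, labels):
    """Calinski-Harabasz pseudo-F statistic for a given clustering of X."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    n, k = len(X), len(np.unique(labels))
    overall = X.mean(axis=0)
    between = within = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        centroid = Xc.mean(axis=0)
        between += len(Xc) * np.sum((centroid - overall) ** 2)
        within += np.sum((Xc - centroid) ** 2)
    # large values indicate a well-separated k-cluster structure
    return (between / (k - 1)) / (within / (n - k))

# two well-separated clouds in a 4-D parameter space
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 4)), rng.normal(3, 0.1, (50, 4))])
labels = np.array([0] * 50 + [1] * 50)
pfs = pseudo_f(X, labels)
```

Scanning probe numbers of clusters and keeping the PFS maximum, window by window, yields the BNC sequence and the PFS map described in the abstract.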
Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine
2011-03-01
International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
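The nonparametric reference limits computed for n ≥ 40 correspond to rank-based percentile estimates of the central 95% of the reference distribution. A minimal sketch, assuming the common (n+1)-rank interpolation convention, which is not necessarily the program's exact method:

```python
import numpy as np

def nonparametric_reference_interval(values, lower=0.025, upper=0.975):
    """Rank-based nonparametric reference limits, recommended for n >= 40."""
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    if n < 40:
        raise ValueError("nonparametric method is recommended only for n >= 40")

    def at(p):
        # rank position p*(n+1), linearly interpolated between order statistics
        r = p * (n + 1)
        i = min(max(int(np.floor(r)), 1), n)
        j = min(i + 1, n)
        frac = r - np.floor(r)
        return v[i - 1] + frac * (v[j - 1] - v[i - 1])

    return at(lower), at(upper)

values = np.arange(1, 121)  # 120 hypothetical reference measurements
lo, hi = nonparametric_reference_interval(values)
```

Confidence intervals around these limits, as the program reports, would additionally require rank-based or bootstrap CI estimation.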
Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J
2015-09-01
This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using (1)H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.
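The RMSEP figures compared above follow the standard definition, the root of the mean squared difference between reference and predicted distillation temperatures. A minimal sketch (the temperature values are illustrative, not from the paper's dataset):

```python
import math

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

# hypothetical reference vs. predicted T50% values in Celsius
reference = [350.0, 400.0, 450.0]
predicted = [360.0, 390.0, 455.0]
error = rmsep(reference, predicted)
```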
Discussion on calculation of disease severity index values from scales with unequal intervals
USDA-ARS?s Scientific Manuscript database
When estimating severity of disease, a disease interval (or category) scale comprises a number of categories of known numeric values – with plant disease this is generally the percent area with symptoms (e.g., the Horsfall-Barratt (H-B) scale). Studies in plant pathology and plant breeding often use...
Guide to luminescence dating techniques and their application for paleoseismic research
Gray, Harrison J.; Mahan, Shannon; Rittenour, Tammy M.; Nelson, Michelle Summa; Lund, William R.
2015-01-01
Over the past 25 years, luminescence dating has become a key tool for dating sediments of interest in paleoseismic research. The data obtained from luminescence dating has been used to determine timing of fault displacement, calculate slip rates, and estimate earthquake recurrence intervals. The flexibility of luminescence is a key complement to other chronometers such as radiocarbon or cosmogenic nuclides. Careful sampling and correct selection of sample sites exert two of the strongest controls on obtaining an accurate luminescence age. Factors such as partial bleaching and post-depositional mixing should be avoided during sampling and special measures may be needed to help correct for associated problems. Like all geochronologic techniques, context is necessary for interpreting and calculating luminescence results and this can be achieved by supplying participating labs with associated trench logs, photos, and stratigraphic locations of sample sites.
Duration of mineralization and fluid-flow history of the Upper Mississippi Valley zinc-lead district
Rowan, E.L.; Goldhaber, M.B.
1995-01-01
Studies of fluid inclusions in sphalerite and biomarkers from the Upper Mississippi Valley zinc district show homogenization temperatures to be primarily between 90 and 150 °C, yet show relatively low levels of thermal maturity. Numerical calculations are used to simulate fluid and heat flow through fracture-controlled ore zones and heat transfer to the adjacent rocks. Combining a best-fit path through fluid-inclusion data with measured thermal alteration of biomarkers, the time interval during which mineralizing fluids circulated through the Upper Mississippi Valley district was calculated to be on the order of 200 ka. Cambrian and Ordovician aquifers underlying the district, principally the St. Peter and Mt. Simon Sandstones, were the source of the mineralizing fluid. The duration of mineralization thus reflects the fluid-flow history of these regional aquifers. -from Authors
Not All Prehospital Time is Equal: Influence of Scene Time on Mortality
Brown, Joshua B.; Rosengart, Matthew R.; Forsythe, Raquel M.; Reynolds, Benjamin R.; Gestring, Mark L.; Hallinan, William M.; Peitzman, Andrew B.; Billiar, Timothy R.; Sperry, Jason L.
2016-01-01
Background Trauma is time-sensitive and minimizing prehospital (PH) time is appealing. However, most studies have not linked increasing PH time with worse outcomes, as raw PH times are highly variable. It is unclear whether specific PH time patterns affect outcomes. Our objective was to evaluate the association of PH time interval distribution with mortality. Methods Patients transported by EMS in the Pennsylvania trauma registry 2000-2013 with total prehospital time (TPT) ≥ 20 min were included. TPT was divided into three PH time intervals: response, scene, and transport time. The number of minutes in each PH time interval was divided by TPT to determine the relative proportion each interval contributed to TPT. A prolonged interval was defined as any one PH interval contributing ≥50% of TPT. Patients were classified by prolonged PH interval or no prolonged PH interval (all intervals < 50% of TPT). Patients were matched for TPT, and conditional logistic regression determined the association of mortality with PH time pattern, controlling for confounders. PH interventions were explored as potential mediators, and prehospital triage criteria were used to identify patients with time-sensitive injuries. Results There were 164,471 patients included. Patients with prolonged scene time had increased odds of mortality (OR 1.21; 95%CI 1.02–1.44, p=0.03). Prolonged response, transport, and no prolonged interval were not associated with mortality. When adjusting for mediators including extrication and PH intubation, prolonged scene time was no longer associated with mortality (OR 1.06; 0.90–1.25, p=0.50). Together these factors mediated 61% of the effect between prolonged scene time and mortality. Mortality remained associated with prolonged scene time in patients with hypotension, penetrating injury, and flail chest. Conclusions Prolonged scene time is associated with increased mortality. PH interventions partially mediate this association. 
Further study should evaluate whether these interventions drive increased mortality because they prolong scene time or by another mechanism, as reducing scene time may be a target for intervention. Level of Evidence IV, prognostic study PMID:26886000
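The classification rule from the Methods (any single interval contributing ≥ 50% of total prehospital time) can be sketched as follows, with hypothetical minute values:

```python
def classify_prehospital_pattern(response, scene, transport, threshold=0.5):
    """Label which prehospital interval, if any, dominates total prehospital time."""
    tpt = response + scene + transport
    shares = {
        "response": response / tpt,
        "scene": scene / tpt,
        "transport": transport / tpt,
    }
    for name, share in shares.items():
        if share >= threshold:
            return f"prolonged {name}"
    return "no prolonged interval"

# hypothetical: 5 min response, 25 min scene, 10 min transport
pattern = classify_prehospital_pattern(5, 25, 10)
```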
Analysis of trend changes in Northern African palaeo-climate by using Bayesian inference
NASA Astrophysics Data System (ADS)
Schütz, Nadine; Trauth, Martin H.; Holschneider, Matthias
2010-05-01
Climate variability of Northern Africa is of high interest due to the climate-evolutionary linkages under study. The reconstruction of the palaeo-climate over long time scales, including the expected linkages (> 3 Ma), is mainly accessible through proxy data from deep-sea drilling cores. Concentrating on published data sets, we try to decipher rhythms and trends to detect correlations between different proxy time series by advanced mathematical methods. Our preliminary data are dust concentrations, an indicator of climatic changes such as humidity, from ODP sites 659, 721, and 967 situated around Northern Africa. Our interest is in challenging the available time series with advanced statistical methods to detect significant trend changes and to compare different model assumptions. For that purpose, we want to avoid rescaling the time axis to obtain equidistant time steps for filtering methods. Additionally, we demand a plausible description of the errors of the estimated parameters, in terms of confidence intervals. Finally, depending on which model we restrict to, we also want insight into the parameter structure of the assumed models. To gain this information, we focus on Bayesian inference by formulating the problem as a linear mixed model, so that the expectation and deviation are of linear structure. Using the Bayesian method, we can formulate the posterior density as a function of the model parameters and calculate this probability density in the parameter space. Depending on which parameters are of interest, we analytically and numerically marginalize the posterior with respect to the remaining parameters of less interest. We apply a simple linear mixed model to calculate the posterior densities of ODP sites 659 and 721 over at most the last 5 Ma. From preliminary calculations on these data sets, we can confirm results gained by the method of breakfit regression combined with block bootstrapping ([1]). 
We obtain a significant change point around (1.63 - 1.82) Ma, which correlates with a global climate transition due to the establishment of the Walker circulation ([2]). Furthermore we detect another significant change point around (2.7 - 3.2) Ma, which correlates with the end of the Pliocene warm period (permanent El Niño-like conditions) and the onset of a colder global climate ([3], [4]). The discussion on the algorithm, the results of calculated confidence intervals, the available information about the applied model in the parameter space and the comparison of multiple change point models will be presented. [1] Trauth, M.H., et al., Quaternary Science Reviews, 28, 2009 [2] Wara, M.W., et al., Science, Vol. 309, 2005 [3] Chiang, J.C.H., Annual Review of Earth and Planetary Sciences, Vol. 37, 2009 [4] deMenocal, P., Earth and Planetary Science Letters, 220, 2004
Panek, Petr; Prochazka, Ivan
2007-09-01
This article deals with a time interval measurement device based on a surface acoustic wave (SAW) filter as a time interpolator. The operating principle is based on the fact that a transversal SAW filter excited by a short pulse can generate a finite signal with highly suppressed spectra outside a narrow frequency band. If the responses to two excitations are sampled at clock ticks, they can be precisely reconstructed from a finite number of samples and then compared so as to determine the time interval between the two excitations. We have designed and constructed a two-channel time interval measurement device which allows independent timing of two events and evaluation of the time interval between them. The device has been constructed using commercially available components. The experimental results proved the concept. We have assessed a single-shot time interval measurement precision of 1.3 ps rms, which corresponds to a time-of-arrival precision of 0.9 ps rms in each channel. The temperature drift of the measured time interval is lower than 0.5 ps/K, and the long-term stability is better than +/-0.2 ps/h. These are, to our knowledge, the best values reported for a time interval measurement device. The results are in good agreement with the error budget based on the theoretical analysis.
Induced Abortions and the Risk of Preeclampsia Among Nulliparous Women
Parker, Samantha E.; Gissler, Mika; Ananth, Cande V.; Werler, Martha M.
2015-01-01
Induced abortion (IA) has been associated with a lower risk of preeclampsia among nulliparous women, but it remains unclear whether this association differs by method (either surgical or medical) or timing of IA. We performed a nested case-control study of 12,650 preeclampsia cases and 50,600 matched control deliveries identified in the Medical Birth Register of Finland from 1996 to 2010. Data on number, method, and timing of IAs were obtained through a linkage with the Registry of Induced Abortions. Odds ratios and 95% confidence intervals were calculated. Overall, prior IA was associated with a lower risk of preeclampsia, with odds ratios of 0.9 (95% confidence interval (CI): 0.9, 1.0) for 1 prior IA and 0.7 (95% CI: 0.5, 1.0) for 3 or more IAs. Differences in the associations between IA and preeclampsia by timing and method of IA were small, with odds ratios of 0.8 (95% CI: 0.6, 1.1) for late (≥12 gestation weeks) surgical abortion and 0.9 (95% CI: 0.7, 1.2) for late medical abortion. There was no association between IA in combination with a history of spontaneous abortion and risk of preeclampsia. In conclusion, prior IA only was associated with a slight reduction in the risk of preeclampsia. PMID:26377957
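The odds ratios with 95% confidence intervals reported above are standard case-control estimates. A minimal sketch using the Woolf (log-OR) method, with hypothetical counts rather than the registry data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table.

    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# hypothetical 2x2 table
or_, lower, upper = odds_ratio_ci(10, 20, 15, 30)
```

An OR below 1 with an upper confidence limit below 1 would indicate a statistically significant reduction in risk, as for the 3-or-more-IA group above.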
Influence of an acidic beverage (Coca-Cola) on the pharmacokinetics of phenytoin in healthy rabbits.
Kondal, A; Garg, S K
2003-12-01
This study was carried out to evaluate the influence of an acidic beverage (Coca-Cola) on the pharmacokinetics of phenytoin in rabbits. In a cross-over study, phenytoin was given orally at a dose of 30 mg/kg and blood samples were taken at different intervals from 0-24 h. After a washout period of 7 days, Coca-Cola (5 ml/kg) was administered in combination with phenytoin (30 mg/kg) and blood samples were taken at various time intervals from 0-24 h. The same rabbits continued to receive Coca-Cola (5 ml/kg) for another 7 days. On the 8th day, Coca-Cola (5 ml/kg) in combination with phenytoin (30 mg/kg) was administered and blood samples were taken at similar intervals. Plasma was separated and assayed for phenytoin by high performance liquid chromatography (HPLC) and various pharmacokinetic parameters were calculated. It was concluded that an acidic beverage (Coca-Cola) increases the extent of absorption of phenytoin by significantly increasing the Cmax and AUC(0-∞) of phenytoin. These results warrant a reduction of the phenytoin dose when administered in combination with Coca-Cola to avoid toxicity. (c) 2003 Prous Science
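Extent-of-absorption parameters such as AUC are typically computed from concentration-time samples with the linear trapezoidal rule. A minimal sketch, with hypothetical sampling times and concentrations rather than the study's data:

```python
def auc_trapezoid(times, concentrations):
    """AUC(0-t) by the linear trapezoidal rule over sampled time points."""
    auc = 0.0
    for i in range(1, len(times)):
        width = times[i] - times[i - 1]
        auc += width * (concentrations[i] + concentrations[i - 1]) / 2.0
    return auc

# hypothetical plasma samples: times in hours, concentrations in mcg/ml
times = [0, 1, 2, 4, 8, 24]
conc = [0.0, 8.0, 10.0, 7.0, 4.0, 1.0]
auc = auc_trapezoid(times, conc)
```

Extrapolation to infinity (AUC(0-∞)) would additionally require dividing the last concentration by the terminal elimination rate constant.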
Interval Timing Accuracy and Scalar Timing in C57BL/6 Mice
Buhusi, Catalin V.; Aziz, Dyana; Winslow, David; Carter, Rickey E.; Swearingen, Joshua E.; Buhusi, Mona C.
2010-01-01
In many species, interval timing behavior is accurate (estimated durations are appropriate) and scalar (errors vary linearly with estimated durations). While accuracy has been previously examined, scalar timing has not yet been clearly demonstrated in house mice (Mus musculus), raising concerns about mouse models of human disease. We estimated timing accuracy and precision in C57BL/6 mice, the most widely used background strain for genetic models of human disease, in a peak-interval procedure with multiple intervals. Both when timing two intervals (Experiment 1) and three intervals (Experiment 2), C57BL/6 mice demonstrated varying degrees of timing accuracy. Importantly, both at the individual and group level, their precision varied linearly with the subjective estimated duration. Further evidence for scalar timing was obtained using an intraclass correlation statistic. This is the first report of consistent, reliable scalar timing in a sizable sample of house mice, thus validating the PI procedure as a valuable technique, the intraclass correlation statistic as a powerful test of the scalar property, and the C57BL/6 strain as a suitable background for behavioral investigations of genetically engineered mice modeling disorders of interval timing. PMID:19824777
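Scalar timing, in which precision varies linearly with the estimated duration, implies a roughly constant coefficient of variation across target intervals. A minimal sketch of that check, with hypothetical peak-time means and standard deviations:

```python
def coefficient_of_variation(mean, sd):
    """CV: timing spread relative to the estimated duration."""
    return sd / mean

# hypothetical peak-interval results for 10 s and 30 s target durations
cv_short = coefficient_of_variation(10.2, 1.5)
cv_long = coefficient_of_variation(30.5, 4.5)
# scalar timing predicts approximately equal CVs across intervals
```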
The Association Between Maternal Age and Cerebral Palsy Risk Factors.
Schneider, Rilla E; Ng, Pamela; Zhang, Xun; Andersen, John; Buckley, David; Fehlings, Darcy; Kirton, Adam; Wood, Ellen; van Rensburg, Esias; Shevell, Michael I; Oskoui, Maryam
2018-05-01
Advanced maternal age is associated with higher frequencies of antenatal and perinatal conditions, as well as a higher risk of cerebral palsy in offspring. We explore the association between maternal age and specific cerebral palsy risk factors. Data were extracted from the Canadian Cerebral Palsy Registry. Maternal age was categorized as ≥35 years of age and less than 20 years of age at the time of birth. Chi-square and multivariate logistic regressions were performed to calculate odds ratios and their 95% confidence intervals. The final sample consisted of 1391 children with cerebral palsy, with 19% of children having mothers aged 35 or older and 4% of children having mothers below the age of 20. Univariate analyses showed that mothers aged 35 or older were more likely to have gestational diabetes (odds ratio 1.9, 95% confidence interval 1.3 to 2.8), to have a history of miscarriage (odds ratio 1.8, 95% confidence interval 1.3 to 2.4), to have undergone fertility treatments (odds ratio 2.4, 95% confidence interval 1.5 to 3.9), and to have delivered by Caesarean section (odds ratio 1.6, 95% confidence interval 1.2 to 2.2). These findings were supported by multivariate analyses. Children with mothers below the age of 20 were more likely to have a congenital malformation (odds ratio 2.4, 95% confidence interval 1.4 to 4.2), which is also supported by multivariate analysis. The risk factor profiles of children with cerebral palsy vary by maternal age. Future studies are warranted to further our understanding of the compound causal pathways leading to cerebral palsy and the observed greater prevalence of cerebral palsy with increasing maternal age. Copyright © 2018 Elsevier Inc. All rights reserved.
Comparison of oncology drug approval between Health Canada and the US Food and Drug Administration.
Ezeife, Doreen A; Truong, Tony H; Heng, Daniel Y C; Bourque, Sylvie; Welch, Stephen A; Tang, Patricia A
2015-05-15
The drug approval timeline is a lengthy process that often varies between countries. The objective of this study was to delineate the Canadian drug approval timeline for oncology drugs and to compare the time to drug approval between Health Canada (HC) and the US Food and Drug Administration (FDA). In total, 54 antineoplastic drugs that were approved by the FDA between 1989 and 2012 were reviewed. For each drug, the following milestones were determined: the dates of submission and approval for both the FDA and HC and the dates of availability on provincial drug formularies in Canadian provinces and territories. The time intervals between the aforementioned milestones were calculated. Of 54 FDA-approved drugs, 49 drugs were approved by HC at the time of the current study. The median time from submission to approval was 9 months (interquartile range [IQR], 6-14.5 months) for the FDA and 12 months (IQR, 10-21.1 months) for HC (P < .0006). The time from HC approval to the placement of a drug on a provincial drug formulary was a median of 16.7 months (IQR, 5.9-27.2 months), and there was no interprovincial variability among the 5 Canadian provinces that were analyzed (P = .5). The time from HC submission to HC approval takes 3 months longer than the same time interval for the FDA. To the authors' knowledge, this is the first documentation of the time required to bring an oncology drug from HC submission to placement on a provincial drug formulary. © 2015 American Cancer Society.
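The submission-to-approval summaries above are medians with interquartile ranges, which Python's standard library can compute directly. A minimal sketch (the month values are hypothetical, not the study's data):

```python
import statistics

# hypothetical submission-to-approval times in months for a set of drugs
approval_months = [6, 7, 9, 9, 10, 12, 14, 15, 21]

median = statistics.median(approval_months)
# quartile cut points; q[0] and q[2] bound the interquartile range
q = statistics.quantiles(approval_months, n=4)
iqr_low, iqr_high = q[0], q[2]
```

Note that quartile conventions differ; `statistics.quantiles` defaults to the exclusive method, which may not match the study's software.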
de Vries, W; Wieggers, H J J; Brus, D J
2010-08-05
Element fluxes through forest ecosystems are generally based on measurements of concentrations in soil solution at regular time intervals at plot locations sampled in a regular grid. Here we present spatially averaged annual element leaching fluxes in three Dutch forest monitoring plots using a new sampling strategy in which both sampling locations and sampling times are selected by probability sampling. Locations were selected by stratified random sampling with compact geographical blocks of equal surface area as strata. In each sampling round, six composite soil solution samples were collected, consisting of five aliquots, one per stratum. The plot-mean concentration was estimated by linear regression, so that the bias due to one or more strata not being represented in the composite samples is eliminated. The sampling times were selected in such a way that the cumulative precipitation surplus of the time interval between two consecutive sampling times was constant, using an estimated precipitation surplus averaged over the past 30 years. The spatially averaged annual leaching flux was estimated by using the modeled daily water flux as an ancillary variable. An important advantage of the new method is that the uncertainty in the estimated annual leaching fluxes due to spatial and temporal variation and the resulting sampling errors can be quantified. Results of this new method were compared with the reference approach in which daily leaching fluxes were calculated by multiplying daily interpolated element concentrations with daily water fluxes and then aggregated to a year. Results show that the annual fluxes calculated with the reference method for the period 2003-2005, including all plots, elements and depths, lie within the range of the average +/-2 times the standard error of the new method in only 53% of the cases. 
Despite the differences in results, both methods indicate comparable N retention and strong Al mobilization in all plots, with Al leaching being nearly equal to the leaching of SO(4) and NO(3) with fluxes expressed in mol(c) ha(-1) yr(-1). This illustrates that Al release, which is the clearest signal of soil acidification, is mainly due to the external input of SO(4) and NO(3).
Measurement of cardiac output from dynamic pulmonary circulation time CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yee, Seonghwan, E-mail: Seonghwan.Yee@Beaumont.edu; Scalzetti, Ernest M.
Purpose: To introduce a method of estimating cardiac output from the dynamic pulmonary circulation time CT that is primarily used to determine the optimal time window of CT pulmonary angiography (CTPA). Methods: Dynamic pulmonary circulation time CT series, acquired for eight patients, were retrospectively analyzed. The dynamic CT series was acquired, prior to the main CTPA, in cine mode (1 frame/s) for a single slice at the level of the main pulmonary artery covering the cross sections of the ascending aorta (AA) and descending aorta (DA) during the infusion of iodinated contrast. The time series of contrast changes obtained for DA, which is downstream of AA, was assumed to be related to the time series for AA by convolution with a delay function. The delay time constant in the delay function, representing the average time interval between the cross sections of AA and DA, was determined by least square error fitting between the convoluted AA time series and the DA time series. The cardiac output was then calculated by dividing the volume of the aortic arch between the cross sections of AA and DA (estimated from the single-slice CT image) by the average time interval, and multiplying the result by a correction factor. Results: The mean cardiac output value for the six patients was 5.11 (l/min) (with a standard deviation of 1.57 l/min), which is in good agreement with the literature value; the data for the other two patients were too noisy for processing. Conclusions: The dynamic single-slice pulmonary circulation time CT series also can be used to estimate cardiac output.
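The core of the method is fitting the AA-to-DA transit time and dividing the intervening aortic volume by it. A minimal sketch with a pure integer-sample delay in place of the paper's full delay-function convolution, synthetic enhancement curves, a hypothetical arch volume, and the correction factor omitted:

```python
import numpy as np

def fit_transit_time(aa, da, dt=1.0, max_shift=20):
    """Least-squares fit of the sample delay that best maps the AA curve onto DA."""
    aa, da = np.asarray(aa, float), np.asarray(da, float)
    best_tau, best_err = None, np.inf
    for s in range(1, max_shift):
        shifted = np.roll(aa, s)
        shifted[:s] = aa[0]  # pad the wrapped samples with the baseline value
        err = np.sum((shifted - da) ** 2)
        if err < best_err:
            best_tau, best_err = s * dt, err
    return best_tau

t = np.arange(60.0)                        # 1 frame/s cine acquisition
aa = np.exp(-0.5 * ((t - 20) / 4) ** 2)    # synthetic AA enhancement curve
da = np.exp(-0.5 * ((t - 26) / 4) ** 2)    # DA: same curve delayed by 6 s

tau = fit_transit_time(aa, da)             # average AA-to-DA transit time, s
arch_volume_ml = 510.0                     # hypothetical AA-to-DA aortic volume
cardiac_output_lpm = arch_volume_ml / tau * 60 / 1000
```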
NASA Astrophysics Data System (ADS)
Gordon, Devin A.; DeNoyer, Lin; Meyer, Corey W.; Sweet, Noah W.; Burns, David M.; Bruckman, Laura S.; French, Roger H.
2017-08-01
Poly(ethylene-terephthalate) (PET) film is widely used in photovoltaic module backsheets for its dielectric breakdown strength, and in applications requiring high optical clarity for its high transmission in the visible region. However, PET degrades and loses optical clarity under exposure to ultraviolet (UV) irradiance, heat, and moisture. Stabilizers are often included in PET formulation to increase its longevity; however, even these are subject to degradation and further reduce optical clarity. To study the weathering-induced changes in the optical properties of PET films, samples of a UV-stabilized grade of PET were exposed to heat, moisture, and UV irradiance as prescribed by ASTM-G154 Cycle 4 in 168-hour time intervals. UV-Vis reflection and transmission spectra were collected via Multi-Angle, Polarization-Dependent, Reflection, Transmission, and Scattering (MaPd:RTS) spectroscopy after each exposure interval. The resulting spectra were used to calculate the complex index of refraction throughout the UV-Vis spectral region via an iterative optimization process based upon the Fresnel equations. The index of refraction and extinction coefficient were found to vary throughout the UV-Vis region with time under exposure. The spectra were also used to investigate changes in light scattering behavior with increasing exposure time. The intensity of scattered light was found to increase at higher angles with time under exposure.
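The iterative optimization rests on the Fresnel equations; at normal incidence, the reflectance from a complex refractive index n + ik takes a simple closed form, sketched below. The PET index value used here is a typical assumption for the visible region, not a measured result from the study:

```python
def normal_incidence_reflectance(n, k):
    """Fresnel reflectance at normal incidence from air onto a medium n + ik."""
    return ((n - 1) ** 2 + k ** 2) / ((n + 1) ** 2 + k ** 2)

# assumed visible-region index for PET; k = 0 models a non-absorbing film
r = normal_incidence_reflectance(1.57, 0.0)
```

The full multi-angle, polarization-dependent fit inverts the angle-dependent Fresnel relations for n and k at each wavelength.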
NASA Astrophysics Data System (ADS)
Forsyth, C.; Rae, I. J.; Mann, I. R.; Pakhotin, I. P.
2017-03-01
Field-aligned currents (FACs) are a fundamental component of the coupled solar wind-magnetosphere-ionosphere system. By assuming that FACs can be approximated by stationary infinite current sheets that do not change on the spacecraft crossing time, single-spacecraft magnetic field measurements can be used to estimate the currents flowing in space. By combining data from multiple spacecraft on similar orbits, these stationarity assumptions can be tested. In this technical report, we present a new technique that combines cross correlation and linear fitting of multiple spacecraft measurements to determine the reliability of the FAC estimates. We show that this technique can identify those intervals in which the currents estimated from single-spacecraft techniques are both well correlated and have similar amplitudes, thus meeting the spatial and temporal stationarity requirements. Using data from European Space Agency's Swarm mission from 2014 to 2015, we show that larger-scale currents (>450 km) are well correlated and have a one-to-one fit up to 50% of the time, whereas small-scale (<50 km) currents show similar amplitudes only 1% of the time despite there being a good correlation 18% of the time. It is thus imperative to examine both the correlation and amplitude of the calculated FACs in order to assess both the validity of the underlying assumptions and hence ultimately the reliability of such single-spacecraft FAC estimates.
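The combined cross-correlation and linear-fit check can be sketched as below, using synthetic current series rather than Swarm data. Stationarity holds when the correlation is high and the fitted slope is near one:

```python
import numpy as np

def fac_agreement(j1, j2):
    """Correlation coefficient and linear-fit slope between two FAC estimates."""
    j1, j2 = np.asarray(j1, float), np.asarray(j2, float)
    r = np.corrcoef(j1, j2)[0, 1]
    slope = np.polyfit(j1, j2, 1)[0]  # regress j2 on j1
    return r, slope

rng = np.random.default_rng(1)
j1 = rng.normal(0, 1, 500)           # synthetic large-scale FAC estimate
j2 = j1 + rng.normal(0, 0.1, 500)    # second spacecraft: same current plus noise
r, slope = fac_agreement(j1, j2)
# well correlated (r near 1) and near one-to-one (slope near 1):
# the interval would pass both stationarity checks
```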
Levonorgestrel release rates over 5 years with the Liletta® 52-mg intrauterine system.
Creinin, Mitchell D; Jansen, Rolf; Starr, Robert M; Gobburu, Joga; Gopalakrishnan, Mathangi; Olariu, Andrea
2016-10-01
To understand the potential duration of action for Liletta®, we conducted this study to estimate levonorgestrel (LNG) release rates over approximately 5½ years of product use. Clinical sites in the U.S. Phase 3 study of Liletta collected the LNG intrauterine systems (IUSs) from women who discontinued the study. We randomly selected samples within 90-day intervals after discontinuation of IUS use through 900 days (approximately 2.5 years) and 180-day intervals for the remaining duration through 5.4 years (1980 days) to evaluate residual LNG content. We also performed an initial LNG content analysis using 10 randomly selected samples from a single lot. We calculated the average ex vivo release rate using the residual LNG content over the duration of the analysis. We analyzed 64 samples within 90-day intervals (range 6-10 samples per interval) through 900 days and 36 samples within 180-day intervals (6 samples per interval) for the remaining duration. The initial content analysis averaged 52.0 ± 1.8 mg. We calculated an average initial release rate of 19.5 mcg/day that decreased to 17.0, 14.8, 12.9, 11.3 and 9.8 mcg/day after 1, 2, 3, 4 and 5 years, respectively. The 5-year average release rate is 14.7 mcg/day. The estimated initial LNG release rate and gradual decay of the estimated release rate are consistent with the target design and function of the product. The calculated LNG content and release rate curves support the continued evaluation of Liletta as a contraceptive for 5 or more years of use. Liletta LNG content and release rates are comparable to published data for another LNG 52-mg IUS. The release rate at 5 years is more than double the published release rate at 3 years with an LNG 13.5-mg IUS, suggesting continued efficacy of Liletta beyond 5 years. Copyright © 2016 Elsevier Inc. All rights reserved.
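The average ex vivo release rate follows directly from the initial and residual LNG content over the wear duration. A minimal sketch; the residual value is hypothetical, chosen only to be consistent with the reported 5-year average of 14.7 mcg/day:

```python
def average_release_rate(initial_mg, residual_mg, days):
    """Average ex vivo release rate in mcg/day from residual drug content."""
    return (initial_mg - residual_mg) * 1000.0 / days

# 52.0 mg initial content; hypothetical 25.2 mg residual after 5 years of use
rate = average_release_rate(52.0, 25.2, 5 * 365)
```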
Smith, P; Linscott, L L; Vadivelu, S; Zhang, B; Leach, J L
2016-05-01
Widening of the occipital condyle-C1 interval is the most specific and sensitive means of detecting atlanto-occipital dislocation. Recent studies attempting to define normal measurements of the condyle-C1 interval in children have varied substantially. This study was performed to test the null hypothesis that condyle-C1 interval morphology and joint measurements do not change as a function of age. Imaging review of subjects undergoing CT of the upper cervical spine for reasons unrelated to trauma or developmental abnormality was performed. Four equidistant measurements were obtained for each bilateral condyle-C1 interval on sagittal and coronal images. The cohort was divided into 7 age groups to calculate the mean, SD, and 95% CIs for the average condyle-C1 interval in both planes. The prevalence of a medial occipital condyle notch was calculated. Two hundred forty-eight joints were measured in 124 subjects with an age range of 2 days to 22 years. The condyle-C1 interval varies substantially by age. Average coronal measurements are larger and more variable than sagittal measurements. The medial occipital condyle notch is most prevalent from 1 to 12 years and is uncommon in older adolescents and young adults. The condyle-C1 interval increases during the first several years of life, is largest in the 2- to 4-year age range, and then decreases through late childhood and adolescence. A single threshold value to detect atlanto-occipital dissociation may not be sensitive and specific for all age groups. Application of this normative data to documented cases of atlanto-occipital injury is needed to determine clinical utility. © 2016 by American Journal of Neuroradiology.
VARIABLE TIME-INTERVAL GENERATOR
Gross, J.E.
1959-10-31
This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.
NASA Astrophysics Data System (ADS)
Fereidonnejad, R.; Sadeghi, H.; Ghambari, M.
2018-03-01
In this work, the effect of multi-phonon excitation on heavy-ion fusion reactions is studied, and fusion barrier distributions are obtained at energy intervals near and below the Coulomb barrier for the 16,17,18O + 16O reactions. The structure and deformation of the projectile nuclei are also examined. Because the calculations reproduce the available experimental data, they can predict the behavior of these reactions in energy intervals where experimental measurements are not available. In addition, the S-factor for these reactions has been calculated. The results show that the structure and deformation of the projectile nucleus are important factors. The S-factor, obtained from the coupled-channel calculations for the {}^{16}O + {}^{16}O, {}^{17}O + {}^{16}O and {}^{18}O + {}^{16}O reactions, showed good agreement with the experimental data and had a maximum value at an energy near 5, 4.5 and 4 MeV, respectively.
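The astrophysical S-factor mentioned above factors the Coulomb-barrier penetrability out of the measured cross section, S(E) = E sigma(E) exp(2 pi eta). A minimal sketch using the common nonrelativistic approximation 2*pi*eta ~= 31.29 * Z1*Z2 * sqrt(mu/E[keV]); the cross-section value below is invented for illustration, not data from the paper.

```python
import math

def s_factor(sigma_barn, e_mev, z1, z2, mu_amu):
    """Astrophysical S-factor S(E) = E * sigma(E) * exp(2*pi*eta), with the
    Sommerfeld term approximated as 2*pi*eta ~= 31.29 * Z1*Z2 * sqrt(mu/E[keV])
    (mu = reduced mass in amu). Removes Coulomb suppression from sigma."""
    two_pi_eta = 31.29 * z1 * z2 * math.sqrt(mu_amu / (e_mev * 1000.0))
    return e_mev * sigma_barn * math.exp(two_pi_eta)

# Illustrative 16O + 16O point (the cross-section value is hypothetical):
s = s_factor(sigma_barn=1e-9, e_mev=7.0, z1=8, z2=8, mu_amu=8.0)
```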
Krall, Scott P; Cornelius, Angela P; Addison, J Bruce
2014-03-01
To analyze the correlation between the many different emergency department (ED) treatment metric intervals and determine whether the metrics directly impacted by the physician correlate to the "door to room" interval in an ED (an interval determined by ED bed availability). Our null hypothesis was that the cause of the variation in delay to receiving a room was multifactorial and does not correlate to any one metric interval. We collected daily interval averages from the ED information system, Meditech©. Patient flow metrics were collected on a 24-hour basis. We analyzed the relationship between the time intervals that make up an ED visit and the "arrival to room" interval using simple correlation (Pearson correlation coefficients). Summary statistics of industry-standard metrics were also computed by dividing the intervals into 2 groups, based on the average ED length of stay (LOS) from the National Hospital Ambulatory Medical Care Survey: 2008 Emergency Department Summary. Simple correlation analysis showed that the doctor-to-discharge interval had no correlation to the "door to room" (waiting room time) interval (correlation coefficient [CC] = 0.000, p = 0.96). "Room to doctor" had a low correlation to "door to room" (CC = 0.143), while "decision to admitted patients departing the ED" had a moderate correlation of 0.29 (p < 0.001). "New arrivals" (daily patient census) had a strong correlation to longer "door to room" times (CC = 0.657, p < 0.001). The "door to discharge" times had a very strong correlation (CC = 0.804, p < 0.001) to the extended "door to room" time. Physician-dependent intervals had minimal correlation to the variation in arrival-to-room time. The "door to room" interval was a significant component of the variation in "door to discharge", i.e., LOS. The hospital-influenced "admit decision to hospital bed" interval, i.e., hospital inpatient capacity, had a correlation to delayed "door to room" time.
The other major factor affecting department bed availability was the total number of patients per day. Its correlation to increasing "door to room" time likewise reflects the effect of the availability of ED resources (beds) on patient evaluation time. The time it took for a patient to receive a room appeared to depend more on system resources, such as beds in the ED and in the hospital, than on the physician.
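The Pearson correlation analysis used above takes only a few lines of NumPy; the daily interval averages below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical daily averages (minutes) for two ED metrics (illustrative only):
door_to_room  = np.array([30.0, 45.0, 25.0, 60.0, 40.0, 55.0])
door_to_disch = np.array([180.0, 220.0, 170.0, 260.0, 210.0, 240.0])

# Pearson correlation coefficient between the two interval series
cc = np.corrcoef(door_to_room, door_to_disch)[0, 1]
```

With these made-up series the two intervals move together, so the coefficient comes out close to 1, mirroring the strong "door to room" vs. "door to discharge" association reported above.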
Measurements of serum non-ceruloplasmin copper by a direct fluorescent method specific to Cu(II).
Squitti, Rosanna; Siotto, Mariacristina; Cassetta, Emanuele; El Idrissi, Imane Ghafir; Colabufo, Nicola A
2017-08-28
Meta-analyses have indicated a breakdown of copper homeostasis in the sporadic form of Alzheimer's disease (AD), comprising copper decreases within the brain and copper increases in the blood and in the pool not bound to ceruloplasmin (non-Cp Cu, also known in the literature as "free" copper). The calculated non-Cp Cu (Walshe's) index has many limitations. A direct fluorescent method for non-Cp Cu detection has been developed, and data are presented herein. The study included samples from 147 healthy subjects, 36 stable mild cognitive impairment (MCI) patients and 89 AD patients, who were tested for non-Cp Cu through the direct method, total serum copper, ceruloplasmin concentration and o-dianisidine ceruloplasmin activity. The indirect non-Cp Cu Walshe's index was also calculated. The direct method was linear (0.9-5.9 μM), precise (within-laboratory coefficient of variation of 9.7% for low and 7.1% for high measurements), and had good recovery. A reference interval (0-1.9 μM) was determined parametrically in 147 healthy controls (27-84 years old). The variation of non-Cp Cu was evaluated according to age and sex. Non-Cp Cu was 1.5 times higher in AD patients (relative to the upper value of the reference interval) than in healthy controls. Healthy, MCI and AD subjects were differentiated through the direct non-Cp Cu method [area under the curve (AUC) = 0.755]. Considering a 95% specificity and a 1.91 μmol/L cut-off, the sensitivity was 48.3% (95% confidence interval: 38%-58%). The likelihood ratio (LR) was 9.94 for positive test results (LR+) and 0.54 for negative test results (LR-). The direct fluorescent test reliably and accurately measures non-Cp Cu, thereby helping determine the probability of having AD.
Abreau, Kerstin; Callan, Christine; Kottaiyan, Ranjini; Zhang, Aizhong; Yoon, Geunyoung; Aquavella, James V; Zavislan, James; Hindman, Holly B
2016-01-01
To compare the temperatures of the ocular surface, eyelid, and periorbital skin of normal eyes with those of Sjögren's syndrome (SS) eyes, evaporative dry eyes (EDE), and aqueous-deficient dry eyes (ADDE). Ten eyes were analyzed in each age-matched group (normal, SS, EDE, and ADDE). A noninvasive infrared thermal camera captured two-dimensional images in three regions of interest (ROI) in each of three areas (the ocular surface, the upper eyelid, and the periorbital skin) within a controlled environmental chamber. Mean temperatures in each ROI were calculated from the videos. Ocular surface time-segmented cooling rates were calculated over a 5-s interblink interval. Relative to normal eyes, dry eyes had lower initial central ocular surface temperatures (OSTs) (SS -0.71°C, EDE -0.55°C, ADDE -0.95°C, KW P<.0001) and lower central upper lid temperatures (SS -0.24°C, ADDE -0.51°C, and EDE -0.54°C, KW P<.0001). ADDE eyes had the lowest initial central OST (P<.0001), while EDE eyes had the lowest central lid temperature and lower periorbital temperatures (P<.0001). Over the 5-s interblink interval, the greatest rate of temperature loss occurred following eyelid opening, but varied by group (normal -0.52, SS -0.73, EDE -0.63, and ADDE -0.75°C/s). The ADDE group also had the most substantial heat loss over the 5-s interblink interval (-0.97°C). Differences in OST may be related to thermal differences in the lids and periorbita along with an altered tear film. Thermography of the ocular surface, lids, and surrounding tissues may help to differentiate between different etiologies of dry eye. Copyright © 2016 Elsevier Inc. All rights reserved.
Intact interval timing in circadian CLOCK mutants.
Cordes, Sara; Gallistel, C R
2008-08-28
While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval-timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/- and -/- mutant male mice in a peak-interval procedure with 10-s and 20-s criteria. The mutant mice were more active than their wild-type littermates, but showed no reliable deficits in the accuracy or precision of their timing. This suggests that expression of the CLOCK protein is not necessary for normal interval timing.
Evaluation of heart rate variability indices using a real-time handheld remote ECG monitor.
Singh, Swaroop S; Carlson, Barbara W; Hsiao, Henry S
2007-12-01
Studies of retrospective electrocardiogram (ECG) recordings of patients during cardiac arrest have shown significant changes in heart rate variability (HRV) indices prior to the onset of cardiac arrhythmia. Early detection of these changes in HRV indices increases the chances of a successful medical intervention by widening the response time window. A portable, handheld remote ECG monitor designed in this research detects the QRS complex and calculates short-term HRV indices in real time. QRS detection on ECG recordings of subjects from the MIT-BIH Arrhythmia Database yielded a mean sensitivity of 99.34% and a specificity of 99.31%. ECG recordings from normal subjects and subjects with congestive heart failure were used to identify differences in HRV indices. An increase in heart rate, high-frequency spectral power (HFP), total spectral power, and the ratio of HFP to low-frequency spectral power (LFP), and a decrease in the root mean square sum of RR differences, were observed. No differences were found in the standard deviation of normal-to-normal intervals between adjacent R-waves, LFP, or very-low-frequency spectral power. Based on these indices, additional analytical calculations could be made to provide early warnings of impending cardiac conditions.
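The time-domain HRV indices referred to above, the standard deviation of NN intervals (SDNN) and the root mean square of successive RR differences (RMSSD), can be sketched in a few lines; the RR series below is illustrative, not patient data.

```python
import numpy as np

def sdnn(rr_ms):
    """Standard deviation of normal-to-normal (NN) intervals, in ms."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

rr = [812.0, 798.0, 825.0, 790.0, 805.0]  # illustrative RR intervals (ms)
```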
Characterization factors for thermal pollution in freshwater aquatic environments.
Verones, Francesca; Hanafiah, Marlia Mohd; Pfister, Stephan; Huijbregts, Mark A J; Pelletier, Gregory J; Koehler, Annette
2010-12-15
To date, the impact of thermal emissions has not been addressed in life cycle assessment, despite the narrow thermal tolerance of most aquatic species. A method to derive characterization factors for the impact of cooling water discharges on aquatic ecosystems was developed that uses space- and time-explicit integration of the fate and effects of water temperature changes. The fate factor is calculated with a 1-dimensional steady-state model and reflects the residence time of heat emissions in the river. The effect factor specifies the loss of species diversity per unit of temperature increase and is based on a species sensitivity distribution of temperature tolerance intervals for various aquatic species. As an example, time-explicit characterization factors were calculated for the cooling water discharge of a nuclear power plant in Switzerland, quantifying the impact on aquatic ecosystems of the rivers Aare and Rhine. The relative importance of the impact of these cooling water discharges was compared with other impacts in life cycle assessment. We found that thermal emissions are relevant for aquatic ecosystems compared to other stressors, such as chemicals and nutrients. For the case of nuclear electricity investigated, thermal emissions contribute between 3% and over 90% to Ecosystem Quality damage.
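Structurally, the characterization factor described above is the product of the fate factor and the effect factor; a minimal sketch with invented factor values (not the paper's results), included to make the composition explicit.

```python
def characterization_factor(fate, effect):
    """CF for a thermal emission: fate factor (temperature increase integrated
    over space and time per unit heat emitted) times the effect factor
    (fraction of species potentially lost per unit temperature increase)."""
    return fate * effect

# Hypothetical factors; the units sketched here are assumptions for illustration:
cf = characterization_factor(fate=0.002, effect=0.05)  # (degC*m3*day/MJ) * (PDF/degC)
```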
Mandel, Micha; Gauthier, Susan A; Guttmann, Charles R G; Weiner, Howard L; Betensky, Rebecca A
2007-12-01
The expanded disability status scale (EDSS) is an ordinal score that measures progression in multiple sclerosis (MS). Progression is defined as reaching EDSS of a certain level (absolute progression) or increasing of one point of EDSS (relative progression). Survival methods for time to progression are not adequate for such data since they do not exploit the EDSS level at the end of follow-up. Instead, we suggest a Markov transitional model applicable for repeated categorical or ordinal data. This approach enables derivation of covariate-specific survival curves, obtained after estimation of the regression coefficients and manipulations of the resulting transition matrix. Large sample theory and resampling methods are employed to derive pointwise confidence intervals, which perform well in simulation. Methods for generating survival curves for time to EDSS of a certain level, time to increase of EDSS of at least one point, and time to two consecutive visits with EDSS greater than three are described explicitly. The regression models described are easily implemented using standard software packages. Survival curves are obtained from the regression results using packages that support simple matrix calculation. We present and demonstrate our method on data collected at the Partners MS center in Boston, MA. We apply our approach to progression defined by time to two consecutive visits with EDSS greater than three, and calculate crude (without covariates) and covariate-specific curves.
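The survival-curve construction described above can be sketched with a toy transition matrix: make the progression state absorbing and read the event probability off successive powers of the matrix. The three-state matrix below is invented for illustration, not an estimate from the MS data.

```python
import numpy as np

# Toy 3-state transition matrix over EDSS bands {low, mid, high}; illustrative only.
# The 'high' state is made absorbing so reaching it defines the event.
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.85, 0.10],
              [0.00, 0.00, 1.00]])

def survival_curve(P, start, event_state, n_visits):
    """P(event state not yet reached by visit t), via powers of the transition matrix."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    surv = []
    for _ in range(n_visits):
        dist = dist @ P          # propagate the state distribution one visit
        surv.append(1.0 - dist[event_state])
    return surv

curve = survival_curve(P, start=0, event_state=2, n_visits=5)
```

Covariate-specific curves follow the same manipulation after plugging regression-estimated transition probabilities into P.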
NASA Astrophysics Data System (ADS)
Strzałkowski, Piotr; Ścigała, Roman; Szafulera, Katarzyna
2018-04-01
Problems connected with predicting post-mining terrain deformations are discussed, in particular the summation of horizontal strain over long time intervals and the prediction of linear discontinuous deformations. Accounting for the transient values of deformation associated with the development of the extraction field has become an important problem in recent years. An example analysis is presented of the influence of planned extraction on two characteristic locations of a building structure. A calculation approach is proposed that uses a transient deformation model to describe the influence of extraction advance on the value of the coefficient of extraction rate c (time factor), according to the authors' original empirical formula.
Continuous Blood Pressure Monitoring in Daily Life
NASA Astrophysics Data System (ADS)
Lopez, Guillaume; Shuzo, Masaki; Ushida, Hiroyuki; Hidaka, Keita; Yanagimoto, Shintaro; Imai, Yasushi; Kosaka, Akio; Delaunay, Jean-Jacques; Yamada, Ichiro
Continuous monitoring of blood pressure in daily life could improve early detection of cardiovascular disorders, as well as promote healthcare. Conventional ambulatory blood pressure monitoring (ABPM) equipment can measure blood pressure at regular intervals for 24 hours, but is limited by long measuring time, low sampling rate, and constrained measuring posture. In this paper, we demonstrate a new method for continuous real-time measurement of blood pressure during daily activities. Our method is based on blood pressure estimation from a pulse wave velocity (PWV) calculation, whose formula we improved to take into account changes in the inner diameter of blood vessels. Blood pressure estimates obtained with the new method were more precise during exercise, and more accurate, than those of the conventional PWV method.
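A common baseline for PWV-based estimation (not the authors' improved formula, which additionally models vessel inner diameter) computes PWV as arterial path length over pulse transit time and maps it to systolic pressure through a per-subject linear calibration. The calibration constants and measurements below are illustrative assumptions.

```python
def pwv_m_per_s(distance_m, ptt_s):
    """Pulse wave velocity: arterial path length divided by pulse transit time."""
    return distance_m / ptt_s

def estimate_sbp(pwv, a=2.0, b=80.0):
    """Linear PWV-to-systolic-BP calibration SBP = a*PWV + b.
    a and b are per-subject calibration constants (hypothetical values)."""
    return a * pwv + b

# Hypothetical measurement: 0.5 m path, 80 ms transit time
sbp = estimate_sbp(pwv_m_per_s(distance_m=0.5, ptt_s=0.08))
```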
The Anaesthetic-ECT Time Interval in Electroconvulsive Therapy Practice--Is It Time to Time?
Gálvez, Verònica; Hadzi-Pavlovic, Dusan; Wark, Harry; Harper, Simon; Leyden, John; Loo, Colleen K
2016-01-01
Because most common intravenous anaesthetics used in ECT have anticonvulsant properties, their plasma-brain concentration at the time of seizure induction might affect seizure expression. The quality of ECT seizure expression has been repeatedly associated with efficacy outcomes. The time interval between the anaesthetic bolus injection and the ECT stimulus (anaesthetic-ECT time interval) determines the anaesthetic plasma-brain concentration when the ECT stimulus is administered. The aim of this study was to examine the effect of the anaesthetic-ECT time interval on ECT seizure quality and duration. The anaesthetic-ECT time interval was recorded in 771 ECT sessions (84 patients). Right unilateral brief-pulse ECT was applied. Anaesthesia comprised propofol (1-2 mg/kg) and succinylcholine (0.5-1.0 mg/kg). Seizure quality indices (slow wave onset, amplitude, regularity, stereotypy and post-ictal suppression) and duration were rated with a structured rating scale by a single blinded trained rater. Linear mixed-effects models analysed the effect of the anaesthetic-ECT time interval on seizure quality indices, controlling for propofol dose (mg), ECT charge (mC), ECT session number, days between ECT sessions, age (years), initial seizure threshold (mC) and concurrent medication. Longer anaesthetic-ECT time intervals led to significantly higher quality seizures (p < 0.001 for amplitude, regularity, stereotypy and post-ictal suppression). These results suggest that the anaesthetic-ECT time interval is an important factor to consider in ECT practice. This time interval should be extended to as long as practically possible to facilitate the production of better quality seizures. Close collaboration between the anaesthetist and the psychiatrist is essential. Copyright © 2015 Elsevier Inc. All rights reserved.
Dexter, Franklin; Epstein, Richard H; Lee, John D; Ledolter, Johannes
2009-03-01
Operating room (OR) whiteboards (status displays) communicate times remaining for ongoing cases to perioperative stakeholders (e.g., postanesthesia care unit, anesthesiologists, holding area, and control desks). Usually, scheduled end times are shown for each OR. However, these displays are inaccurate for predicting the time that remains in a case. Once a case scheduled for 2 h has been on-going for 1.5 h, the median time remaining is not 0.5 h but longer, and the amount longer differs among procedures. We derived the conditional Bayesian lower prediction bound of a case's duration, conditional on the minutes of elapsed OR time. Our derivations make use of the posterior predictive distribution of OR times following an exponential of a scaled Student t distribution that depends on the scheduled OR time and several parameters calculated from historical case duration data. The statistical method was implemented using Structured Query Language (SQL) running on the anesthesia information management system (AIMS) database server. In addition, AIMS workstations were sent instant messages displaying a pop-up dialog box asking for anesthesia providers' estimates for remaining times. The dialogs caused negotiated interruptions (i.e., the anesthesia provider could reply immediately, keep the dialog displayed, or defer response). There were no announcements, education, or efforts to promote buy-in. After a case had been in the OR longer than scheduled, the median remaining OR time for the case changes little over time (e.g., 35 min left at 2:30 pm and also at 3:00 pm while the case was still on-going). However, the remaining time differs substantially among surgeons and scheduled procedure(s) (16 min longer [10th percentile], 35 min [50th], and 86 min [90th]). We therefore implemented an automatic method to estimate the times remaining in cases. The system was operational for >119 of each day's 120 5-min intervals. 
When instant message dialogs appearing on AIMS workstations were used to elicit estimates of times remaining from anesthesia providers, acknowledgment was on average within 1.2 min (95% confidence interval [CI] 1.1-1.3 min). The 90th percentile of latencies was 6.5 min (CI: 4.4-7.0 min). For cases taking nearly as long as or longer than scheduled, each 1 min progression of OR time reduces the median time remaining in a case by <1 min. We implemented automated calculation of times remaining for every case at a 29 OR hospital.
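The conditional-prediction idea above can be sketched by Monte Carlo: draw case durations from a stand-in distribution (a lognormal here, in place of the authors' exponentiated scaled Student t posterior predictive), keep only the cases still ongoing at the elapsed time, and take the median remaining time. All parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in duration model: lognormal with its median at the 2-h scheduled time.
# (The sigma is a hypothetical illustration, not a fitted parameter.)
durations_h = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=100_000)

def median_time_remaining(durations, elapsed_h):
    """Median remaining time among simulated cases still ongoing at elapsed_h."""
    ongoing = durations[durations > elapsed_h]
    return float(np.median(ongoing)) - elapsed_h

# A 2-h case that has already run 1.5 h: the median remaining time exceeds 0.5 h,
# matching the observation in the abstract.
rem = median_time_remaining(durations_h, elapsed_h=1.5)
```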
A new NASA/MSFC mission analysis global cloud cover data base
NASA Technical Reports Server (NTRS)
Brown, S. C.; Jeffries, W. R., III
1985-01-01
A global cloud cover data set, derived from the USAF 3D NEPH Analysis, was developed for use in climate studies and for Earth viewing applications. This data set contains a single parameter - total sky cover - separated in time by 3 or 6 hr intervals and in space by approximately 50 n.mi. Cloud cover amount is recorded for each grid point (of a square grid) by a single alphanumeric character representing each 5 percent increment of sky cover. The data are arranged in both quarterly and monthly formats. The data base currently provides daily, 3-hr observed total sky cover for the Northern Hemisphere from 1972 through 1977 less 1976. For the Southern Hemisphere, there are data at 6-hr intervals for 1976 through 1978 and at 3-hr intervals for 1979 and 1980. More years of data are being added. To validate the data base, the percent frequency of ≤ 0.3 and ≥ 0.8 cloud cover was compared with ground-observed cloud amounts at several locations, with generally good agreement. Mean or other desired cloud amounts can be calculated for any time period and any size area from a single grid point to a hemisphere. The data base is especially useful in evaluating the consequence of cloud cover on Earth viewing space missions. The temporal and spatial frequency of the data allow simulations that closely approximate any projected viewing mission. No adjustments are required to account for cloud continuity.
Origin of orbital periods in the sedimentary relative paleointensity records
NASA Astrophysics Data System (ADS)
Xuan, Chuang; Channell, James E. T.
2008-08-01
Orbital cycles with 100 kyr and/or 41 kyr periods, detected in some sedimentary normalized remanence (relative paleointensity) records by power spectral analysis or wavelet analysis, have been attributed either to orbital forcing of the geodynamo, or to lithologic contamination. In this study, local wavelet power spectra (LWPS) with significance tests have been calculated for seven relative paleointensity (RPI) records from different regions of the world. The results indicate that orbital periods (100 kyr and/or 41 kyr) are significant in some RPI records during certain time intervals, and are not significant in others. Time intervals where orbital periods are significant are not consistent among the RPI records, implying that orbital periods in these RPI records may not have a common origin such as orbital forcing on the geodynamo. Cross-wavelet power spectra (|XWT|) and squared wavelet coherence (WTC) between RPI records and orbital parameters further indicate that common power exists at orbital periods but is not significantly coherent, and exhibits variable phase relationships, implying that orbital periods in RPI records are not caused directly by orbital forcing. Similar analyses for RPI records and benthic oxygen isotope records from the same sites show significant coherence and constant in-phase relationships during time intervals where orbital periods were significant in the RPI records, indicating that orbital periods in the RPI records are most likely due to climatic 'contamination'. Although common power exists at orbital periods for RPI records and their normalizers with significant coherence during certain time intervals, phase relationships imply that 'contamination' (at orbital periods) is not directly due to the normalizers. Orbital periods are also significant in the NRM intensity records, and 'contamination' in RPI records can be attributed to incomplete normalization of the NRM records. 
Further tests indicate that 'contamination' is apparently not directly related to physical properties such as density or carbonate content, or to the grain size proxy κARM/κ. However, WTC between RPI records and the grain size proxy ARM/IRM implies that ARM/IRM does reflect the 'contamination' in some RPI records. It appears that orbital periods were introduced into the NRM records (and have not been normalized when calculating RPI records) through magnetite grain size variations reflected in the ARM/IRM grain size proxy. The orbital power in ARM/IRM for some North Atlantic sites is probably derived from bottom-current velocity variations that are orbitally modulated and are related to the vigor of thermohaline circulation and the production of North Atlantic Deep Water (NADW). In the case of ODP Site 983, the orbital power in RPI appears to exhibit a shift from 41-kyr to 100-kyr period at the mid-Pleistocene climate transition (~750 ka), reinforcing the climatic origin of these orbital periods. RPI records from the Atlantic and Pacific oceans, and RPI records with orbital periods eliminated by band-pass filters, are highly comparable with each other in the time domain, and are coherent and in-phase in time-frequency space, especially at non-orbital periods, indicating that 'contamination', although present (at orbital periods), is not debilitating to these RPI records as a global signal that is primarily of geomagnetic origin.
Teare, J A; Schwark, W S; Shin, S J; Graham, D L
1985-12-01
After a single IV or IM dose of a long-acting oxytetracycline (OTC) preparation, serum concentrations were determined at various times in the ring-necked pheasant, great horned owl, and Amazon parrot. Pharmacokinetic parameters, including serum half-life (t1/2) and apparent volume of distribution (Vd), were calculated from the OTC concentration-time curves for each species and route of administration. Significant differences (P less than 0.05) were found in the t1/2 and Vd parameters between species and routes of administration. Dosage regimens to maintain a minimum OTC concentration of 5 micrograms/ml of serum were calculated from the t1/2 and Vd values obtained, using steady-state pharmacokinetics. In the pheasant, the calculated mean IV dose was 23 mg/kg of body weight every 6 hours, whereas the mean IM dose was 43 mg/kg every 24 hours. The mean IM dose was 16 mg/kg every 24 hours for the owl and 58 mg/kg every 24 hours for the parrot. The small volumes required for treatment, the long dosing interval obtainable, and the broad spectrum of antimicrobial activity of the long-acting OTC preparation studied offered major advantages over other antibiotics commonly used in treating avian species.
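The steady-state dosing calculation referred to above can be sketched for an IV bolus with first-order elimination: to hold the trough at C_min over a dosing interval tau, dose = C_min * Vd * (e^(k*tau) - 1) with k = ln 2 / t1/2. The parameter values below are illustrative assumptions, not the measured avian values.

```python
import math

def iv_dose_mg_per_kg(c_min, vd_l_per_kg, t_half_h, tau_h):
    """Steady-state IV bolus dose (mg/kg) keeping the trough at c_min (mg/L):
    dose = Cmin * Vd * (exp(k*tau) - 1), with k = ln(2)/t_half."""
    k = math.log(2) / t_half_h
    return c_min * vd_l_per_kg * (math.exp(k * tau_h) - 1.0)

# Hypothetical parameters: trough 5 mg/L (= 5 mcg/mL), Vd 0.5 L/kg,
# half-life 3 h, dosing every 6 h:
dose = iv_dose_mg_per_kg(c_min=5.0, vd_l_per_kg=0.5, t_half_h=3.0, tau_h=6.0)
```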
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
Steyer, Andrew J.; Van Vleck, Erik S.
2018-04-13
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for the typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps from three to two, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties and requires the same calculations as the ADI method. Generally, a small number of arithmetic operations, and hence a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward in improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was achieved by both methods.
Martin, Thomas J.; Grigg, Amanda; Kim, Susy A.; Ririe, Douglas G.; Eisenach, James C.
2014-01-01
Background The 5 choice serial reaction time task (5CSRTT) is commonly used to assess attention in rodents. We sought to develop a variant of the 5CSRTT that would speed training to objective success criteria, and to test whether this variant could determine attention capability in each subject. New Method Fisher 344 rats were trained to perform a variant of the 5CSRTT in which the duration of visual cue presentation (cue duration) was titrated between trials based upon performance. The cue duration was decreased when the subject made a correct response, or increased with incorrect responses or omissions. Additionally, test day challenges were provided consisting of lengthening the intertrial interval and inclusion of a visual distracting stimulus. Results Rats readily titrated the cue duration to less than 1 sec in 25 training sessions or less (mean ± SEM, 22.9 ± 0.7), and the median cue duration (MCD) was calculated as a measure of attention threshold. Increasing the intertrial interval increased premature responses, decreased the number of trials completed, and increased the MCD. Decreasing the intertrial interval and time allotted for consuming the food reward demonstrated that a minimum of 3.5 sec is required for rats to consume two food pellets and successfully attend to the next trial. Visual distraction in the form of a 3 Hz flashing light increased the MCD and both premature and time out responses. Comparison with existing method The titration variant of the 5CSRTT is a useful method that dynamically measures attention threshold across a wide range of subject performance, and significantly decreases the time required for training. Task challenges produce similar effects in the titration method as reported for the classical procedure. Conclusions The titration 5CSRTT method is an efficient training procedure for assessing attention and can be utilized to assess the limit in performance ability across subjects and various schedule manipulations. PMID:25528113
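The titration rule described above (shorten the cue after a correct response, lengthen it after an incorrect response or omission) is a 1-up/1-down staircase; a minimal sketch with invented parameters:

```python
def titrate_cue(initial_s, step_s, min_s, outcomes):
    """Adjust cue duration trial by trial: shorten after a correct response,
    lengthen after an incorrect response or omission (1-up/1-down staircase)."""
    cue = initial_s
    history = []
    for correct in outcomes:
        cue = max(min_s, cue - step_s) if correct else cue + step_s
        history.append(cue)
    return history

# Hypothetical session: five trials starting from a 5-s cue, 0.5-s steps
hist = titrate_cue(initial_s=5.0, step_s=0.5, min_s=0.5,
                   outcomes=[True, True, False, True, True])
```

The median of the converged cue durations plays the role of the MCD attention-threshold measure described above.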
The direct and indirect costs of Dravet Syndrome.
Whittington, Melanie D; Knupp, Kelly G; Vanderveen, Gina; Kim, Chong; Gammaitoni, Arnold; Campbell, Jonathan D
2018-03-01
The objective of this study was to estimate the annual direct and indirect costs associated with Dravet Syndrome (DS). A survey was electronically administered to the caregivers of patients with DS treated at Children's Hospital Colorado. Survey domains included healthcare utilization of the patient with DS and DS caregiver work productivity and activity impairment. Patient healthcare utilization was measured using modified questions from the National Health Interview Survey; caregiver work productivity and activity impairment were measured using modified questions from the Work Productivity and Activity Impairment questionnaire. Direct costs were calculated by multiplying the caregiver-reported healthcare utilization rates by the mean unit cost for each healthcare utilization category. Indirect costs included lost productivity, income loss, and lost leisure time. The indirect costs were a function of caregiver-reported hours spent caregiving and an hourly unit cost. The survey was emailed to 60 DS caregivers, of whom 34 responded (57% response rate). Direct costs on average were $27,276 (95% interval: $15,757, $41,904) per patient with DS. Hospitalizations ($11,565 a year) and in-home medical care visits ($9894 a year) were substantial cost drivers. Additionally, caregivers reported extensive time spent providing care to an individual with DS. This caregiver time resulted in average annual indirect costs of $81,582 (95% interval: $57,253, $110,151), resulting in an average total annual financial burden of $106,378 (95% interval: $78,894, $137,906). Dravet Syndrome results in substantial healthcare utilization, financial burden, and time commitment. Establishing evidence on the financial burden of DS is essential to understanding the overall impact of DS, identifying potential areas for support needs, and assessing the impact of novel treatments as they become available.
Based on the study findings, in-home visits, hospitalizations, and lost productivity and leisure time of caregivers are key domains for DS economic evaluations. Future research should extend these estimates to include the potential additional healthcare utilization of the DS caregiver. Copyright © 2018 Elsevier Inc. All rights reserved.
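The costing arithmetic described above (caregiver-reported utilization rates times mean unit costs, plus caregiving hours times an hourly unit cost) can be sketched as follows; every rate and unit cost below is a made-up placeholder, not a value from the study:

```python
def direct_costs(annual_use, unit_costs):
    """Annual direct costs: caregiver-reported utilization counts
    multiplied by the mean unit cost of each utilization category."""
    return sum(annual_use[k] * unit_costs[k] for k in annual_use)

def indirect_costs(caregiving_hours, hourly_unit_cost):
    """Annual indirect costs: caregiver time valued at an hourly rate."""
    return caregiving_hours * hourly_unit_cost

# Hypothetical caregiver report, for illustration only.
use = {'hospitalizations': 2, 'in_home_visits': 12, 'outpatient_visits': 6}
cost = {'hospitalizations': 5000.0, 'in_home_visits': 800.0, 'outpatient_visits': 150.0}
total_burden = direct_costs(use, cost) + indirect_costs(1500, 25.0)
```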
Dry-heat Resistance of Bacillus Subtilis Var. Niger Spores on Mated Surfaces
NASA Technical Reports Server (NTRS)
Simko, G. J.; Devlin, J. D.; Wardle, M. D.
1971-01-01
Bacillus subtilis var. niger spores were placed on the surfaces of test coupons manufactured from typical spacecraft materials including stainless steel, magnesium, titanium, and aluminum. These coupons were then juxtaposed at the inoculated surfaces and subjected to test pressures of 0, 1000, 5000, and 10,000 psi. Tests were conducted in ambient, nitrogen, and helium atmospheres. While under the test pressure condition, the spores were exposed to 125 C for intervals of 5, 10, 20, 50, or 80 min. Survivor data were subjected to a linear regression analysis that calculated decimal reduction times.
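The survivor-curve regression mentioned above is conventionally the log-linear model: the decimal reduction time (D-value) is the negative reciprocal of the slope of log10(survivors) versus exposure time. A minimal sketch with synthetic data (the true D of 20 min is hypothetical, not a measured result):

```python
import math

def decimal_reduction_time(times, survivors):
    """D-value from a survivor curve: ordinary least-squares slope of
    log10(survivors) vs. exposure time; D = -1/slope, the minutes of
    exposure needed for a one-log (90%) reduction."""
    ys = [math.log10(n) for n in survivors]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(times, ys)) \
            / sum((x - mx) ** 2 for x in times)
    return -1.0 / slope

# Synthetic curve at the abstract's exposure intervals, built from a
# starting population of 1e6 spores and a true D of 20 min.
t = [5, 10, 20, 50, 80]
N = [10 ** (6 - ti / 20) for ti in t]
D = decimal_reduction_time(t, N)
```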
NASA Astrophysics Data System (ADS)
Brosche, Peter
Von Wahl was an active member of the group of independent scholars who worked in the German states in Goethe's time and who performed astrometric and geodetic observations and calculations. Here we present some cornerstones of his life, long stretches of which were spent in Allstedt, south of the Harz, and in Halberstadt. A small scientific estate has been preserved at the Universitäts-Sternwarte Bonn; within it, a lecture on secular variations of the ecliptic is of singular interest.
The ISEE-3 ULEWAT: Flux tape description and heavy ion fluxes 1978-1984. [plasma diagnostics
NASA Technical Reports Server (NTRS)
Mason, G. M.; Klecker, B.
1985-01-01
The ISEE ULEWAT FLUX tapes contain ULEWAT and ISEE pool tape data summarized over relatively long time intervals (1 hr) in order to compact the data set into an easily usable size. (Roughly 3 years of data fit onto one 1600 BPI 9-track magnetic tape.) In making the tapes, corrections were made to the ULEWAT basic data tapes to remove rate spikes and to account for changes in instrument response, so that, to a large extent, instrument fluxes can be calculated easily from the FLUX tapes without further consideration of instrument performance.
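The abstract notes that fluxes can be computed directly from the FLUX-tape rates once the response corrections are applied. As a hedged sketch of the generic conversion only (not the ULEWAT calibration), a differential flux divides counts by geometric factor, accumulation time, and energy passband; all numbers below are hypothetical:

```python
def differential_flux(counts, geom_factor_cm2sr, accum_time_s, delta_e):
    """Generic differential particle flux, counts / (G * t * dE), in
    1/(cm^2 sr s MeV/nucleon). Instrument-specific efficiency and
    dead-time corrections are assumed to be already folded in."""
    return counts / (geom_factor_cm2sr * accum_time_s * delta_e)

# Hypothetical 1-hr accumulation: 180 counts, G = 1.2 cm^2 sr,
# 0.5 MeV/nucleon energy passband.
j = differential_flux(180, 1.2, 3600.0, 0.5)
```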
NASA Astrophysics Data System (ADS)
Kasinskii, V.; Kasinskaia, L. I.
2005-06-01
The angular velocities of the chromosphere and photosphere are calculated for 1987-1990 on the basis of heliographic coordinates of chromospheric flares and sunspots (Solar Geophysical Data). The time resolution adopted is 0.25 year. The mean equatorial rotation rates of the chromosphere and photosphere practically coincide. However, the differential coefficients b in the chromosphere and photosphere behave quite differently: the difference bch - bph changes sign regularly, from ``+'' to ``-'', over a two-year interval. Thus, the idea of torsion-like oscillations of the ``chromosphere-photosphere'' system is supported.
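The differential coefficient b referred to above comes from the standard solar rotation law ω(φ) = a + b·sin²φ, fitted to tracer rotation rates. A minimal least-squares sketch, assuming synthetic rates in deg/day rather than the paper's data:

```python
import math

def fit_rotation_law(latitudes_deg, omegas):
    """Least-squares fit of the standard rotation law
    omega = a + b * sin^2(phi); b is the differential coefficient
    compared between chromosphere and photosphere in the abstract."""
    xs = [math.sin(math.radians(p)) ** 2 for p in latitudes_deg]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(omegas) / n
    b = sum((x - w_my) * 0 for x, w_my in []) or \
        sum((x - mx) * (w - my) for x, w in zip(xs, omegas)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Synthetic tracer rotation rates (deg/day) built with a = 14.4, b = -2.8.
lats = [0, 10, 20, 30, 40]
om = [14.4 - 2.8 * math.sin(math.radians(p)) ** 2 for p in lats]
a, b = fit_rotation_law(lats, om)
```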
Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models
ERIC Educational Resources Information Center
Doebler, Anna; Doebler, Philipp; Holling, Heinz
2013-01-01
The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
40 CFR 1066.820 - Composite calculations for FTP exhaust emissions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... composite gaseous test results as a mass-weighted value, e [emission]-FTPcomp, in grams per mile using the... UDDS test interval (generally known as bag 1 and bag 2), in grams. D ct = the measured driving distance... emissions determined from the hot-start UDDS test interval in grams. This is the hot-stabilized portion from...
Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size
ERIC Educational Resources Information Center
Shieh, Gwowen
2015-01-01
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
Jungheim, Michael; Busche, Andre; Miller, Simone; Schilling, Nicolas; Schmidt-Thieme, Lars; Ptok, Martin
2016-10-15
After swallowing, the upper esophageal sphincter (UES) needs a certain amount of time to return from maximum pressure to the resting condition. Disturbances of sphincter function not only during the swallowing process but also in this phase of pressure restitution may lead to globus sensation or dysphagia. Since UES pressures do not decrease in a linear or asymptotic manner, it is difficult to determine the exact time when the resting pressure is reached, even when using high resolution manometry (HRM). To overcome this problem a Machine Learning model was established to objectively determine the UES restitution time (RT) and moreover to collect physiological data on sphincter function after swallowing. HRM-data of 15 healthy participants performing 10 swallows each were included. After manual annotation of the RT interval by two swallowing experts, data were transferred to the Machine Learning model, which applied a sequence labeling modeling approach based on logistic regression to learn and objectivize the characteristics of all swallows. Individually computed RT values were then compared with the annotated values. Estimates of the RT were generated by the Machine Learning model for all 150 swallows. When annotated by swallowing experts mean RT of 11.16s±5.7 (SD) and 10.04s±5.74 were determined respectively, compared to model-generated values from 8.91s±3.71 to 10.87s±4.68 depending on model selection. The correlation score for the annotated RT of both examiners was 0.76 and 0.63 to 0.68 for comparison of model predicted values. Restitution time represents an important physiologic swallowing parameter not previously considered in HRM-studies of the UES, especially since disturbances of UES restitution may increase the risk of aspiration. The data presented here show that it takes approximately 9 to 11s for the UES to come to rest after swallowing. 
Based on maximal RT values, we demonstrate that an interval of 25-30s in between swallows is necessary until the next swallow is initiated. This should be considered in any further HRM-studies designed to evaluate the characteristics of individual swallows. The calculation model enables a quick and reproducible determination of the time it takes for the UES to come to rest after swallowing (RT). The results of the calculation are partially independent of the input of the investigator. Adding more swallows and integrating additional parameters will improve the Machine Learning model in the future. By applying similar models to other swallowing parameters of the pharynx and UES, such as the relaxation time of the UES or the activity time during swallowing, a complete automatic evaluation of HRM-data of a swallow should be possible. Copyright © 2016 Elsevier Inc. All rights reserved.
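The paper's sequence-labeling model is not reproduced here; as a hedged contrast, a naive rule-based RT estimator simply waits for the pressure to stay near the resting value for a sustained window. The tolerance and hold duration below are illustrative assumptions:

```python
def restitution_time(pressures, dt, resting, tol=2.0, hold_s=1.0):
    """Naive rule-based UES restitution time: elapsed time until the
    pressure first stays within `tol` mmHg of the resting pressure for
    `hold_s` seconds. `tol` and `hold_s` are illustrative assumptions,
    and this is a deliberate simplification of the sequence-labeling
    model described in the text."""
    need = int(round(hold_s / dt))
    run = 0
    for i, p in enumerate(pressures):
        run = run + 1 if abs(p - resting) <= tol else 0
        if run >= need:
            return (i - need + 1) * dt  # start of the sustained quiet run
    return None  # sphincter never settled within this trace

# Hypothetical post-swallow trace sampled at 2 Hz, resting pressure 10 mmHg.
rt = restitution_time([40, 30, 20, 15, 11, 10, 10, 10, 10], dt=0.5, resting=10.0)
```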
Digital implementation of the TF30-P-3 turbofan engine control
NASA Technical Reports Server (NTRS)
Cwynar, D. S.; Batterton, P. G.
1975-01-01
The standard hydromechanical control modes for the TF30-P-3 engine were implemented on a digital process control computer. Programming methods are described, and a method is presented to solve stability problems associated with fast-response dynamic loops contained within the exhaust nozzle control. A modification of the exhaust nozzle control to provide for either velocity or position servoactuation systems is discussed. Transient response of the digital control was evaluated by tests on a real-time hybrid simulation of the TF30-P-3 engine. It is shown that the deadtime produced by the calculation time delay between sampling and final output is more significant to transient response than the effects associated with sampling rate alone. For the main fuel control, extended update and calculation times resulted in a lengthened transient response to throttle bursts from idle to intermediate with an increase in high pressure compressor stall margin. Extremely long update intervals of 250 msec could be achieved without instability. Update extension for the exhaust nozzle control resulted in a delayed response of the afterburner light-off detector and exhaust nozzle overshoot with resulting fan oversuppression. Long update times of 150 msec caused failure of the control due to a false indication by the blowout detector.
NASA Astrophysics Data System (ADS)
Žaknić-Ćatović, Ana; Gough, William A.
2018-04-01
A climatological observing window (COW) is defined as the time frame over which continuous or extreme air temperature measurements are collected. A 24-h time interval, ending at 00 UTC or shifted to end at 06 UTC, has been associated with difficulties in characterizing daily temperature extrema. A fixed 24-h COW used to obtain the temperature minima can misidentify them because the time discretization fragments a single night into two successive nighttime periods. The correct identification of air temperature extrema is achievable using a COW that identifies the daily minimum over a single nighttime period and the maximum over a single daytime period, as determined by sunrise and sunset. Because hourly air temperature observations are commonly unavailable, the accuracy of the mean temperature estimate depends on the accuracy with which the diurnal extrema are determined. Qualitative and quantitative criteria were used to examine the impact of the COW on detecting daily air temperature extrema. The timing of the 24-h observing window occasionally affects the determination of daily extrema through a mischaracterization of the diurnal minima and, by extension, can lead to errors in the daily mean temperature. Hourly air temperature data for 1987 to 2014, obtained from the Toronto Buttonville Municipal Airport weather station, were used to analyse COW impacts on the detection of daily temperature extrema and the calculation of annual temperature averages based on such extrema.
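The fragmentation problem described above can be made concrete with a toy series: a fixed window ending at midnight splits one night in two, so the day-1 "minimum" misses the true nocturnal minimum that falls after 00:00. All values are illustrative, not station data:

```python
# Hourly temperatures for 48 h (illustrative values, not station data).
# One continuous night runs from 20:00 on day 1 to 06:00 on day 2, and
# the true nocturnal minimum (-5) falls at 01:00 on day 2.
temps = [2] * 20 + [0, -1, -2, -3, -4, -5, -4, -3, -2, -1] + [3] * 18

# Fixed 24-h COW ending at midnight: day 1 sees only the first half of
# the night and reports -3 as its daily minimum.
day1_fixed = min(temps[0:24])

# Sunrise/sunset-based COW: minimum over the single, unfragmented
# nighttime period (hours 20..30 of the series).
night_min = min(temps[20:31])
```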
Inverse analysis and regularisation in conditional source-term estimation modelling
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.
2014-05-01
Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals are not dependent upon a previous solution and better predict characteristics for higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
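A minimal pure-Python sketch of zeroth-order Tikhonov regularisation, the first of the two methods compared above; the tiny near-collinear system stands in for the ill-conditioned CSE kernel and is not the paper's discretisation:

```python
def solve(M, v):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def tikhonov(A, b, lam):
    """Zeroth-order Tikhonov regularisation: minimise
    ||Ax - b||^2 + lam^2 ||x||^2 via the regularised normal equations
    (A^T A + lam^2 I) x = A^T b."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(n)]
           for i in range(n)]
    for i in range(n):
        AtA[i][i] += lam * lam
    Atb = [sum(A[r][i] * b[r] for r in range(m)) for i in range(n)]
    return solve(AtA, Atb)

# Near-collinear columns mimic the ill-conditioned CSE kernel; the
# regularised solution stays close to the minimum-norm answer [1, 1].
A = [[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]]
b = [2.0, 2.0001, 1.9999]
x = tikhonov(A, b, lam=1e-3)
```

The penalty term damps the wildly oscillating components that the near-zero singular values would otherwise amplify, which is exactly the role it plays in the CSE inversion.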
NASA Astrophysics Data System (ADS)
Hood, R.; Woodroffe, J. R.; Morley, S.; Aruliah, A. L.
2017-12-01
Using the CHAMP fluxgate magnetometer to calculate field-aligned current (FAC) densities and magnetic latitudes, with SuperMAG ground magnetometers analogously providing ground geomagnetic disturbances (GMD) magnetic perturbations and latitudes, we probe FAC locations and strengths as predictors of GMD locations and strengths. We also study the relationships between solar wind drivers and global magnetospheric activity, and both FACs and GMDs using IMF Bz and the Sym-H index. We present an event study of the 22-29 July 2004 storm time interval, which had particularly large GMDs given its storm intensity. We find no correlation between FAC and GMD magnitudes, perhaps due to CHAMP orbit limitations or ground magnetometer coverage. There is, however, a correlation between IMF Bz and nightside GMD magnitudes, supportive of their generation via tail reconnection. IMF Bz is also correlated with dayside FAC and GMD magnetic latitudes, indicating solar wind as an initial driver. The ring current influence increases during the final storm, with improved correlations between the Sym-H index and both FAC magnetic latitudes and GMD magnitudes. Sym-H index correlations may only be valid for higher intensity storms; a statistical analysis of many storms is needed to verify this.
NASA Astrophysics Data System (ADS)
Amendt, Jens; Krettek, Roman; Zehner, Richard
Necrophagous insects are important in the decomposition of cadavers. The close association between insects and corpses and the use of insects in medicocriminal investigations is the subject of forensic entomology. The present paper reviews the historical background of this discipline, important postmortem processes, and discusses the scientific basis underlying attempts to determine the time interval since death. Using medical techniques, such as the measurement of body temperature or analysing livor and rigor mortis, time since death can only be accurately measured for the first two or three days after death. In contrast, by calculating the age of immature insect stages feeding on a corpse and analysing the necrophagous species present, postmortem intervals from the first day to several weeks can be estimated. These entomological methods may be hampered by difficulties associated with species identification, but modern DNA techniques are contributing to the rapid and authoritative identification of necrophagous insects. Other uses of entomological data include the toxicological examination of necrophagous larvae from a corpse to identify and estimate drugs and toxicants ingested by the person when alive and the proof of possible postmortem manipulations. Forensic entomology may even help in investigations dealing with people who are alive but in need of care, by revealing information about cases of neglect.
Influence of acidic beverage (Coca-Cola) on pharmacokinetics of ibuprofen in healthy rabbits.
Kondal, Amit; Garg, S K
2003-11-01
The study was aimed at determining the effect of Coca-Cola on the pharmacokinetics of ibuprofen in rabbits. In a cross-over study, ibuprofen was given orally in a dose of 56 mg/kg, prepared as 0.5% suspension in carboxymethyl cellulose (CMC), and blood samples (1 ml) were drawn at different time intervals from 0-12 hr. After a washout period of 7 days, Coca-Cola in a dose of 5 ml/kg was administered along with ibuprofen (56 mg/kg) and blood samples were drawn from 0-12 hr. To these rabbits, 5 ml/kg Coca-Cola was administered once daily for another 7 days. On the 8th day, Coca-Cola (5 ml/kg) along with ibuprofen (56 mg/kg), prepared as a suspension, was administered and blood samples (1 ml each) were drawn at similar time intervals. Plasma was separated and assayed for ibuprofen by HPLC, and various pharmacokinetic parameters were calculated. The Cmax and AUC0-∞ of ibuprofen were significantly increased after single and multiple doses of Coca-Cola, thereby indicating an increased extent of absorption of ibuprofen. The results warrant a reduction in the daily dosage and dosing frequency of ibuprofen when administered with Coca-Cola.
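The pharmacokinetic parameters named above (Cmax, AUC) are conventionally obtained non-compartmentally; a minimal sketch using the linear trapezoidal rule on a hypothetical plasma profile (not the study's data):

```python
def pk_parameters(times_h, conc):
    """Cmax, Tmax, and AUC(0-t) by the linear trapezoidal rule. The
    extrapolated tail needed for AUC(0-inf), C_last / kel, is omitted
    for brevity."""
    cmax = max(conc)
    tmax = times_h[conc.index(cmax)]
    auc = sum((conc[i] + conc[i + 1]) / 2.0 * (times_h[i + 1] - times_h[i])
              for i in range(len(times_h) - 1))
    return cmax, tmax, auc

# Hypothetical 0-12 hr plasma profile (ug/ml), for illustration only.
t = [0, 0.5, 1, 2, 4, 8, 12]
c = [0.0, 20.0, 35.0, 30.0, 18.0, 6.0, 2.0]
cmax, tmax, auc = pk_parameters(t, c)
```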
The Simulation Realization of Pavement Roughness in the Time Domain
NASA Astrophysics Data System (ADS)
XU, H. L.; He, L.; An, D.
2017-10-01
Given the needs of dynamic studies of the vehicle-pavement system and of simulated vibration-table tests, realistically simulating pavement roughness is an important guarantee that calculations and tests reflect the actual situation. Using the power spectral density function, the simulation of pavement roughness can be realized by the inverse Fourier transform. The main idea of this method is that the spectrum amplitude and a random phase are obtained separately from the power spectrum, and the pavement roughness is then obtained in the time domain through the inverse fast Fourier transform (IFFT). In the process, the sampling interval (Δl) was 0.1 m and the number of sampling points (N) was 4096, which satisfied the accuracy requirements. Using this method, simulated pavement roughness profiles (grades A-H) were obtained in the time domain.
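A sketch of the synthesis step, using the sine-superposition form that is numerically equivalent to taking amplitudes from the PSD and random phases before an inverse transform. The spectral shape Gq(n) = Gq(n0)·(n/n0)^-2 and the value Gq(n0) = 64e-6 m³ are assumed ISO-style grade inputs, not values from the paper:

```python
import math, random

def road_profile(length_m=409.6, dl=0.1, n0=0.1, gq_n0=64e-6, seed=1):
    """Pavement roughness from a power spectral density
    Gq(n) = Gq(n0) * (n / n0)**-2: each spatial frequency n_i gets
    amplitude sqrt(2 * Gq(n_i) * dn) and a uniformly random phase, and
    the profile is their superposition (the sine-superposition
    equivalent of the IFFT construction)."""
    random.seed(seed)
    num = int(round(length_m / dl))        # sampling points N
    dn = 1.0 / length_m                    # spatial-frequency resolution
    freqs = [dn * (i + 1) for i in range(num // 2)]
    comps = [(math.sqrt(2.0 * gq_n0 * (n / n0) ** -2 * dn),
              random.uniform(0.0, 2.0 * math.pi)) for n in freqs]
    xs = [i * dl for i in range(num)]
    z = [sum(a * math.cos(2.0 * math.pi * n * x + p)
             for (a, p), n in zip(comps, freqs)) for x in xs]
    return xs, z

# Short 25.6 m stretch at the abstract's 0.1 m spacing (the paper uses
# N = 4096 over 409.6 m; shortened here to keep the direct sum cheap).
xs, z = road_profile(length_m=25.6)
```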
Data preparation techniques for a perinatal psychiatric study based on linked data.
Xu, Fenglian; Hilder, Lisa; Austin, Marie-Paule; Sullivan, Elizabeth A
2012-06-08
In recent years there has been an increase in the use of population-based linked data. However, there is little literature that describes the method of linked data preparation. This paper describes the method for merging data, calculating the statistical variable (SV), recoding psychiatric diagnoses and summarizing hospital admissions for a perinatal psychiatric study. The data preparation techniques described in this paper are based on linked birth data from the New South Wales (NSW) Midwives Data Collection (MDC), the Register of Congenital Conditions (RCC), the Admitted Patient Data Collection (APDC) and the Pharmaceutical Drugs of Addiction System (PHDAS). The master dataset is the meaningfully linked data which include all or major study data collections. The master dataset can be used to improve the data quality, calculate the SV and can be tailored for different analyses. To identify hospital admissions in the periods before pregnancy, during pregnancy and after birth, a statistical variable of time interval (SVTI) needs to be calculated. The methods and SPSS syntax for building a master dataset, calculating the SVTI, recoding the principal diagnoses of mental illness and summarizing hospital admissions are described. Linked data preparation, including building the master dataset and calculating the SV, can improve data quality and enhance data function.
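The SVTI idea above, anchoring each admission to the linked birth date so admissions can be binned into before-pregnancy, during-pregnancy, and after-birth periods, can be sketched as follows. The paper gives SPSS syntax; this is a hedged Python equivalent, and the fixed 280-day gestation is an illustrative simplification:

```python
from datetime import date

def svti_days(event_date, birth_date):
    """Statistical variable of time interval (SVTI): signed days between
    a linked hospital admission and the birth; negative means the
    admission preceded the birth."""
    return (event_date - birth_date).days

def admission_period(admit_date, birth_date, gestation_days=280):
    """Bin an admission relative to pregnancy. The fixed 280-day
    gestation is an illustrative simplification; real linked data would
    use the recorded gestational age."""
    d = svti_days(admit_date, birth_date)
    if d < -gestation_days:
        return 'before pregnancy'
    if d < 0:
        return 'during pregnancy'
    return 'after birth'

period = admission_period(date(2010, 3, 1), date(2010, 7, 15))
```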
Intact Interval Timing in Circadian CLOCK Mutants
Cordes, Sara; Gallistel, C. R.
2008-01-01
While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/− and −/− mutant male mice in a peak-interval procedure with 10 and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing. PMID:18602902
Risk factors of suicide mortality among multiple attempters: A national registry study in Taiwan.
Chen, I-Ming; Liao, Shih-Cheng; Lee, Ming-Been; Wu, Chia-Yi; Lin, Po-Hsien; Chen, Wei J
2016-05-01
Little is known about the risk factors of suicide mortality among multiple attempters. This study aims to investigate the predictors of suicide mortality in a prospective cohort of attempters in Taiwan, focusing on the time interval and suicide method change between the last two nonfatal attempts. The representative data retrieved from the National Suicide Surveillance System (NSSS) were linked with the National Mortality Database to identify the causes of death in multiple attempters during 2006-2008. Cox proportional hazards models were applied to calculate the hazard ratios for the predictors of suicide. Among the 55,560 attempters, 6485 (11.7%) had survived attempts ranging from one to 11 times; 861 (1.5%) eventually died by suicide. Multiple attempters were characterized by female sex (OR = 1.56, p < 0.0001), nonreceipt of the national aftercare service (OR = 1.62, p < 0.0001), and current contact with mental health services (OR = 3.17, p < 0.0001). Most multiple attempters who survived hanging (68.1%) and gas poisoning (61.9%) chose the same method in the following fatal episode. Predictors of suicide death were identified as male sex, older age (≥ 45 years), a shorter interval between the last two nonfatal attempts, and not maintaining methods of low lethality across them. Receipt of nationwide aftercare was associated with a lower risk of suicide, but the effect was not significant. The time interval between the last two nonfatal attempts and alteration in the lethality of suicide method were significant factors for completed suicide. Risk assessment involving these two factors may be necessary for multiple attempters in different clinical settings. Effective strategies for suicide prevention emphasizing this high-risk population should be developed in the future. Copyright © 2015. Published by Elsevier B.V.