Sample records for time point measures

  1. Performance of various branch-point tolerant phase reconstructors with finite time delays and measurement noise

    NASA Astrophysics Data System (ADS)

    Zetterlind, Virgil E., III; Magee, Eric P.

    2002-06-01

    This study extends branch point tolerant phase reconstructor research to examine the effect of finite time delays and measurement error on system performance. Branch point tolerant phase reconstruction is particularly applicable to atmospheric laser weapon and communication systems, which operate in extended turbulence. We examine the relative performance of a least squares reconstructor, a least squares plus hidden phase reconstructor, and a Goldstein branch point reconstructor for various correction time delays and measurement noise scenarios. Performance is evaluated using a wave-optics simulation that models a 100 km atmospheric propagation of a point source beacon to a transmit/receive aperture. Phase-only corrections are then calculated using the various reconstructor algorithms and applied to an outgoing uniform field. Point Strehl is used as the performance metric. Results indicate that while time delays and measurement noise reduce the performance of branch point tolerant reconstructors, these reconstructors can still outperform least squares implementations in many cases. We also show that branch point detection becomes the limiting factor in measurement-noise-corrupted scenarios.

  2. Comparison of Travel-Time and Amplitude Measurements for Deep-Focusing Time-Distance Helioseismology

    NASA Astrophysics Data System (ADS)

    Pourabdian, Majid; Fournier, Damien; Gizon, Laurent

    2018-04-01

    The purpose of deep-focusing time-distance helioseismology is to construct seismic measurements that have a high sensitivity to the physical conditions at a desired target point in the solar interior. With this technique, pairs of points on the solar surface are chosen such that acoustic ray paths intersect at this target (focus) point. Considering acoustic waves in a homogeneous medium, we compare travel-time and amplitude measurements extracted from the deep-focusing cross-covariance functions. Using a single-scattering approximation, we find that the spatial sensitivity of deep-focusing travel times to sound-speed perturbations is zero at the target location and maximum in a surrounding shell. This is unlike the deep-focusing amplitude measurements, which have maximum sensitivity at the target point. We compare the signal-to-noise ratio for travel-time and amplitude measurements for different types of sound-speed perturbations, under the assumption that noise is solely due to the random excitation of the waves. We find that, for highly localized perturbations in sound speed, the signal-to-noise ratio is higher for amplitude measurements than for travel-time measurements. We conclude that amplitude measurements are a useful complement to travel-time measurements in time-distance helioseismology.

  3. Estimation of Initial and Response Times of Laser Dew-Point Hygrometer by Measurement Simulation

    NASA Astrophysics Data System (ADS)

    Matsumoto, Sigeaki; Toyooka, Satoru

    1995-10-01

    The initial and the response times of the laser dew-point hygrometer were evaluated by measurement simulation. The simulation was based on loop computations of the surface temperature of a plate with dew deposition, the quantity of dew deposited, and the intensity of scattered light from the surface at each short interval of measurement. The initial time was defined as the time necessary for the hygrometer to reach a temperature within ±0.5 °C of the measured dew point from the start time of measurement, and the response time was defined analogously for stepwise dew-point changes of +5 °C and -5 °C. The simulation results are in approximate agreement with the recorded temperature and scattered-light intensity of the hygrometer. The evaluated initial time ranged from 0.3 min to 5 min over the temperature range from 0 °C to 60 °C, and the response time was evaluated to be from 0.2 min to 3 min.

  4. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.

    PubMed

    Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang

    2017-01-01

    Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points together with the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements of a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.

  5. Alignment of time-resolved data from high throughput experiments.

    PubMed

    Abidi, Nada; Franke, Raimo; Findeisen, Peter; Klawonn, Frank

    2016-12-01

    To better understand the dynamics of the underlying processes in cells, it is necessary to take measurements over a time course. Modern high-throughput technologies are often used for this purpose to measure the behavior of cell products like metabolites, peptides, proteins, [Formula: see text]RNA or mRNA at different points in time. Compared to classical time series, the number of time points is usually very limited and the measurements are taken at irregular time intervals. The main reasons for this are the costs of the experiments and the fact that the dynamic behavior usually shows a strong reaction and fast changes shortly after a stimulus and then slowly converges to a certain stable state. Another reason might simply be missing values. It is common to repeat the experiments and to have replicates in order to carry out a more reliable analysis. The ideal assumptions that the initial stimulus really started at exactly the same time for all replicates and that the replicates are perfectly synchronized are seldom satisfied. Therefore, there is a need to first adjust or align the time-resolved data before further analysis is carried out. Dynamic time warping (DTW) is one of the common alignment techniques for time series data with equidistant time points. In this paper, we modified the DTW algorithm so that it can align sequences with measurements at different, non-equidistant time points with large gaps in between. This type of data is usually known as time-resolved data, characterized by irregular time intervals between measurements as well as non-identical time points for different replicates. The new algorithm can easily be used to align time-resolved data from high-throughput experiments and to cope with problems such as the scarcity of time points and noise in the measurements. We propose a modified DTW method that adapts to the requirements imposed by time-resolved data through the use of monotone cubic interpolation splines. Our approach provides a nonlinear alignment of two sequences that need to have neither equidistant time points nor measurements at identical time points. The proposed method is evaluated with artificial as well as real data. The software is available as an R package tra (Time-Resolved data Alignment), freely available at: http://public.ostfalia.de/klawonn/tra.zip
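The alignment strategy described can be sketched in outline: resample each irregularly sampled replicate onto a shared grid, then apply classic dynamic time warping. This is a hypothetical minimal illustration, not the authors' tra package; for brevity it uses piecewise-linear interpolation where the paper uses monotone cubic interpolation splines, and all function names and data are invented.

```python
def interp(times, values, t):
    """Piecewise-linear interpolation (the paper uses monotone cubic splines)."""
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    for k in range(1, len(times)):
        if t <= times[k]:
            w = (t - times[k - 1]) / (times[k] - times[k - 1])
            return values[k - 1] + w * (values[k] - values[k - 1])

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def align_irregular(t1, v1, t2, v2, n_grid=20):
    """Resample two time-resolved replicates onto a shared grid, then apply DTW."""
    lo, hi = max(t1[0], t2[0]), min(t1[-1], t2[-1])
    grid = [lo + (hi - lo) * k / (n_grid - 1) for k in range(n_grid)]
    s1 = [interp(t1, v1, t) for t in grid]
    s2 = [interp(t2, v2, t) for t in grid]
    return dtw_distance(s1, s2)
```

Two identical replicates align with zero cost; a replicate whose measurements were taken at shifted time points yields a positive cost that the warping then minimizes.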

  6. Test-retest reliability of the irrational performance beliefs inventory.

    PubMed

    Turner, M J; Slater, M J; Dixon, J; Miller, A

    2018-02-01

    The irrational performance beliefs inventory (iPBI) was developed to measure irrational beliefs within performance domains such as sport, academia, business, and the military. Past research indicates that the iPBI has good construct, concurrent, and predictive validity, but the test-retest reliability of the iPBI has not yet been examined. Therefore, in the present study the iPBI was administered to university sport and exercise students (n = 160) and academy soccer athletes (n = 75) at three time points. Time point two occurred 7 days after time point one, and time point three occurred 21 days after time point two. Social desirability was also measured. Repeated-measures MANCOVAs, intra-class coefficients, and Pearson's (r) correlations demonstrate that the iPBI has good test-retest reliability, with iPBI scores remaining stable across the three time points. Pearson's correlation coefficients revealed no relationships between the iPBI and social desirability, indicating that the iPBI is not highly susceptible to response bias. The results are discussed with reference to the continued usage and development of the iPBI, and future research recommendations relating to the investigation of irrational performance beliefs are proposed.

  7. Legibility Evaluation Using Point-of-regard Measurement

    NASA Astrophysics Data System (ADS)

    Saito, Daisuke; Saito, Keiichi; Saito, Masao

    Web site visibility has become important because of the rapid spread of the World Wide Web, and combinations of foreground and background colors are crucial in providing high visibility. In our previous studies, the visibility of several web-safe color combinations was examined using a psychological method. In those studies, simple stimuli were used because of experimental restrictions. In this paper, the legibility of sentences on web sites was examined using a psychophysiological method, point-of-regard measurement, to obtain additional practical data. Ten people with normal color vision, aged 21 to 29, were recruited. The number of characters per line was kept the same on every page, and the four representative achromatic web-safe colors, #000000, #666666, #999999 and #CCCCCC, were examined. The reading time per character and the gaze time per line were obtained from point-of-regard measurement, and the normalized reading and gaze times of the three gray colors were calculated and compared. The results showed that reading and gaze times lengthen at the same ratio as contrast decreases. Therefore, the legibility of color combinations can be estimated by point-of-regard measurement.

  8. Study protocol to examine the effects of spaceflight and a spaceflight analog on neurocognitive performance: extent, longevity, and neural bases.

    PubMed

    Koppelmans, Vincent; Erdeniz, Burak; De Dios, Yiri E; Wood, Scott J; Reuter-Lorenz, Patricia A; Kofman, Igor; Bloomberg, Jacob J; Mulavara, Ajitkumar P; Seidler, Rachael D

    2013-12-18

    Long duration spaceflight (i.e., 22 days or longer) has been associated with changes in sensorimotor systems, resulting in difficulties that astronauts experience with posture control, locomotion, and manual control. The microgravity environment is an important causal factor for spaceflight-induced sensorimotor changes. Whether spaceflight also affects other central nervous system functions such as cognition is as yet largely unknown, but is important for the health and performance of crewmembers both in- and post-flight. We are therefore conducting a controlled prospective longitudinal study to investigate the effects of spaceflight on the extent, longevity, and neural bases of sensorimotor and cognitive performance changes. Here we present the protocol of our study. This study includes three groups (astronauts, bed rest subjects, ground-based control subjects), for each of which the design is a single group with repeated measures. The effects of spaceflight on the brain will be investigated in astronauts who will be assessed at two time points before, three time points during, and four time points following a spaceflight mission of six months. To parse out the effect of microgravity from the overall effects of spaceflight, we investigate the effects of seventy days of head-down-tilt bed rest. Bed rest subjects will be assessed at two time points before, two time points during, and three time points after bed rest. A third group of ground-based controls will be measured at four time points to assess the reliability of our measures over time. For all participants and at all time points, except in flight, measures of neurocognitive performance, fine motor control, gait, balance, structural MRI (T1, DTI), task fMRI, and functional connectivity MRI will be obtained. In flight, astronauts will complete some of the tasks that they complete pre- and post-flight, including tasks measuring spatial working memory, sensorimotor adaptation, and fine motor performance. Potential changes over time and associations between cognition, motor behavior, and brain structure and function will be analyzed. This study explores how spaceflight-induced brain changes impact functional performance. This understanding could aid in the design of targeted countermeasures to mitigate the negative effects of long-duration spaceflight.

  9. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter.

    PubMed

    Liu, Wanli

    2017-03-08

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their combined applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus the time delay calibration cannot be computed directly. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and the iterated sigma point Kalman filter (ISPKF), which combines the advantages of both. The ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated.
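The sigma point filtering step can be illustrated, in a deliberately simplified scalar form, as an unscented update that refines an estimate of a constant time delay from noisy observations. This is a hypothetical sketch of the sigma point mechanics only, with an identity measurement model and invented numbers; the paper's ISPKF additionally iterates the update and is driven by ICP-derived correspondences.

```python
import math

def sigma_point_update(x, P, z, R, kappa=2.0):
    """One unscented (sigma point) Kalman update for a scalar state x with
    covariance P, observation z, observation noise R, and h(x) = x."""
    n = 1
    lam = kappa  # scaling parameter for the 1-D case
    s = math.sqrt((n + lam) * P)
    pts = [x, x + s, x - s]                           # sigma points
    w = [lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)]
    zs = pts                                          # identity measurement model
    z_pred = sum(wi * zi for wi, zi in zip(w, zs))    # predicted measurement
    Pzz = sum(wi * (zi - z_pred) ** 2 for wi, zi in zip(w, zs)) + R
    Pxz = sum(wi * (pi - x) * (zi - z_pred) for wi, pi, zi in zip(w, pts, zs))
    K = Pxz / Pzz                                     # Kalman gain
    return x + K * (z - z_pred), P - K * Pzz * K

# Estimate a constant delay of 0.05 s from noisy observations (invented data).
x, P = 0.0, 1.0
for z in [0.047, 0.052, 0.049, 0.051, 0.050]:
    x, P = sigma_point_update(x, P, z, R=1e-4)
# x converges toward 0.05 while the covariance P shrinks with each update.
```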

  10. Seasonal variations in body composition, maximal oxygen uptake, and gas exchange threshold in cross-country skiers.

    PubMed

    Polat, Metin; Korkmaz Eryılmaz, Selcen; Aydoğan, Sami

    2018-01-01

    In order to ensure that athletes achieve their highest performance levels during competitive seasons, monitoring their long-term performance data is crucial for understanding the impact of ongoing training programs and evaluating training strategies. The present study was thus designed to investigate the variations in body composition, maximal oxygen uptake (VO2max), and gas exchange threshold values of cross-country skiers across training phases throughout a season. In total, 15 athletes who participate in international cross-country ski competitions voluntarily took part in this study. The athletes underwent incremental treadmill running tests at 3 different time points over a period of 1 year. The first measurements were obtained in July, during the first preparation period; the second measurements were obtained in October, during the second preparation period; and the third measurements were obtained in February, during the competition period. Body weight, body mass index (BMI), and body fat (%), as well as VO2max and gas exchange threshold, measured using the V-slope method during the incremental running tests, were assessed at all 3 time points. The collected data were analyzed using the SPSS 20 software package. Significant differences between the measurements were assessed using Friedman's two-way analysis of variance with a post hoc option. The athletes' body weights and BMI measurements at the third time point were significantly lower compared with the results of the second measurement (p < 0.001). Moreover, the incremental running test time was significantly higher at the third measurement, compared with both the first (p < 0.05) and the second (p < 0.01) measurements. Similarly, the running speed during the test was significantly higher at the third measurement time point compared with the first measurement time point (p < 0.05). Body fat (%), time to reach the gas exchange threshold, running speed at the gas exchange threshold, VO2max, oxygen consumption at the gas exchange threshold level (VO2GET), maximal heart rate (HRmax), and heart rate at the gas exchange threshold level (HRGET) did not significantly differ between the measurement time points (p > 0.05). VO2max and gas exchange threshold values recorded during the third measurements, the timing of which coincided with the competitive season of the cross-country skiers, did not significantly change, but incremental running test time and running speed significantly increased while body weight and BMI significantly decreased. These results indicate that the cross-country skiers developed a tolerance for high-intensity exercise and reached their highest level of athletic performance during the competitive season.

  11. A new method of time difference measurement: The time difference method by dual phase coincidence points detection

    NASA Technical Reports Server (NTRS)

    Zhou, Wei

    1993-01-01

    In high-accuracy measurement of periodic signals, the greatest common factor frequency and its characteristics serve special functions. A method of time difference measurement, the time difference method by dual 'phase coincidence points' detection, is described. This method utilizes the characteristics of the greatest common factor frequency to measure the time or phase difference between periodic signals. It is applicable over a very wide frequency range. Measurement precision and potential accuracy of several picoseconds were demonstrated with this new method. The instrument based on this method is very simple, and the demands on the common oscillator are low. This method and instrument can be widely used.
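The role of the greatest common factor frequency can be made concrete with a toy numerical sketch (integer frequencies and exact rational arithmetic, both invented for illustration): the phase coincidence points of two periodic signals recur at the period of their greatest common factor frequency, which is what makes them usable as timing references.

```python
from fractions import Fraction
from math import gcd

def coincidence_period(f1, f2):
    """Period (s) at which the phase coincidence points of two periodic
    signals with integer frequencies f1, f2 (Hz) recur: the period of
    their greatest common factor frequency."""
    return Fraction(1, gcd(f1, f2))

def coincidence_points(f1, f2, horizon):
    """Times within [0, horizon] s where zero-phase instants of both
    signals coincide, computed exactly with rational arithmetic."""
    z1 = {Fraction(k, f1) for k in range(f1 * horizon + 1)}
    z2 = {Fraction(m, f2) for m in range(f2 * horizon + 1)}
    return sorted(z1 & z2)

# 15 Hz and 6 Hz share a 3 Hz greatest common factor frequency, so their
# phase coincidence points repeat every 1/3 s.
```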

  12. Study protocol to examine the effects of spaceflight and a spaceflight analog on neurocognitive performance: extent, longevity, and neural bases

    PubMed Central

    2013-01-01

    Background Long duration spaceflight (i.e., 22 days or longer) has been associated with changes in sensorimotor systems, resulting in difficulties that astronauts experience with posture control, locomotion, and manual control. The microgravity environment is an important causal factor for spaceflight-induced sensorimotor changes. Whether spaceflight also affects other central nervous system functions such as cognition is as yet largely unknown, but is important for the health and performance of crewmembers both in- and post-flight. We are therefore conducting a controlled prospective longitudinal study to investigate the effects of spaceflight on the extent, longevity, and neural bases of sensorimotor and cognitive performance changes. Here we present the protocol of our study. Methods/design This study includes three groups (astronauts, bed rest subjects, ground-based control subjects), for each of which the design is a single group with repeated measures. The effects of spaceflight on the brain will be investigated in astronauts who will be assessed at two time points before, three time points during, and four time points following a spaceflight mission of six months. To parse out the effect of microgravity from the overall effects of spaceflight, we investigate the effects of seventy days of head-down-tilt bed rest. Bed rest subjects will be assessed at two time points before, two time points during, and three time points after bed rest. A third group of ground-based controls will be measured at four time points to assess the reliability of our measures over time. For all participants and at all time points, except in flight, measures of neurocognitive performance, fine motor control, gait, balance, structural MRI (T1, DTI), task fMRI, and functional connectivity MRI will be obtained. In flight, astronauts will complete some of the tasks that they complete pre- and post-flight, including tasks measuring spatial working memory, sensorimotor adaptation, and fine motor performance. Potential changes over time and associations between cognition, motor behavior, and brain structure and function will be analyzed. Discussion This study explores how spaceflight-induced brain changes impact functional performance. This understanding could aid in the design of targeted countermeasures to mitigate the negative effects of long-duration spaceflight. PMID:24350728

  13. Far field and wavefront characterization of a high-power semiconductor laser for free space optical communications

    NASA Technical Reports Server (NTRS)

    Cornwell, Donald M., Jr.; Saif, Babak N.

    1991-01-01

    The spatial pointing angle and far field beamwidth of a high-power semiconductor laser are characterized as a function of CW power and also as a function of temperature. The time-averaged spatial pointing angle and spatial lobe width were measured under intensity-modulated conditions. The measured pointing deviations are determined to be well within the pointing requirements of the NASA Laser Communications Transceiver (LCT) program. A computer-controlled Mach-Zehnder phase-shifter interferometer is used to characterize the wavefront quality of the laser. The rms phase error over the entire pupil was measured as a function of CW output power. Time-averaged measurements of the wavefront quality are also made under intensity-modulated conditions. The measured rms phase errors are determined to be well within the wavefront quality requirements of the LCT program.

  14. The Relationship between OCT-measured Central Retinal Thickness and Visual Acuity in Diabetic Macular Edema

    PubMed Central

    2008-01-01

    Objective To compare optical coherence tomography (OCT)-measured retinal thickness and visual acuity in eyes with diabetic macular edema (DME) both before and after macular laser photocoagulation. Design Cross-sectional and longitudinal study. Participants 210 subjects (251 eyes) with DME enrolled in a randomized clinical trial of laser techniques. Methods Retinal thickness was measured with OCT and visual acuity was measured with the electronic ETDRS procedure. Main Outcome Measures OCT-measured center point thickness and visual acuity. Results The correlation coefficients for visual acuity versus OCT center point thickness were 0.52 at baseline and 0.49, 0.36, and 0.38 at 3.5, 8, and 12 months post-laser photocoagulation. The slope of the best-fit line to the baseline data was approximately 4.4 letters (95% C.I.: 3.5, 5.3) better visual acuity for every 100 microns decrease in center point thickness, with no important difference at follow-up visits. Approximately one-third of the variation in visual acuity could be predicted by a linear regression model that incorporated OCT center point thickness, age, hemoglobin A1C, and severity of fluorescein leakage in the center and inner subfields. The correlation between change in visual acuity and change in OCT center point thickening 3.5 months after laser treatment was 0.44, with no important difference at the other follow-up times. A subset of eyes showed paradoxical improvements in visual acuity with increased center point thickening (7–17% at the three time points) or paradoxical worsening of visual acuity with a decrease in center point thickening (18–26% at the three time points). Conclusions There is modest correlation between OCT-measured center point thickness and visual acuity, and modest correlation of changes in retinal thickening and visual acuity following focal laser treatment for DME. However, a wide range of visual acuity may be observed for a given degree of retinal edema, and paradoxical increases in center point thickening with increases in visual acuity, as well as paradoxical decreases in center point thickening with decreases in visual acuity, were not uncommon. Thus, although OCT measurements of retinal thickness represent an important tool in clinical evaluation, they cannot reliably substitute as a surrogate for visual acuity at a given point in time. This study does not address whether short-term changes on OCT are predictive of long-term effects on visual acuity. PMID:17123615

  15. Set membership experimental design for biological systems.

    PubMed

    Marvel, Skylar W; Williams, Cranos M

    2012-03-21

    Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements most reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured, and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the point at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of the resulting models.
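The bounded-error ranking idea can be sketched with naive endpoint evaluation on a toy model: propagate a parameter interval through a first-order decay and rank candidate time points by the width of the predicted output interval, i.e. where a new measurement could cut the most uncertainty. The model, numbers, and function names here are invented illustrations, not the interval-analysis machinery the paper builds on.

```python
import math

def decay_bounds(k_lo, k_hi, x0, t):
    """Bounds on x(t) = x0 * exp(-k * t) when the rate k lies in [k_lo, k_hi].
    exp(-k * t) is monotone in k for t >= 0, so endpoint evaluation is exact."""
    return x0 * math.exp(-k_hi * t), x0 * math.exp(-k_lo * t)

def rank_candidates(k_lo, k_hi, x0, times):
    """Order candidate time points by predicted output interval width, widest first."""
    def width(t):
        lo, hi = decay_bounds(k_lo, k_hi, x0, t)
        return hi - lo
    return sorted(times, key=width, reverse=True)

# With k in [0.5, 1.0] and x0 = 1, an intermediate time point is the most
# informative: very early the trajectories have not yet diverged, and very
# late they have all decayed toward zero.
best = rank_candidates(0.5, 1.0, 1.0, [0.1, 1.0, 5.0])[0]
```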

  16. Set membership experimental design for biological systems

    PubMed Central

    2012-01-01

    Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements most reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured, and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the point at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of the resulting models. PMID:22436240

  17. Objective evaluation of female feet and leg joint conformation at time of selection and post first parity in swine.

    PubMed

    Stock, J D; Calderón Díaz, J A; Rothschild, M F; Mote, B E; Stalder, K J

    2018-06-09

    Feet and legs of replacement females were objectively evaluated at selection, i.e. approximately 150 days of age (n=319), and post first parity, i.e. any time after weaning of the first litter and before the second parturition (n=277), to 1) compare feet and leg joint angle ranges between selection and post first parity; 2) identify feet and leg joint angle differences between selection and the first three weeks of the second gestation; 3) identify feet and leg joint angle differences between farms and gestation days during the second gestation; and 4) obtain genetic variance components for conformation angles at the two time points measured. Angles for the carpal joint (knee), metacarpophalangeal joint (front pastern), metatarsophalangeal joint (rear pastern), tarsal joint (hock), and rear stance were measured using image analysis software. Between selection and post first parity, significant differences were observed for all joints measured (P < 0.05). Knee, front pastern, and rear pastern angles were smaller (more flexion), and hock angles were greater (less flexion), as age progressed (P < 0.05), while the rear stance angle was smaller (feet further under center) at selection than post first parity (including only measures during the first three weeks of the second gestation). Using only post first parity leg conformation information, farm was a significant source of variation for front and rear pastern and rear stance angle measurements (P < 0.05). Knee angle was smaller (more flexion) (P < 0.05) as gestation age progressed. Heritability estimates were low to moderate (0.04-0.35) for all traits measured across time points. Genetic correlations between the same joints at different time points were high (>0.8) for the front leg joints and low (<0.2) for the rear leg joints. High genetic correlations between time points indicate that a trait can be considered the same at either time point, whereas low genetic correlations indicate that the trait at different time points should be considered as two separate traits. Minimal change in the front leg suggests conformation traits that remain stable between selection and post first parity, while larger changes in the rear leg indicate that rear leg conformation traits should be evaluated at multiple time periods.

  18. Reduction of Averaging Time for Evaluation of Human Exposure to Radiofrequency Electromagnetic Fields from Cellular Base Stations

    NASA Astrophysics Data System (ADS)

    Kim, Byung Chan; Park, Seong-Ook

In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated from the values measured at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min period (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement time becomes. For practical application, it is very advantageous to shorten the measurement time. In this paper, we analyzed the difference between the average values over 6 min and over shorter periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min average value, the proposed minimum averaging time is 1 min.
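As a rough illustration of the comparison described above, the following sketch contrasts a 6-min average against shorter-window averages of a simulated field record; the 1 Hz sampling rate and noise level are arbitrary assumptions, not values from the paper.

```python
import numpy as np

# Simulated E-field samples: 6 min at 1 Hz around a 1 V/m level (assumed values).
rng = np.random.default_rng(0)
field = 1.0 + 0.02 * rng.standard_normal(360)

avg_6min = field.mean()          # the guideline 6-min average
devs = []
for window_s in (60, 120, 180):  # candidate shorter averaging windows
    avg_short = field[:window_s].mean()
    dev_pct = 100 * abs(avg_short - avg_6min) / avg_6min
    devs.append(dev_pct)
    print(f"{window_s:>3d} s window: deviation from 6-min mean = {dev_pct:.2f}%")
```

In a real assessment the deviations would be compared against the standard uncertainty attributed to measurement drift, as the paper does.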

  19. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter

    PubMed Central

    Liu, Wanli

    2017-01-01

The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their combined applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on the iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF) algorithms, which combines the advantages of both: the ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated. PMID:28282897
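The ICP half of such a pipeline can be sketched in a few lines. The following is a generic 2D nearest-neighbour ICP with a closed-form SVD (Kabsch) update; it is an illustrative stand-in, not the authors' LiDAR-IMU implementation, and the ISPKF stage is omitted entirely.

```python
import numpy as np

def icp_2d(src, dst, iters=30):
    """Align src to dst by alternating nearest-neighbour matching
    with a closed-form rigid (Kabsch/SVD) update."""
    src = src.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences.
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # Closed-form rigid transform between the centred point sets.
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_m
    return src

# Synthetic test: the same point set, slightly rotated and shifted (assumed values).
rng = np.random.default_rng(1)
ref = rng.uniform(0, 10, (60, 2))
c = ref.mean(0)
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moved = (ref - c) @ R_true.T + c + np.array([0.3, -0.2])

aligned = icp_2d(moved, ref)
res = np.linalg.norm(aligned - ref, axis=1).mean()
print("mean residual after ICP:", res)
```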

  20. Do They Know Their ABCs? Letter-Name Knowledge of Urban Preschoolers

    ERIC Educational Resources Information Center

    Edwards, Liesl

    2012-01-01

    This study analyzed the performance and growth in letter knowledge and letter identification skills of children across an academic year. Repeated measures analyses of variance were conducted on letter name knowledge measures administered at three time points for all participating children (N = 177) and seven time points for children (n = 106)…

  1. Study into Point Cloud Geometric Rigidity and Accuracy of TLS-Based Identification of Geometric Bodies

    NASA Astrophysics Data System (ADS)

    Klapa, Przemyslaw; Mitka, Bartosz; Zygmunt, Mariusz

    2017-12-01

The capability of obtaining a multimillion-point cloud in a very short time has made Terrestrial Laser Scanning (TLS) a widely used tool in many fields of science and technology. The TLS accuracy matches that of traditional devices used in land surveying (tacheometry, GNSS - RTK), but like any measurement it is burdened with error which affects the precise identification of objects based on their image in the form of a point cloud. A point's coordinates are determined indirectly by measuring the angles and calculating the time of travel of the electromagnetic wave. Each such component has a measurement error which is translated into the final result. The XYZ coordinates of a measuring point are determined with some uncertainty, and the accuracy of these coordinates decreases as the distance to the instrument increases. The paper presents the results of an examination of the geometrical stability of a point cloud obtained by means of a terrestrial laser scanner and an accuracy evaluation of solids determined using the cloud. A Leica P40 scanner and two different settings of measuring points were used in the tests. The first concept involved placing a few balls in the field and then scanning them from various sides at similar distances. The second part of the measurement involved placing balls and scanning them a few times from one side but at varying distances from the instrument to the object. Each measurement encompassed a scan of the object with automatic determination of its position and geometry. The desk studies involved a semiautomatic fitting of solids, measurement of their geometrical elements, and comparison of the parameters that determine their geometry and location in space. The differences in the measures of the geometrical elements of the balls and in the translation vectors of the solids' centres indicate geometrical changes of the point cloud depending on the scanning distance and parameters. 
The results indicate changes in the geometry of scanned objects depending on the point cloud quality and distance from the measuring instrument. Varying geometrical dimensions of the same element also suggest that the point cloud does not keep a stable geometry of measured objects.
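The "semiautomatic fitting of solids" step can be illustrated with a standard linear least-squares sphere fit; the ball size, noise level, and point data below are assumptions for the sketch, not the study's measurements.

```python
import numpy as np

# Simulate points on a ball surface with 1 mm measurement noise (assumed).
rng = np.random.default_rng(7)
n = 500
c_true, r_true = np.array([1.0, 2.0, 0.5]), 0.0725   # centre (m), radius (m)
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)        # random unit directions
pts = c_true + r_true * v + 0.001 * rng.normal(size=(n, 3))

# |p - c|^2 = r^2  rearranges to  2 p.c + (r^2 - |c|^2) = |p|^2,
# which is linear in the unknowns (c, d) with d = r^2 - |c|^2.
A = np.hstack([2 * pts, np.ones((n, 1))])
b = (pts ** 2).sum(1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
c_fit, r_fit = sol[:3], np.sqrt(sol[3] + (sol[:3] ** 2).sum())

c_err_mm = 1000 * np.linalg.norm(c_fit - c_true)
r_err_mm = 1000 * abs(r_fit - r_true)
print("centre error (mm):", c_err_mm, " radius error (mm):", r_err_mm)
```

Comparing the fitted radii and centre translations across scans at different distances is then a direct way to quantify the geometric stability the paper examines.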

  2. Precision pointing compensation for DSN antennas with optical distance measuring sensors

    NASA Technical Reports Server (NTRS)

    Scheid, R. E.

    1989-01-01

    The pointing control loops of Deep Space Network (DSN) antennas do not account for unmodeled deflections of the primary and secondary reflectors. As a result, structural distortions due to unpredictable environmental loads can result in uncompensated boresight shifts which degrade pointing accuracy. The design proposed here can provide real-time bias commands to the pointing control system to compensate for environmental effects on pointing performance. The bias commands can be computed in real time from optically measured deflections at a number of points on the primary and secondary reflectors. Computer simulations with a reduced-order finite-element model of a DSN antenna validate the concept and lead to a proposed design by which a ten-to-one reduction in pointing uncertainty can be achieved under nominal uncertainty conditions.

  3. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
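A minimal sketch of the segmentation idea follows, with an assumed measure function (the segment's item set is the intersection of its time points' item sets) and a plain dynamic-programming recursion standing in for the paper's optimized algorithms.

```python
from functools import lru_cache

# A toy item-set time series (assumed data for illustration).
series = [{"a", "b"}, {"a", "b"}, {"a"}, {"c", "d"}, {"c"}, {"c", "d"}]
k = 2   # desired number of segments

def segment_set(i, j):
    """Measure function: intersection of the item sets at time points i..j-1."""
    s = set(series[i])
    for t in range(i + 1, j):
        s &= series[t]
    return s

def seg_diff(i, j):
    """Segment difference: total symmetric-difference size vs. the segment's set."""
    s = segment_set(i, j)
    return sum(len(s ^ series[t]) for t in range(i, j))

@lru_cache(maxsize=None)
def best(j, m):
    """Minimum total segment difference for series[:j] split into m segments."""
    if m == 1:
        return seg_diff(0, j)
    return min(best(i, m - 1) + seg_diff(i, j) for i in range(m - 1, j))

print("optimal total segment difference:", best(len(series), k))
```

Here the optimum splits the series between time points 2 and 3, where the item content shifts from {a, b}-like sets to {c, d}-like sets.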

  4. Detecting multiple moving objects in crowded environments with coherent motion regions

    DOEpatents

    Cheriyadat, Anil M.; Radke, Richard J.

    2013-06-11

Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The selected coherent motion regions are constrained to contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. The algorithm operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, a measure of the maximum distance between a pair of feature point tracks.

  5. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    2016-06-15

Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinical acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared-error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point cloud with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. 
Conclusion: We have developed a real-time and robust surface reconstruction method on point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
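The regression idea above can be sketched as follows, with ordinary least squares standing in for the sparse solver and synthetic flattened point clouds in place of real acquisitions; everything here is an illustrative assumption, not the authors' SR/MSR implementation.

```python
import numpy as np

# Synthetic "training set" of flattened point clouds (n_train clouds of n_pts 3D points).
rng = np.random.default_rng(6)
n_pts, n_train = 300, 8
train = rng.normal(size=(n_train, n_pts * 3))

# The target cloud is a combination of two training clouds plus small noise.
w_true = np.zeros(n_train)
w_true[[1, 4]] = [0.7, 0.3]
target = w_true @ train + 0.01 * rng.normal(size=n_pts * 3)

# Solve for the combination weights (least squares in place of sparse regression).
w, *_ = np.linalg.lstsq(train.T, target, rcond=None)
recon = w @ train
rmse = np.sqrt(((recon - target) ** 2).mean())
print("weights:", w.round(2), " RMSE:", round(rmse, 4))
```

A genuinely sparse solver (e.g. an L1-penalized regression) would additionally drive the weights of unused training clouds toward exactly zero, which is what makes the propagation step cheap.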

  6. Effect of Receiver Choosing on Point Positions Determination in Network RTK

    NASA Astrophysics Data System (ADS)

    Bulbul, Sercan; Inal, Cevat

    2016-04-01

Nowadays, developments in GNSS techniques allow point positions to be determined in real time. Initially, point positioning was determined by RTK (Real Time Kinematic) based on a single reference station. However, to avoid systematic errors in this method, the distance between the reference station and the rover receiver must be shorter than 10 km. To overcome this restriction of the RTK method, the idea of using more than one reference station was suggested, and CORS (Continuously Operating Reference Stations) networks were put into practice. Today, countries such as the USA, Germany and Japan have established CORS networks. The CORS-TR network, which has 146 reference stations, was established in Turkey in 2009, adopting the active CORS approach. The CORS-TR reference stations covering the whole country are interconnected, and the positions of these stations and atmospheric corrections are continuously calculated. In this study, at a selected point, RTK measurements based on CORS-TR were made with different receivers (JAVAD TRIUMPH-1, TOPCON Hiper V, MAGELLAN PRoMark 500, PENTAX SMT888-3G, SATLAB SL-600) and with different correction techniques (VRS, FKP, MAC). In the measurements, the epoch interval was taken as 5 seconds and the measurement time as 1 hour. For each receiver and each correction technique, the means and the differences between the maximum and minimum values of the measured coordinates, the root mean squares along the coordinate axes, and the 2D and 3D positioning precisions were calculated; the results were evaluated by statistical methods and the resulting graphics were interpreted. After evaluation of the measurements and calculations, for each receiver and each correction technique, the coordinate differences between maximum and minimum values were less than 8 cm, the root mean squares along the coordinate axes less than ±1.5 cm, and the 2D and 3D point positioning precisions less than ±1.5 cm. 
At the measurement point, it was concluded that the VRS correction technique generally performs better than the other correction techniques.
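The precision statistics reported above can be computed as in this sketch; the coordinate series is simulated with assumed noise levels, not the study's data.

```python
import numpy as np

# Simulated ENU coordinate series: 1 h of 5 s epochs with assumed per-axis noise (m).
rng = np.random.default_rng(2)
n = 720
enu = rng.normal(0.0, [0.008, 0.008, 0.012], size=(n, 3))

# Per-axis RMS about the mean, then 2D (horizontal) and 3D positioning precision.
rms = np.sqrt(((enu - enu.mean(0)) ** 2).mean(0))
prec_2d = np.sqrt(rms[0] ** 2 + rms[1] ** 2)
prec_3d = np.sqrt((rms ** 2).sum())
print("axis RMS (m):", rms.round(4), " 2D:", round(prec_2d, 4), " 3D:", round(prec_3d, 4))
```

The max-minus-min coordinate spreads reported in the abstract would simply be `enu.max(0) - enu.min(0)` on the same series.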

  7. Comparison between psycho-acoustics and physio-acoustic measurement to determine optimum reverberation time of pentatonic angklung music concert hall

    NASA Astrophysics Data System (ADS)

    Sudarsono, Anugrah S.; Merthayasa, I. G. N.; Suprijanto

    2015-09-01

This research compared psycho-acoustic and physio-acoustic measurements to find the optimum reverberation time of the sound field of angklung music. The psycho-acoustic measurement was conducted using a paired-comparison method, and the physio-acoustic measurement was conducted with EEG at the T3, T4, FP1, and FP2 measurement points. The EEG measurement was conducted with 5 persons. Pentatonic angklung music was used as the stimulus, with the reverberation time varied between 0.8 s and 1.6 s in 0.2 s steps. The EEG signal was analysed using the power spectral density method on the alpha wave, high alpha wave, and theta wave. The psycho-acoustic measurement on 50 persons showed that the preferred reverberation time for pentatonic angklung music was 1.2 seconds. This result was similar to the theta wave measurement at the FP2 measurement point. The high alpha wave at the T4 measurement point gave different results, but had patterns similar to the psycho-acoustic measurement.

  8. Spatial Correlation of Solar-Wind Turbulence from Two-Point Measurements

    NASA Technical Reports Server (NTRS)

    Matthaeus, W. H.; Milano, L. J.; Dasso, S.; Weygand, J. M.; Smith, C. W.; Kivelson, M. G.

    2005-01-01

    Interplanetary turbulence, the best studied case of low frequency plasma turbulence, is the only directly quantified instance of astrophysical turbulence. Here, magnetic field correlation analysis, using for the first time only proper two-point, single time measurements, provides a key step in unraveling the space-time structure of interplanetary turbulence. Simultaneous magnetic field data from the Wind, ACE, and Cluster spacecraft are analyzed to determine the correlation (outer) scale, and the Taylor microscale near Earth's orbit.

  9. Modeling Canadian Quality Control Test Program for Steroid Hormone Receptors in Breast Cancer: Diagnostic Accuracy Study.

    PubMed

    Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan

The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show that 88 laboratories participated in quality control at up to 13 time points, using typically 37 to 54 histology samples. In meta-analysis across all time points, no laboratory had sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also showed reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.

  10. Girl in the cellar: a repeated cross-sectional investigation of belief in conspiracy theories about the kidnapping of Natascha Kampusch

    PubMed Central

    Stieger, Stefan; Gumhalter, Nora; Tran, Ulrich S.; Voracek, Martin; Swami, Viren

    2013-01-01

    The present study utilized a repeated cross-sectional survey design to examine belief in conspiracy theories about the abduction of Natascha Kampusch. At two time points (October 2009 and October 2011), participants drawn from independent cross-sections of the Austrian population (Time Point 1, N = 281; Time Point 2, N = 277) completed a novel measure of belief in conspiracy theories concerning the abduction of Kampusch, as well as measures of general conspiracist ideation, self-esteem, paranormal and superstitious beliefs, cognitive ability, and media exposure to the Kampusch case. Results indicated that although belief in the Kampusch conspiracy theory declined between testing periods, the effect size of the difference was small. In addition, belief in the Kampusch conspiracy theory was significantly predicted by general conspiracist ideation at both time points. The need to conduct further longitudinal tests of conspiracist ideation is emphasized in conclusion. PMID:23745118

  11. Methane oxidation at a surface-sealed boreal landfill.

    PubMed

    Einola, Juha; Sormunen, Kai; Lensu, Anssi; Leiskallio, Antti; Ettala, Matti; Rintala, Jukka

    2009-07-01

Methane oxidation was studied at a closed boreal landfill (area 3.9 ha, amount of deposited waste 200,000 tonnes) equipped with a passive gas collection and distribution system and a methane oxidative top soil cover integrated in a European Union landfill directive-compliant, multilayer final cover. Gas wells and distribution pipes with valves were installed to direct landfill gas through the water impermeable layer into the top soil cover. Mean methane emissions at the 25 measuring points at four measurement times (October 2005-June 2006) were 0.86-6.2 m(3) ha(-1) h(-1). Conservative estimates indicated that at least 25% of the methane flux entering the soil cover at the measuring points was oxidized in October and February, and at least 46% in June. At each measurement time, 1-3 points showed significantly higher methane fluxes into the soil cover (20-135 m(3) ha(-1) h(-1)) and methane emissions (6-135 m(3) ha(-1) h(-1)) compared to the other points (< 20 m(3) ha(-1) h(-1) and < 10 m(3) ha(-1) h(-1), respectively). These points of methane overload had a high impact on the mean methane oxidation at the measuring points, resulting in zero mean oxidation at one measurement time (November). However, it was found that by adjusting the valves in the gas distribution pipes the occurrence of methane overload can be moderated to some extent, which may increase methane oxidation. Overall, the investigated landfill gas treatment concept may be a feasible option for reducing methane emissions at landfills where a water impermeable cover system is used.

  12. A Bionic Camera-Based Polarization Navigation Sensor

    PubMed Central

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-01-01

    Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode. PMID:25051029

  13. Application of spatial time domain reflectometry measurements in heterogeneous, rocky substrates

    NASA Astrophysics Data System (ADS)

    Gonzales, C.; Scheuermann, A.; Arnold, S.; Baumgartl, T.

    2016-10-01

Measurement of soil moisture across depths using sensors is currently limited to point measurements or remote sensing technologies. Point measurements have limitations on spatial resolution, while the latter, although covering large areas, may not represent real-time hydrologic processes, especially near the surface. The objective of the study was to determine the efficacy of elongated soil moisture probes—spatial time domain reflectometry (STDR)—and to describe transient soil moisture dynamics of unconsolidated mine waste rock materials. The probes were calibrated under controlled conditions in the glasshouse. Transient soil moisture content was measured using the gravimetric method and STDR. Volumetric soil moisture content derived from weighing was compared with values generated from a numerical model simulating the drying process. A calibration function was generated and applied to STDR field data sets. The use of elongated probes effectively assists in the real-time determination of the spatial distribution of soil moisture. It also allows hydrologic processes to be uncovered in the unsaturated zone, especially for water balance calculations that are commonly based on point measurements. The elongated soil moisture probes can potentially describe transient substrate processes and delineate heterogeneity in terms of the pore size distribution in a seasonally wet but otherwise arid environment.

  14. Fiber optic sensor employing successively destroyed coupled points or reflectors for detecting shock wave speed and damage location

    DOEpatents

    Weiss, Jonathan D.

    1995-01-01

A shock velocity and damage location sensor providing a means of measuring shock speed and damage location. The sensor consists of a long series of time-of-arrival "points" constructed with fiber optics. The fiber optic sensor apparatus measures shock velocity as the fiber sensor is progressively crushed as a shock wave proceeds in a direction along the fiber. The light received by a receiving means changes as time-of-arrival points are destroyed as the sensor is disturbed by the shock. The sensor may comprise a transmitting fiber bent into a series of loops and fused to a receiving fiber at various places, time-of-arrival points, along the receiving fiber's length. At the "points" of contact, where a portion of the light leaves the transmitting fiber and enters the receiving fiber, the loops would be required to allow the light to travel backwards through the receiving fiber toward a receiving means. The sensor may also comprise a single optical fiber wherein the time-of-arrival points are comprised of reflection planes distributed along the fiber's length. In this configuration, as the shock front proceeds along the fiber it destroys one reflector after another. The output received by a receiving means from this sensor may be a series of downward steps produced as the shock wave destroys one time-of-arrival point after another, or a nonsequential pattern of steps in the event time-of-arrival points are destroyed at any point along the sensor.
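A toy simulation of the stepped output described above; the point positions, shock speed, and sampling instants are all assumed values for illustration.

```python
import numpy as np

# Six time-of-arrival points along the fibre, destroyed in order by the shock front.
n_points = 6
point_positions = np.linspace(0.1, 0.6, n_points)   # metres along the fibre (assumed)
shock_speed = 2000.0                                # m/s (assumed)
arrival_times = point_positions / shock_speed       # when each point is destroyed

# Received intensity drops one step as each surviving point is destroyed.
t = np.linspace(0, 4e-4, 9)
intensity = np.array([(arrival_times > ti).sum() / n_points for ti in t])
print(intensity)
```

In the inverse direction, the shock speed would be recovered from the measured step times as the ratio of known point spacing to observed time-of-arrival differences.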

  15. Fiber optic sensor employing successively destroyed coupled points or reflectors for detecting shock wave speed and damage location

    DOEpatents

    Weiss, J.D.

    1995-08-29

A shock velocity and damage location sensor providing a means of measuring shock speed and damage location is disclosed. The sensor consists of a long series of time-of-arrival "points" constructed with fiber optics. The fiber optic sensor apparatus measures shock velocity as the fiber sensor is progressively crushed as a shock wave proceeds in a direction along the fiber. The light received by a receiving means changes as time-of-arrival points are destroyed as the sensor is disturbed by the shock. The sensor may comprise a transmitting fiber bent into a series of loops and fused to a receiving fiber at various places, time-of-arrival points, along the receiving fiber's length. At the "points" of contact, where a portion of the light leaves the transmitting fiber and enters the receiving fiber, the loops would be required to allow the light to travel backwards through the receiving fiber toward a receiving means. The sensor may also comprise a single optical fiber wherein the time-of-arrival points are comprised of reflection planes distributed along the fiber's length. In this configuration, as the shock front proceeds along the fiber it destroys one reflector after another. The output received by a receiving means from this sensor may be a series of downward steps produced as the shock wave destroys one time-of-arrival point after another, or a nonsequential pattern of steps in the event time-of-arrival points are destroyed at any point along the sensor. 6 figs.

  16. Development of measurement simulation of the laser dew-point hygrometer using an optical fiber cable

    NASA Astrophysics Data System (ADS)

    Matsumoto, Shigeaki

    2005-02-01

In order to improve the initial and response times of the Laser Dew-Point Hygrometer (LDH), a measurement simulation was developed on the basis of a loop computation of the surface temperature of a gold plate for dew deposition, the quantity of deposited dew, and the intensity of light scattered from the surface of the plate, at a time interval of 5 s during measurement. A more detailed relationship between the surface temperature of the plate and the cooling current, and the time constant of the integrator in the control circuit of the LDH, were introduced into the simulation program as functions of atmospheric temperature. The simulation agreed closely with actual measurements by the LDH. The simulation results indicated the possibility of improving both times of the LDH by increasing the sensitivity to dew and the mass transfer coefficient of dew deposited on the plate surface. It was concluded that the initial and response times could be improved to below 100 s and 120 s, respectively, in the dew-point range at room temperature, almost half of those of the original LDH.

  17. Sedentary Behaviour Profiling of Office Workers: A Sensitivity Analysis of Sedentary Cut-Points

    PubMed Central

    Boerema, Simone T.; Essink, Gerard B.; Tönis, Thijs M.; van Velsen, Lex; Hermens, Hermie J.

    2015-01-01

    Measuring sedentary behaviour and physical activity with wearable sensors provides detailed information on activity patterns and can serve health interventions. At the basis of activity analysis stands the ability to distinguish sedentary from active time. As there is no consensus regarding the optimal cut-point for classifying sedentary behaviour, we studied the consequences of using different cut-points for this type of analysis. We conducted a battery of sitting and walking activities with 14 office workers, wearing the Promove 3D activity sensor to determine the optimal cut-point (in counts per minute (m·s−2)) for classifying sedentary behaviour. Then, 27 office workers wore the sensor for five days. We evaluated the sensitivity of five sedentary pattern measures for various sedentary cut-points and found an optimal cut-point for sedentary behaviour of 1660 × 10−3 m·s−2. Total sedentary time was not sensitive to cut-point changes within ±10% of this optimal cut-point; other sedentary pattern measures were not sensitive to changes within the ±20% interval. The results from studies analyzing sedentary patterns, using different cut-points, can be compared within these boundaries. Furthermore, commercial, hip-worn activity trackers can implement feedback and interventions on sedentary behaviour patterns, using these cut-points. PMID:26712758
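The cut-point sensitivity check can be sketched as follows; the activity-count distribution is an arbitrary assumption, and only the 1660 cut-point and the ±10% interval come from the abstract.

```python
import numpy as np

# Simulated per-minute activity counts for five 8-h office days
# (gamma-distributed, purely assumed; units follow the abstract's 1e-3 m/s^2).
rng = np.random.default_rng(3)
counts = rng.gamma(shape=2.0, scale=1200.0, size=5 * 8 * 60)

# Classify each minute as sedentary when counts fall below the cut-point,
# then shift the cut-point by +/-10% to probe sensitivity.
base_cut = 1660.0
results = {}
for factor in (0.9, 1.0, 1.1):
    cut = base_cut * factor
    results[factor] = int((counts < cut).sum())
    print(f"cut-point {cut:6.0f}: {results[factor]} sedentary minutes of {counts.size}")
```

The paper's finding is that, on real office-worker data, total sedentary time barely moves within this ±10% band, so studies using slightly different cut-points remain comparable.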

  18. A new method of real-time detection of changes in periodic data stream

    NASA Astrophysics Data System (ADS)

    Lyu, Chen; Lu, Guoliang; Cheng, Bin; Zheng, Xiangwei

    2017-07-01

Change point detection in periodic time series is highly desirable in many practical settings. We present a novel algorithm for this task, which includes two phases: 1) anomaly measurement: on the basis of a typical regression model, we propose a new method to measure anomalies in a time series that does not require any reference data from other measurements; 2) change detection: we introduce a new martingale test for detection that can operate in an unsupervised and nonparametric way. We have conducted extensive experiments to systematically test our algorithm. The results make us believe that our algorithm is directly applicable in many real-world change-point-detection applications.
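One standard way to realize a martingale change test is the randomized power martingale over conformal p-values; this is an assumed instantiation of the general scheme, not necessarily the authors' exact construction.

```python
import numpy as np

def power_martingale(scores, eps=0.92):
    """Randomized power martingale: turns anomaly scores into p-values
    against the history, and grows when low p-values start clustering."""
    rng = np.random.default_rng(4)
    log_m, history, out = 0.0, [], []
    for s in scores:
        history.append(s)
        n = len(history)
        gt = sum(h > s for h in history)
        eq = sum(h == s for h in history)
        p = (gt + rng.uniform() * eq) / n            # randomized conformal p-value
        log_m += np.log(eps) + (eps - 1.0) * np.log(p)
        out.append(log_m)
    return np.array(out)

# Synthetic stream with a change at t = 200 (assumed data): anomaly score = |residual|.
rng = np.random.default_rng(5)
pre = rng.normal(0, 1, 200)
post = rng.normal(4, 1, 100)
scores = np.abs(np.concatenate([pre, post]))
log_m = power_martingale(scores)
print("log-martingale at change point vs. end:",
      round(log_m[199], 1), round(log_m[-1], 1))
```

Under exchangeability (no change) the martingale hovers near 1, so a large value is evidence of a change; in practice an alarm is raised when it exceeds a threshold such as 20.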

  19. Reverse engineering gene regulatory networks from measurement with missing values.

    PubMed

    Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong

    2016-12-01

Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may sometimes contain missing values: for either the expression values of some genes at some time points or the entire expression values of a single time point or some sets of consecutive time points. This significantly affects the performance of many algorithms for gene expression analysis that take as input the complete matrix of gene expression measurement. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurement. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule and the central difference rule for computing the related posteriors. The proposed algorithm is evaluated with satisfactory results for synthetic networks, in silico networks released as a part of the DREAM project, and a real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae. PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we proposed a measurement model that incorporates the effect of the missing data points into the sequential algorithm. 
This approach produces a better inference of the model parameters and hence, more accurate prediction of the underlying GRN compared to when using the conventional Gaussian approximation (GA) filters ignoring the missing data points.
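
    The filtering idea in this record can be illustrated with a deliberately simplified sketch: a scalar linear-Gaussian (Kalman) filter that performs only the time update when a measurement is missing. This is the conventional-GA-filter baseline the abstract compares against, not the PBGA algorithm itself, and all parameter values are hypothetical.

```python
import numpy as np

def kalman_filter_with_gaps(zs, a=0.9, q=0.1, r=0.2, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x[t+1] = a*x[t] + noise, z[t] = x[t] + noise.

    Entries of `zs` that are None are treated as missing time points:
    the filter performs the time update only, so the posterior variance
    grows until the next available measurement.
    """
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Time update (prediction).
        x, p = a * x, a * a * p + q
        # Measurement update, skipped when the observation is missing.
        if z is not None:
            k = p / (p + r)          # Kalman gain
            x = x + k * (z - x)
            p = (1.0 - k) * p
        estimates.append((x, p))
    return estimates
```

    Running this on a series with a gap shows the uncertainty inflate at the missing time point and shrink again once data resume.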

  20. On the effective point of measurement in megavoltage photon beams.

    PubMed

    Kawrakow, Iwan

    2006-06-01

    This paper presents a numerical investigation of the effective point of measurement of thimble ionization chambers in megavoltage photon beams using Monte Carlo simulations with the EGSNRC system. It is shown that the effective point of measurement for relative photon beam dosimetry depends on every detail of the chamber design, including the cavity length, the mass density of the wall material, and the size of the central electrode, in addition to the cavity radius. Moreover, the effective point of measurement also depends on the beam quality and the field size. The paper therefore argues that the upstream shift of 0.6 times the cavity radius, recommended in current dosimetry protocols, is inadequate for accurate relative photon beam dosimetry, particularly in the build-up region. On the other hand, once the effective point of measurement is selected appropriately, measured depth-ionization curves can be equated to measured depth-dose curves for all depths within +/- 0.5%.
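
    For relative dosimetry, the conventional recipe that the paper critiques is easy to state: assign each reading to a point 0.6 times the cavity radius upstream of the chamber centre. A minimal sketch with made-up depths and readings:

```python
import numpy as np

def shift_depth_axis(depths_mm, readings, cavity_radius_mm, frac=0.6):
    """Apply the conventional effective-point-of-measurement correction:
    each chamber reading is assigned to a depth `frac * r_cav` upstream
    of the chamber centre, as recommended in current dosimetry protocols.
    """
    shift = frac * cavity_radius_mm
    return np.asarray(depths_mm) - shift, np.asarray(readings)

# Hypothetical depth-ionization points for a chamber with 3 mm cavity radius.
depths = np.array([10.0, 20.0, 30.0])
readings = np.array([0.95, 1.00, 0.92])
new_depths, _ = shift_depth_axis(depths, readings, cavity_radius_mm=3.0)
```

    The paper's point is that `frac` is not universally 0.6: it depends on chamber construction, beam quality, and field size.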

  1. The guidance methodology of a new automatic guided laser theodolite system

    NASA Astrophysics Data System (ADS)

    Zhang, Zili; Zhu, Jigui; Zhou, Hu; Ye, Shenghua

    2008-12-01

    Spatial coordinate measurement systems such as theodolites, laser trackers, and total stations are widely used in manufacturing and certification processes. Traditional theodolite operation is manual and time-consuming, which does not meet the needs of online industrial measurement; laser trackers and total stations require reflective targets and therefore cannot realize noncontact, automatic measurement. A new automatic guided laser theodolite system is presented to achieve automatic, noncontact measurement with high precision and efficiency. It comprises two sub-systems: the basic measurement system and the control and guidance system. The former consists of two laser motorized theodolites that accomplish the fundamental measurement tasks, while the latter consists of a camera and vision system unit mounted on a mechanical displacement unit to provide azimuth information for the measured points. The mechanical displacement unit can rotate horizontally and vertically to direct the camera to the desired orientation so that the camera can scan every measured point in the measuring field; the azimuth of the corresponding point is then calculated so that the laser motorized theodolites can move accordingly to aim at it. In this paper the whole system composition and measuring principle are analyzed, and emphasis is laid on the guidance methodology for moving the laser points from the theodolites toward the measured points. The guidance process is implemented through the coordinate transformation between the basic measurement system and the control and guidance system. With the field-of-view angle of the vision system unit and the world coordinates of the control and guidance system obtained through coordinate transformation, the azimuth of the measurement area at which the camera points can be obtained.
    The momentary horizontal and vertical movements of the mechanical displacement unit are also calculated to provide real-time azimuth information for the pointed measurement area, by which the motorized theodolites move accordingly. This methodology realizes the predetermined location of the laser points within the camera-pointed scope, accelerating the measuring process and replacing manual operation with approximate guidance. Simulation results show that the proposed automatic guidance method is effective and feasible, providing good tracking performance for the predetermined location of the laser points.

  2. Control-enhanced multiparameter quantum estimation

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Yuan, Haidong

    2017-10-01

    Most studies in multiparameter estimation assume the dynamics are fixed and focus on identifying the optimal probe state and the optimal measurements. In practice, however, controls are usually available to alter the dynamics, which provides another degree of freedom. In this paper we employ optimal control methods, in particular gradient ascent pulse engineering (GRAPE), to design optimal controls that improve the precision limit in multiparameter estimation. We show that the controlled schemes are not only capable of providing a higher precision limit but are also more robust to inaccuracy in the time point at which the measurements are performed. This high time stability will benefit practical metrology, where it is hard to perform the measurement at a precisely specified time point due to the response time of the measurement apparatus.

  3. Physical activity, sedentary behavior, and academic performance in Finnish children.

    PubMed

    Syväoja, Heidi J; Kantomaa, Marko T; Ahonen, Timo; Hakonen, Harto; Kankaanpää, Anna; Tammelin, Tuija H

    2013-11-01

    This study aimed to determine the relationships between objectively measured and self-reported physical activity, sedentary behavior, and academic performance in Finnish children. Two hundred and seventy-seven children from five schools in the Jyväskylä school district in Finland (58% of the 475 eligible students, mean age = 12.2 yr, 56% girls) participated in the study in the spring of 2011. Self-reported physical activity and screen time were evaluated with questions used in the WHO Health Behavior in School-Aged Children study. Children's physical activity and sedentary time were measured objectively by using an ActiGraph GT1M/GT3X accelerometer for seven consecutive days. A cutoff value of 2296 counts per minute was used for moderate-to-vigorous physical activity (MVPA) and 100 counts per minute for sedentary time. Grade point averages were provided by the education services of the city of Jyväskylä. ANOVA and linear regression analysis were used to analyze the relationships among physical activity, sedentary behavior, and academic performance. Objectively measured MVPA (P = 0.955) and sedentary time (P = 0.285) were not associated with grade point average. However, self-reported MVPA had an inverse U-shaped curvilinear association with grade point average (P = 0.001), and screen time had a linear negative association with grade point average (P = 0.002), after adjusting for sex, children's learning difficulties, highest level of parental education, and amount of sleep. In this study, self-reported physical activity was directly, and screen time inversely, associated with academic achievement. Objectively measured physical activity and sedentary time were not associated with academic achievement. Objective and subjective measures may reflect different constructs and contexts of physical activity and sedentary behavior in association with academic outcomes.
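
    The curvilinear association reported here is typically tested by adding a quadratic term to the regression. A self-contained sketch with synthetic data (not the study's), where an inverse U-shaped relation appears as a negative quadratic coefficient:

```python
import numpy as np

# Synthetic illustration: grade point average peaks at moderate
# self-reported MVPA, giving an inverse U-shaped association.
rng = np.random.default_rng(0)
mvpa = rng.uniform(0.0, 10.0, 200)                 # hours/week, hypothetical
gpa = 7.0 + 1.2 * mvpa - 0.12 * mvpa**2 + rng.normal(0.0, 0.3, 200)

# Fit GPA = b0 + b1*MVPA + b2*MVPA^2; an inverse U shows up as b2 < 0.
b2, b1, b0 = np.polyfit(mvpa, gpa, deg=2)
```

    A full analysis would, as in the study, also adjust for covariates such as sex, learning difficulties, parental education, and sleep.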

  4. Platelet-activated clotting time does not measure platelet reactivity during cardiac surgery.

    PubMed

    Shore-Lesserson, L; Ammar, T; DePerio, M; Vela-Cantos, F; Fisher, C; Sarier, K

    1999-08-01

    Platelet dysfunction is a major contributor to bleeding after cardiopulmonary bypass (CPB), yet it remains difficult to diagnose. A point-of-care monitor, the platelet-activated clotting time (PACT), measures the accelerated shortening of the kaolin-activated clotting time caused by the addition of platelet activating factor. The authors sought to evaluate the clinical utility of the PACT by conducting serial measurements of PACT during cardiac surgery and correlating postoperative measurements with blood loss. In 50 cardiac surgical patients, blood was sampled at 10 time points to measure PACT. Simultaneously, platelet reactivity was measured by thrombin receptor agonist peptide-induced expression of P-selectin, using flow cytometry. These tests were analyzed temporally. PACT values, P-selectin expression, and other coagulation tests were analyzed for correlation with postoperative chest tube drainage. PACT and P-selectin expression were maximally reduced after protamine administration. Changes in PACT did not correlate with changes in P-selectin expression at any time interval. Total 8-h chest tube drainage did not correlate with any coagulation test at any time point except P-selectin expression after protamine administration (r = -0.4; P = 0.03). The platelet dysfunction associated with CPB may result from depressed platelet reactivity, as shown by thrombin receptor activating peptide-induced P-selectin expression. Changes in PACT did not correlate with blood loss or with changes in P-selectin expression, suggesting that PACT is not a specific measure of platelet reactivity.

  5. Validation of a new noniterative method for accurate position determination of a scanning laser vibrometer

    NASA Astrophysics Data System (ADS)

    Pauwels, Steven; Boucart, Nick; Dierckx, Benoit; Van Vlierberghe, Pieter

    2000-05-01

    The scanning laser Doppler vibrometer is becoming a popular instrument for vibration testing. It is a non-contacting transducer that can measure many points at high spatial resolution in a short time. Manually aiming the laser beam at the points to be measured is very time-consuming. To use the instrument effectively, the position of the laser Doppler vibrometer relative to the structure must be determined. Once this position is known, any visible point on the structure can be hit and measured automatically. A new algorithm for this position determination is developed, based on a geometry model of the structure. After manually aiming the laser beam at 4 or more known points, the laser position and orientation relative to the structure are determined. Using this calculated position and orientation, a list of mirror angles for every measurement point is generated and used during the measurement. The algorithm is validated in 3 practical cases. In the first case a plate is used whose points were measured very accurately, so the geometry model is assumed to be perfect. The second case is a brake disc, whose geometry points were measured with a ruler, and thus less accurately. The final validation is done on a body-in-white of a car, using a reduced finite element model as the geometry model. This calibration shows that the new algorithm is very effective and practically usable.
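
    At its core, determining a sensor's position and orientation from 4 or more known points is a rigid alignment problem. A minimal sketch using the Kabsch (SVD-based) least-squares fit; this stands in for, and need not match, the authors' actual noniterative algorithm:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t such that dst ≈ R @ src + t
    (Kabsch algorithm). `src`, `dst`: (N, 3) arrays of matched points, N >= 3.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

    Given the fitted pose, the mirror angles for every remaining model point can be computed in the instrument frame.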

  6. Far and Wide - Microbial Bebop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peter Larsen

    2012-10-01

    This musical composition was created from data of microbes (bacteria, algae and other microorganisms) sampled in the English Channel. Argonne National Laboratory biologist Peter Larsen created the songs as a unique way to present and comprehend large datasets. Microbial species of the Order Rickettsiales, such as the highly abundant, free-living planktonic species Pelagibacter ubique, are typical highly abundant taxa in L4 Station data. Its relative abundance in the microbial community at L4 Station follows a distinctive seasonal pattern. In this composition, there are two chords per measure, generated from photosynthetically active radiation measurements and temperature. The melody of each measure is six notes that describe the relative abundance of the Order Rickettsiales. The first note of each measure is from the relative abundance at a time point. The next five notes of a measure follow one of the following patterns: a continuous rise in pitch, a continuous drop in pitch, a rise then drop in pitch, or a drop then rise in pitch. These patterns are matched to the relative abundance of Rickettsiales at the given time point, relative to the previous and subsequent time points. The pattern of notes in a measure is mapped to the relative abundance of Rickettsiales, with fewer rests per measure indicating higher abundance. For time points at which Rickettsiales was the most abundant microbial taxon, the corresponding measure is highlighted with a cymbal crash. More information at http://www.anl.gov/articles/songs-key... Image: Diatoms under a microscope: These tiny phytoplankton are encased within a silicate cell wall. Credit: Prof. Gordon T. Taylor, Stony Brook University

  7. [Associations of the Employment Status during the First 2 Years Following Medical Rehabilitation and Long Term Occupational Trajectories: Implications for Outcome Measurement].

    PubMed

    Holstiege, J; Kaluscha, R; Jankowiak, S; Krischak, G

    2017-02-01

    Study Objectives: The aim was to investigate the predictive value of the employment status measured in the 6th, 12th, 18th, and 24th month after medical rehabilitation for long-term employment trajectories over 4 years. Methods: A retrospective study was conducted based on a 20% sample of all patients receiving inpatient rehabilitation funded by the German pension fund. Patients aged <62 years who were treated for musculoskeletal, cardiovascular, or psychosomatic disorders during the years 2002-2005 were included and followed for 4 consecutive years. The predictive value of the employment status in 4 predefined months after discharge (6th, 12th, 18th, and 24th month) for the total number of months in employment in the 4 years following rehabilitative treatment was analyzed using multiple linear regression. For each time point, separate regression analyses were conducted, including the employment status (employed vs. unemployed) at the respective point in time as an explanatory variable, alongside a standard set of additional prognostic variables. Results: A total of 252 591 patients were eligible for study inclusion. The explained variance of the regression models increased with the point in time at which the employment status was measured. Overall, R² increased by 30%, from the model that included the employment status in the 6th month (R²=0.60) to the model that included the work status in the 24th month (R²=0.78). Conclusion: The accuracy of the prognosis of long-term employment biographies increases with the point in time at which employment is measured in the first 2 years following rehabilitation. These findings should be taken into consideration when predefining the time points used to measure the employment status in future studies. © Georg Thieme Verlag KG Stuttgart · New York.

  8. Entropy of Movement Outcome in Space-Time.

    PubMed

    Lai, Shih-Chiung; Hsieh, Tsung-Yu; Newell, Karl M

    2015-07-01

    Information entropy of the joint spatial and temporal (space-time) probability of discrete movement outcomes was investigated in two experiments as a function of different movement strategies (space-time, space, and time instructional emphases), task goals (point-aiming and target-aiming), and movement speed-accuracy constraints. The variance of the spatial and temporal movement errors was reduced by instructional emphasis on the respective spatial or temporal dimension, but increased on the other dimension. The space-time entropy was lower in the target-aiming task than in the point-aiming task but did not differ between instructional emphases. However, the joint probabilistic measure of spatial and temporal entropy showed that spatial error is traded for timing error in tasks with space-time criteria, and that the pattern of movement error depends on the dimension of the measurement process. The unified entropy measure of movement outcome in space-time reveals a new relation for the speed-accuracy trade-off.
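
    A joint space-time entropy of the kind analyzed here can be estimated from a 2-D histogram of spatial and temporal errors. A minimal sketch; the binning choice is an assumption, not the authors' procedure:

```python
import numpy as np

def joint_entropy_bits(spatial_err, temporal_err, bins=10):
    """Shannon entropy (bits) of the joint space-time outcome distribution,
    estimated from a 2-D histogram of spatial and temporal errors.
    """
    hist, _, _ = np.histogram2d(spatial_err, temporal_err, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

    Perfectly repeatable outcomes give zero entropy; spreading the errors across space-time cells raises it, which is what lets the measure capture trade-offs between the two dimensions.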

  9. Cross-correlation of point series using a new method

    NASA Technical Reports Server (NTRS)

    Strothers, Richard B.

    1994-01-01

    Traditional methods of cross-correlating two time series do not apply to point time series. Here, a new method, devised specifically for point series, utilizes a correlation measure based on the rms difference (or, alternatively, the median absolute difference) between nearest neighbors in overlapped segments of the two series. Error estimates for the observed locations of the points, as well as a systematic shift of one series with respect to the other to accommodate a constant but unknown lead or lag, are easily incorporated into the analysis using Monte Carlo techniques. A methodological restriction adopted here is that one series be treated as a template series against which the other, called the target series, is cross-correlated. To estimate a significance level for the correlation measure, the adopted null hypothesis is that the target series arises from a homogeneous Poisson process. The new method is applied to cross-correlating the times of the greatest geomagnetic storms with the times of maximum of the undecennial solar activity cycle.
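
    The core of the method, scanning a lead/lag and scoring each shift by the rms nearest-neighbor difference, can be sketched as follows (toy event times; the Monte Carlo error treatment and significance test are omitted):

```python
import numpy as np

def nn_rms(template, target, lag):
    """RMS distance from each shifted target event to its nearest
    template event; a small value indicates good alignment at this lag."""
    shifted = np.asarray(target) + lag
    d = np.abs(shifted[:, None] - np.asarray(template)[None, :]).min(axis=1)
    return float(np.sqrt((d ** 2).mean()))

# Toy point series: the target contains the template events lagged by 5.
template = np.array([3.0, 14.0, 24.5, 37.0])
target = template - 5.0
best_lag = min(np.arange(-10, 11), key=lambda lag: nn_rms(template, target, lag))
```

    Scanning candidate lags and keeping the one with the smallest rms recovers the unknown lead or lag between the two point series.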

  10. Improving TWSTFT short-term stability by network time transfer.

    PubMed

    Tseng, Wen-Hung; Lin, Shinn-Yan; Feng, Kai-Ming; Fujieda, M; Maeno, H

    2010-01-01

    Two-way satellite time and frequency transfer (TWSTFT) is one of the major techniques for comparing atomic time scales between timing laboratories. As more and more TWSTFT measurements have been performed, the large number of point-to-point two-way time transfer links has grown into a complex network. For future improvement of TWSTFT performance, it is important to reduce the measurement noise of the TWSTFT results; one method is TWSTFT network time transfer. The Asia-Pacific network is an exceptional case of simultaneous TWSTFT measurements. Some indirect links through relay stations show better short-term stabilities than the direct link, because measurement noise may be partially cancelled in a simultaneous measurement network. In this paper, the authors propose a feasible method to improve short-term stability by combining the direct and indirect links in the network. Comparisons of time deviation (TDEV) show that network time transfer yields clearly improved short-term stabilities. For the links used to compare two hydrogen masers, the average gain in TDEV at an averaging time of 1 h is 22%. As TWSTFT short-term stability can be improved by network time transfer, the network may allow a larger number of simultaneously transmitting stations.
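
    Why combining direct and indirect links can help is visible in a toy white-noise model: an indirect link through a relay carries roughly twice the measurement variance of a direct link, and an inverse-variance average of the two beats the direct link alone. The numbers below are made up; this is not the paper's TDEV processing:

```python
import numpy as np

rng = np.random.default_rng(42)
n, sigma = 10_000, 1.0
true_ab = 25.0                      # ns, hypothetical A-B clock offset

# Direct A-B link: one measurement noise term.
direct = true_ab + rng.normal(0.0, sigma, n)
# Indirect link via relay C: two noise terms (A-C and C-B), variance ~2*sigma^2.
indirect = (true_ab + rng.normal(0.0, sigma, n)) + rng.normal(0.0, sigma, n)

# Inverse-variance weights 2/3 and 1/3 give a combined variance of (2/3)*sigma^2.
combined = (2.0 * direct + indirect) / 3.0
```

    With white noise, the combined series has standard deviation about sqrt(2/3) of the direct link's, illustrating the kind of short-term gain the network approach targets.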

  11. Measurement of Dam Deformations: Case Study of Obruk Dam (Turkey)

    NASA Astrophysics Data System (ADS)

    Gulal, V. Engin; Alkan, R. Metin; Alkan, M. Nurullah; İlci, Veli; Ozulu, I. Murat; Tombus, F. Engin; Kose, Zafer; Aladogan, Kayhan; Sahin, Murat; Yavasoglu, Hakan; Oku, Guldane

    2016-04-01

    In the literature, there is information regarding the first deformation and displacement measurements in dams that were conducted in 1920s Switzerland. Todays, deformation measurements in the dams have gained very different functions with improvements in both measurement equipment and evaluation of measurements. Deformation measurements and analysis are among the main topics studied by scientists who take interest in the engineering measurement sciences. The Working group of Deformation Measurements and Analysis, which was established under the International Federation of Surveyors (FIG), carries out its studies and activities with regard to this subject. At the end of the 1970s, the subject of the determination of fixed points in the deformation monitoring network was one of the main subjects extensively studied. Many theories arose from this inquiry, as different institutes came to differing conclusions. In 1978, a special commission with representatives of universities has been established within the FIG 6.1 working group; this commission worked on the issue of determining a general approach to geometric deformation analysis. The results gleaned from the commission were discussed at symposiums organized by the FIG. In accordance with these studies, scientists interested in the subject have begun to work on models that investigate cause and effect relations between the effects that cause deformation and deformation. As of the scientist who interest with the issue focused on different deformation methods, another special commission was established within the FIG engineering measurements commission in order to classify deformation models and study terminology. After studying this material for a long time, the official commission report was published in 2001. 
In this prepared report, studies have been carried out by considering the FIG Engineering Surveying Commission's report entitled, 'MODELS AND TERMINOLOGY FOR THE ANALYSIS OF GEODETIC MONITORING OBSERVATIONS'. In October of 2015, geodetic deformation measurements were conducted by considering FIG reports related to deformation measurements and German DIN 18710 Engineering Measurements norms in the Çorum province of Turkey. The main purpose of the study is to determine optimum measurement and evaluation methods that will be used to specify movements in the horizontal and vertical directions for the fill dam. For this purpose; • In reference networks consisting of 8 points, measurements were performed by using long-term dual-frequency GNSS receivers for duration of 8 hours. • GNSS measurements were conducted in varying times between 30 minutes and 120 minutes at the 44 units object points on the body of the dam. • Two repetitive measurements of real time kinematic (RTK) GNSS were conducted at the object points on dam. • Geometric leveling measurements were performed between reference and object points. • Trigonometric leveling measurements were performed between reference and object points. • Polar measurements were performed between references and object points. GNSS measurements performed at reference points of the monitoring network for 8 hours have been evaluated by using GAMIT software in accordance with the IGS points in the region. In this manner, regional and local movements in the network can be determined. It is aimed to determine measurement period which will provide 1-2mm accuracy that expected in local GNSS network by evaluating GNSS measurements performed on body of dam. Results will be compared by offsetting GNSS and terrestrial measurements. This study will investigate whether or not there is increased accuracy provided by GNSS measurements carried out among reference points without the possibility of vision.

  12. Effect of spinal needle characteristics on measurement of spinal canal opening pressure.

    PubMed

    Bellamkonda, Venkatesh R; Wright, Thomas C; Lohse, Christine M; Keaveny, Virginia R; Funk, Eric C; Olson, Michael D; Laack, Torrey A

    2017-05-01

    A wide variety of spinal needles are used in clinical practice. Little is currently known regarding the impact of needle length, gauge, and tip type on the needle's ability to measure spinal canal opening pressure. This study aimed to investigate the relationship between these factors and the opening-pressure measurement or time to obtain an opening pressure. Thirteen distinct spinal needles, chosen to isolate the effects of length, gauge, and needle-point type, were prospectively tested on a lumbar puncture simulator. The key outcomes were the opening-pressure measurement and the time required to obtain that measure. Pressures were recorded at 10-s intervals until 3 consecutive, identical readings were observed. Time to measure opening pressure increased with increasing spinal needle length, increasing gauge, and the Quincke-type (cutting) point (P<0.001 for all). The time to measurement ranged from 30s to 530s, yet all needle types were able to obtain a consistent opening pressure measure. Although opening pressure estimates are unlikely to vary markedly by needle type, the time required to obtain the measurement increased with increasing needle length and gauge and with Quincke-type needles. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Reliability of Fitness Tests Using Methods and Time Periods Common in Sport and Occupational Management

    PubMed Central

    Burnstein, Bryan D.; Steele, Russell J.; Shrier, Ian

    2011-01-01

    Context: Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. Objective: To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Design: Cohort study. Setting: Eighteen different Cirque du Soleil shows. Patients or Other Participants: Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Intervention(s): Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. Main Outcome Measure(s): We calculated the intraclass correlation coefficient (ICC) and limits of agreement between baseline and each time point and the ICC over all 4 time points combined. Results: Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limit-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Conclusions: Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators. PMID:22488138
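
    A one-way random-effects ICC of the kind used for such test-retest reliability can be computed directly from the between- and within-subject mean squares. A minimal sketch of ICC(1,1); the exact ICC variant used by the authors is not specified here:

```python
import numpy as np

def icc_1_1(data):
    """One-way random-effects intraclass correlation coefficient ICC(1,1).

    `data`: (n_subjects, k_repeats) array of scores, one row per subject.
    """
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    # Between-subject and within-subject mean squares from one-way ANOVA.
    ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

    Perfectly repeatable scores give ICC = 1; values above the study's 0.6 threshold would count as acceptable reliability.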

  14. Sampling command generator corrects for noise and dropouts in recorded data

    NASA Technical Reports Server (NTRS)

    Anderson, T. O.

    1973-01-01

    Generator measures period between zero crossings of reference signal and accepts as correct timing points only those zero crossings which occur acceptably close to nominal time predicted from last accepted command. Unidirectional crossover points are used exclusively so errors from analog nonsymmetry of crossover detector are avoided.
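
    The acceptance logic described, keeping only zero crossings close to the time predicted from the last accepted one, can be sketched in a few lines. The tolerance value is hypothetical, and real hardware would also coast over missed periods during dropouts:

```python
def accept_crossings(crossings, nominal_period, tolerance):
    """Accept only upward zero-crossing times that fall within `tolerance`
    of the time predicted from the last accepted crossing; noise-induced
    crossings at other times are rejected."""
    accepted = [crossings[0]]                 # trust the first crossing
    for t in crossings[1:]:
        predicted = accepted[-1] + nominal_period
        if abs(t - predicted) <= tolerance:
            accepted.append(t)
    return accepted
```

    Spurious crossings from noise land far from the predicted time and are discarded, while slightly jittered genuine crossings are kept.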

  15. Experimental study of influence characteristics of flue gas fly ash on acid dew point

    NASA Astrophysics Data System (ADS)

    Song, Jinhui; Li, Jiahu; Wang, Shuai; Yuan, Hui; Ren, Zhongqiang

    2017-12-01

    The long-term operating experience of a large number of utility boilers shows that the measured acid dew point is generally lower than the estimated value. This is because the estimation formulas for the acid dew point do not consider the influence of the CaO and MgO in flue gas fly ash. On the basis of previous studies, an experimental device for acid dew point measurement was designed and constructed, and the acid dew point was measured under different flue gas conditions. The results show that the CaO and MgO in the flue gas fly ash have an obvious influence on the acid dew point: the CaO and MgO content of the fly ash is negatively correlated with the acid dew point temperature. At the same time, the acid dew point varies with the H2SO4 concentration of the flue gas and is positively correlated with it.
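
    Estimation formulas of the kind the abstract refers to include the widely quoted Verhoff correlation for the sulfuric acid dew point (partial pressures in mmHg, temperature in kelvin). The coefficients below are as commonly cited and should be checked against a primary source:

```python
import math

def acid_dew_point_K(p_h2o_mmHg, p_so3_mmHg):
    """Sulfuric acid dew point via the commonly quoted Verhoff correlation.

    Partial pressures in mmHg; returns temperature in kelvin. Note the
    abstract's point: such formulas ignore fly-ash CaO/MgO, so measured
    dew points tend to fall below these estimates.
    """
    a, b = math.log(p_h2o_mmHg), math.log(p_so3_mmHg)
    inv_t = 1e-3 * (2.276 - 0.02943 * a - 0.0858 * b + 0.0062 * a * b)
    return 1.0 / inv_t

# ~8 vol% H2O and ~10 ppm SO3 at 1 atm, a typical flue gas condition.
t_dew = acid_dew_point_K(60.8, 0.0076)
```

    For these inputs the correlation gives a dew point around 130-135 °C, and raising the SO3 (hence H2SO4) partial pressure raises the predicted dew point, consistent with the positive correlation reported above.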

  16. Quantitative FE-EPMA measurement of formation and inhibition of carbon contamination on Fe for trace carbon analysis.

    PubMed

    Tanaka, Yuji; Yamashita, Takako; Nagoshi, Masayasu

    2017-04-01

    Hydrocarbon contamination introduced during point, line and map analyses in field emission electron probe microanalysis (FE-EPMA) was investigated to enable reliable quantitative analysis of trace amounts of carbon in steels. The increment of contamination on pure iron in point analysis is proportional to the number of iterations of beam irradiation, but not to the accumulated irradiation time. A combination of a longer dwell time and single measurement with a liquid nitrogen (LN2) trap as an anti-contamination device (ACD) is sufficient for a quantitative point analysis. However, in line and map analyses, contamination increases with irradiation time in addition to the number of iterations, even though the LN2 trap and a plasma cleaner are used as ACDs. Thus, a shorter dwell time and single measurement are preferred for line and map analyses, although it is difficult to eliminate the influence of contamination. While ring-like contamination around the irradiation point grows during electron-beam irradiation, contamination at the irradiation point increases during the blanking time after irradiation. This can explain the increment of contamination in iterative point analysis as well as in line and map analyses. Among the ACDs tested in this study, specimen heating at 373 K has a significant contamination inhibition effect. This technique makes it possible to obtain line and map analysis data with minimum influence of contamination. The above-mentioned FE-EPMA data are presented and discussed in terms of the contamination-formation mechanisms and the preferable experimental conditions for the quantification of trace carbon in steels. © The Author 2016. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. Evaluation of 2-point, 3-point, and 6-point Dixon magnetic resonance imaging with flexible echo timing for muscle fat quantification.

    PubMed

    Grimm, Alexandra; Meyer, Heiko; Nickel, Marcel D; Nittka, Mathias; Raithel, Esther; Chaudry, Oliver; Friedberger, Andreas; Uder, Michael; Kemmler, Wolfgang; Quick, Harald H; Engelke, Klaus

    2018-06-01

    The purpose of this study is to evaluate and compare 2-point (2pt), 3-point (3pt), and 6-point (6pt) Dixon magnetic resonance imaging (MRI) sequences with flexible echo times (TE) to measure proton density fat fraction (PDFF) within muscles. Two subject groups were recruited (G1: 23 young and healthy men, 31 ± 6 years; G2: 50 elderly men, sarcopenic, 77 ± 5 years). A 3-T MRI system was used to perform Dixon imaging on the left thigh. PDFF was measured with six Dixon prototype sequences: 2pt, 3pt, and 6pt sequences once with optimal TEs (in- and opposed-phase echo times), lower resolution, and higher bandwidth (optTE sequences) and once with higher image resolution (highRes sequences) and shortest possible TE, respectively. Intra-fascia PDFF content was determined. To evaluate the comparability among the sequences, Bland-Altman analysis was performed. The highRes 6pt Dixon sequences served as reference as a high correlation of this sequence to magnetic resonance spectroscopy has been shown before. The PDFF difference between the highRes 6pt Dixon sequence and the optTE 6pt, both 3pt, and the optTE 2pt was low (between 2.2% and 4.4%), however, not to the highRes 2pt Dixon sequence (33%). For the optTE sequences, difference decreased with the number of echoes used. In conclusion, for Dixon sequences with more than two echoes, the fat fraction measurement was reliable with arbitrary echo times, while for 2pt Dixon sequences, it was reliable with dedicated in- and opposed-phase echo timing. Copyright © 2018 Elsevier B.V. All rights reserved.
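
    With true in- and opposed-phase echo times, the 2-point Dixon reconstruction behind these PDFF measurements reduces to a sum and a difference. A minimal magnitude-only sketch; real reconstructions also handle phase errors and the fat-water swap above 50% fat:

```python
import numpy as np

def pdff_two_point(in_phase, opposed_phase):
    """Fat fraction from magnitude 2-point Dixon images acquired at
    in-phase (water + fat) and opposed-phase (water - fat) echo times.
    """
    ip = np.asarray(in_phase, float)
    op = np.asarray(opposed_phase, float)
    water = (ip + op) / 2.0
    fat = (ip - op) / 2.0
    return fat / (water + fat)       # = fat signal / in-phase signal

# Hypothetical voxel: in-phase 100, opposed-phase 60 -> water 80, fat 20.
frac = pdff_two_point([100.0], [60.0])
```

    This decomposition is exact only at the in/opposed-phase TEs, which is consistent with the study's finding that 2-point sequences need dedicated echo timing while multi-echo sequences tolerate arbitrary TEs.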

  18. Motion data classification on the basis of dynamic time warping with a cloud point distance measure

    NASA Astrophysics Data System (ADS)

    Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad

    2016-06-01

    The paper addresses the classification of model-free motion data. A nearest-neighbors classifier is proposed, based on comparison by a Dynamic Time Warping transform with a cloud point distance measure. The classification utilizes both specific gait features, reflected in the movements of successive skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification problem is considered. The motion capture database containing data of 30 different humans, collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory: the obtained accuracy of human recognition exceeds 90%. Moreover, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which makes the validation reliable.
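    The core idea can be sketched as dynamic time warping whose per-frame cost is a distance between two point clouds. The symmetric mean nearest-neighbour (Chamfer-style) frame distance below is an assumption for illustration; the authors' exact cloud point distance may differ:

```python
import math

def cloud_dist(a, b):
    """Symmetric mean nearest-neighbour distance between two 3D point sets."""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(a, b) + one_way(b, a))

def dtw(seq_a, seq_b, dist=cloud_dist):
    """Dynamic time warping cost between two sequences of point clouds."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    acc = [[inf] * (m + 1) for _ in range(n + 1)]
    acc[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            acc[i][j] = cost + min(acc[i - 1][j], acc[i][j - 1], acc[i - 1][j - 1])
    return acc[n][m]

# Two short "gaits" built from identical frames have zero DTW cost,
# even though the sequences differ in length.
frame = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(dtw([frame, frame], [frame, frame, frame]))  # -> 0.0
```

    A 1-nearest-neighbor classifier then assigns the label of the training sequence with the smallest DTW cost. Because the frame distance compares raw joint positions rather than sensor-specific channels, it does not depend on the motion capture calibration.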

  19. Impact of Breakfasts (with or without Eggs) on Body Weight Regulation and Blood Lipids in University Students over a 14-Week Semester

    PubMed Central

    Rueda, Janice M.; Khosla, Pramod

    2013-01-01

    The effects of breakfast type on body weight and blood lipids were evaluated in university freshmen. Seventy-three subjects were instructed to consume a breakfast with eggs (Egg Breakfast, EB, n = 39) or without (Non-Egg Breakfast, NEB, n = 34), five times/week for 14 weeks. Breakfast composition, anthropometric measurements and blood lipids were measured at multiple time points. During the study, mean weight change was 1.6 ± 5.3 lbs (0.73 ± 2.41 kg), with no difference between groups. Both groups consumed similar calories at breakfast at all time points. The EB group consumed significantly more calories at breakfast from protein, total fat and saturated fat, but significantly fewer calories from carbohydrate, at every time point. Cholesterol consumption at breakfast in the EB group was significantly higher than in the NEB group at all time points. Breakfast food choices (other than eggs) were similar between groups. Blood lipids were similar between groups at all time points, indicating that the additional 400 mg/day of dietary cholesterol did not negatively impact blood lipids. PMID:24352089

  20. Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation

    NASA Astrophysics Data System (ADS)

    Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.

    2017-12-01

    When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R2) between SSR time series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R2 ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R2) on the order of 400 km (with a spread of 100-600 km) was found. R2 from correlations between point observations and colocated grid box area means determined from satellite data was found to be 0.80 for a 1° grid. To quantify the error that arises when using a point observation as a surrogate for the area-mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) for a 1° grid of 8 (3) W/m2 for monthly (annual) time series. The SSE based on a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible on those time scales.
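    The decorrelation length quoted above is the e-folding distance of R2: the separation at which R2 falls to 1/e of its zero-distance value. Under an assumed exponential decay R2(d) = exp(-d/L), it can be read off by interpolating where the sampled R2 curve crosses 1/e. A toy sketch with made-up numbers, not the paper's station data:

```python
import math

def efolding_distance(distances, r2):
    """Distance at which R^2 first drops below 1/e (linear interpolation)."""
    target = 1.0 / math.e
    for (d0, v0), (d1, v1) in zip(zip(distances, r2), zip(distances[1:], r2[1:])):
        if v0 >= target > v1:
            return d0 + (v0 - target) * (d1 - d0) / (v0 - v1)
    raise ValueError("R^2 never crosses 1/e in the sampled range")

# Hypothetical R^2 values sampled every 100 km, decaying with L = 400 km:
d = [0, 100, 200, 300, 400, 500]
r2 = [math.exp(-x / 400.0) for x in d]
print(round(efolding_distance(d, r2)))  # close to 400 (km)
```

    Applied to real station pairs, the scatter of such crossings over many pairs would produce the 100-600 km spread reported above.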

  1. An experimental investigation of a three dimensional wall jet. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Catalano, G. D.

    1977-01-01

    One and two point statistical properties are measured in the flow fields of a coflowing turbulent jet. Two different confining surfaces (one flat, one with large curvature) are placed adjacent to the lip of the circular nozzle; and the resultant effects on the flow field are determined. The one point quantities measured include mean velocities, turbulent intensities, velocity and concentration autocorrelations and power spectral densities, and intermittencies. From the autocorrelation curves, the Taylor microscale and the integral length scale are calculated. Two point quantities measured include velocity and concentration space-time correlations and pressure velocity correlations. From the velocity space-time correlations, iso-correlation contours are constructed along with the lines of maximum maximorum. These lines allow a picture of the flow pattern to be determined. The pressures monitored in the pressure velocity correlations are measured both in the flow field and at the surface of the confining wall(s).

  2. Local pulse wave velocity estimated from small vibrations measured ultrasonically at multiple points on the arterial wall

    NASA Astrophysics Data System (ADS)

    Ito, Mika; Arakawa, Mototaka; Kanai, Hiroshi

    2018-07-01

    Pulse wave velocity (PWV) is used as a diagnostic criterion for arteriosclerosis, a major cause of heart disease and cerebrovascular disease. However, there are several problems with conventional PWV measurement techniques. One is that the pulse wave is assumed to have only an incident component propagating at a constant speed from the heart to the femoral artery; another is that PWV is determined only from a characteristic time, such as the rise time of the blood pressure waveform. In this study, we noninvasively measured the velocity waveform of small vibrations at multiple points on the carotid arterial wall using ultrasound. Local PWV was determined by analyzing the phase component of the velocity waveform by the least squares method. This method allowed measurement of the time change of the PWV around the arrival time of the pulse wave, making it possible to isolate the period not yet contaminated by the reflected component.
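    The phase-based estimate can be sketched as follows: at angular frequency ω, a wave travelling at speed c accumulates phase φ(x) = φ0 - ωx/c along the wall, so a least-squares line through the phases measured at several points gives the local PWV as c = -ω/slope. A toy example with synthetic phases, standing in for the authors' actual processing chain:

```python
import math

def local_pwv(positions_m, phases_rad, freq_hz):
    """Least-squares phase slope along the wall -> pulse wave velocity (m/s)."""
    n = len(positions_m)
    mx = sum(positions_m) / n
    mp = sum(phases_rad) / n
    sxx = sum((x - mx) ** 2 for x in positions_m)
    sxp = sum((x - mx) * (p - mp) for x, p in zip(positions_m, phases_rad))
    slope = sxp / sxx                      # d(phase)/dx, rad per metre
    return -2.0 * math.pi * freq_hz / slope

# Synthetic data: a 10 Hz component travelling at 8 m/s past 4 wall points
c, f = 8.0, 10.0
xs = [0.000, 0.005, 0.010, 0.015]          # measurement points, metres
phis = [-2.0 * math.pi * f * x / c for x in xs]
print(local_pwv(xs, phis, f))  # -> 8.0 (approximately)
```

    Repeating this fit in short time windows around the pulse arrival yields the time-resolved local PWV described in the abstract.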

  3. Improved UUV Positioning Using Acoustic Communications and a Potential for Real-Time Networking and Collaboration

    DTIC Science & Technology

    2017-06-01

    Table of contents excerpts: III. Acoustic Wave Travel Time Estimation; Table 8. Average Horizontal Distance from the UUV to the Reference Points when a Travel Time Measurement is Taken; Table 9. Average UUV Depth when a Travel Time Measurement is Taken; Table 10. Ratio …

  4. Optical multi-point measurements of the acoustic particle velocity with frequency modulated Doppler global velocimetry.

    PubMed

    Fischer, Andreas; König, Jörg; Haufe, Daniel; Schlüssler, Raimund; Büttner, Lars; Czarske, Jürgen

    2013-08-01

    To reduce the noise of machines such as aircraft engines, the development and propagation of sound has to be investigated. Since the applicability of microphones is limited due to their intrusiveness, contactless measurement techniques are required. For this reason, the present study describes an optical method based on the Doppler effect and its application for acoustic particle velocity (APV) measurements. While former APV measurements with Doppler techniques are point measurements, the applied system is capable of simultaneous measurements at multiple points. In its current state, the system provides linear array measurements of one component of the APV demonstrated by multi-tone experiments with tones up to 17 kHz for the first time.

  5. Validation of Accelerometer Cut-Points in Children With Cerebral Palsy Aged 4 to 5 Years.

    PubMed

    Keawutan, Piyapa; Bell, Kristie L; Oftedal, Stina; Davies, Peter S W; Boyd, Roslyn N

    2016-01-01

    To derive and validate triaxial accelerometer cut-points in children with cerebral palsy (CP) and compare these with previously established cut-points in children with typical development. Eighty-four children with CP aged 4 to 5 years wore the ActiGraph during a play-based gross motor function measure assessment that was video-taped for direct observation. Receiver operating characteristic and Bland-Altman plots were used for analyses. The ActiGraph had good classification accuracy in Gross Motor Function Classification System (GMFCS) levels III and V and fair classification accuracy in GMFCS levels I, II, and IV. These results support the use of the previously established cut-points for sedentary time of 820 counts per minute in children with CP aged 4 to 5 years across all functional abilities. The cut-point provides an objective measure of sedentary and active time in children with CP. The cut-point is applicable to group data but not for individual children.
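    Cut-point derivation from a receiver operating characteristic analysis can be sketched generically: sweep candidate thresholds over the counts-per-minute values and keep the one maximizing Youden's J = sensitivity + specificity - 1 against the direct-observation labels. The numbers below are made up for illustration and are not the study's data:

```python
def best_cut_point(counts, sedentary):
    """Threshold on accelerometer counts maximizing Youden's J.

    counts: counts-per-minute per epoch; sedentary: True if direct
    observation labelled the epoch sedentary.
    """
    best = (-1.0, None)
    for thr in sorted(set(counts)):
        tp = sum(c <= thr and s for c, s in zip(counts, sedentary))
        fn = sum(c > thr and s for c, s in zip(counts, sedentary))
        tn = sum(c > thr and not s for c, s in zip(counts, sedentary))
        fp = sum(c <= thr and not s for c, s in zip(counts, sedentary))
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        best = max(best, (j, thr))
    return best[1]

# Made-up epochs: low counts mostly sedentary, high counts mostly active.
counts = [100, 300, 700, 820, 900, 1500, 2600, 4000]
labels = [True, True, True, True, False, False, False, False]
print(best_cut_point(counts, labels))  # -> 820
```

    In practice the classification accuracy of the chosen threshold is then checked per subgroup, as the study does across GMFCS levels.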

  6. Standard of Practice and Flynn Effect Testimony in Death Penalty Cases

    ERIC Educational Resources Information Center

    Gresham, Frank M.; Reschly, Daniel J.

    2011-01-01

    The Flynn Effect is a well-established psychometric fact documenting substantial increases in measured intelligence test performance over time. Flynn's (1984) review of the literature established that Americans gain approximately 0.3 points per year or 3 points per decade in measured intelligence. The accurate assessment and interpretation of…

  7. Development of a Novel System to Measure a Clearance of a Passenger Platform

    NASA Astrophysics Data System (ADS)

    Shimizu, M.; Oizumi, J.; Matsuoka, R.; Takeda, H.; Okukura, H.; Ooya, A.; Koike, A.

    2016-06-01

    Clearances of a passenger platform at a railway station should be appropriately maintained for the safety of both trains and passengers. In most Japanese railways, the clearance between a platform and a train car is measured precisely once or twice a year. Because current measurement systems operate on a track, closure of the track is unavoidable. Since the closure procedure is time-consuming and bothersome, we decided to develop a new system that measures clearances without closing the track. The new system is required to work on a platform, with a required measurement accuracy better than several millimetres. We adopted a 3D laser scanner and stop-and-go operation for the new system. The current systems on a track measure clearances continuously at walking speed, while our system on a platform measures clearances at approximately ten-metre intervals. The scanner, controlled by a PC, acquires a set of point data at each measuring station. Edge points of the platform and top and side points of the two rails are detected from the acquired point data. Finally, clearances of the platform are calculated using the detected feature points of the platform and the rails. The results of an experiment using a prototype of our system show that the measurement accuracy of our system would be satisfactory, but our system would take more time than the current systems. Since our system requires no closure of a track, we conclude that it would be convenient and effective.

  8. The assessment of lower face morphology changes in edentulous patients after prosthodontic rehabilitation, using two methods of measurement.

    PubMed

    Jivănescu, Anca; Bratu, Dana Cristina; Tomescu, Lucian; Măroiu, Alexandra Cristina; Popa, George; Bratu, Emanuel Adrian

    2015-01-01

    Using two measurement methods (a three-dimensional laser scanning system and a digital caliper), this study compares the lower face morphology of completely edentulous patients before and after prosthodontic rehabilitation with bimaxillary complete dentures. Fourteen edentulous patients were randomly selected from the Department of Prosthodontics at the Faculty of Dental Medicine, "Victor Babes" University of Medicine and Pharmacy, Timisoara, Romania. The changes that occurred in the lower third of the face after prosthodontic treatment were assessed quantitatively by measuring the vertical projection of the distances between two sets of anthropometric landmarks: Subnasale - cutaneous Pogonion (D1) and Labiale superius - Labiale inferius (D2). A two-way repeated measures ANOVA was carried out to test for significant interactions, main effects and differences between the two types of measuring devices and between the initial and final rehabilitation time points. The main effect of the type of measuring device showed no statistically significant differences in the measured distances (p=0.24 for D1 and p=0.39 for D2). Regarding the main effect of time, there were statistically significant differences in both measured distances, D1 and D2 (p=0.001), between the initial and the final rehabilitation time points. The two methods of measurement were equally reliable in the assessment of lower face morphology changes in edentulous patients after prosthodontic rehabilitation with bimaxillary complete dentures. The differences between the measurements taken before and after prosthodontic rehabilitation proved to be statistically significant.

  9. A model for incomplete longitudinal multivariate ordinal data.

    PubMed

    Liu, Li C

    2008-12-30

    In studies where multiple outcome items are repeatedly measured over time, missing data often occur. A longitudinal item response theory model is proposed for the analysis of multivariate ordinal outcomes that are repeatedly measured. Under the MAR assumption, this model accommodates missing data at any level (missing item at any time point and/or missing time point). It allows for multiple random subject effects and the estimation of item discrimination parameters for the multiple outcome items. The covariates in the model can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is described, utilizing multidimensional Gauss-Hermite quadrature for integration over the random effects. An iterative Fisher-scoring solution, which provides standard errors for all model parameters, is used. A data set from a longitudinal prevention study is used to motivate the application of the proposed model. In this study, multiple ordinal items of health behavior were repeatedly measured over time. Because of a planned missing design, subjects answered only two-thirds of all items at a given time point. Copyright 2008 John Wiley & Sons, Ltd.
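    Marginalizing a random subject effect b ~ N(0, σ²) with Gauss-Hermite quadrature uses E[g(b)] ≈ (1/√π) Σ wᵢ g(√2 σ xᵢ), with Hermite nodes xᵢ and weights wᵢ. A minimal one-dimensional sketch with hard-coded 3-point nodes follows (the model itself uses multidimensional quadrature inside maximum marginal likelihood estimation):

```python
import math

# 3-point Gauss-Hermite nodes and weights (physicists' convention)
NODES = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
WEIGHTS = [math.sqrt(math.pi) / 6, 2 * math.sqrt(math.pi) / 3, math.sqrt(math.pi) / 6]

def gauss_hermite_expect(g, sigma=1.0):
    """E[g(b)] for b ~ N(0, sigma^2), via 3-point Gauss-Hermite quadrature."""
    return sum(w * g(math.sqrt(2.0) * sigma * x)
               for x, w in zip(NODES, WEIGHTS)) / math.sqrt(math.pi)

# Marginal probability of a positive ordinal response under a logistic
# model with random intercept b: P(y=1) = E[ 1 / (1 + exp(-(beta0 + b))) ]
logistic = lambda eta: 1.0 / (1.0 + math.exp(-eta))
print(gauss_hermite_expect(lambda b: logistic(0.5 + b), sigma=1.0))

# Sanity check: the rule is exact for polynomials up to degree 5,
# so E[b^2] recovers the variance.
print(gauss_hermite_expect(lambda b: b * b, sigma=2.0))  # -> 4.0 (to machine precision)
```

    In practice more quadrature points per random-effect dimension are used, and the quadrature appears inside the likelihood maximized by Fisher scoring.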

  10. Multivariate random regression analysis for body weight and main morphological traits in genetically improved farmed tilapia (Oreochromis niloticus).

    PubMed

    He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing

    2017-11-02

    Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time points in genetically improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for each analyzed trait separately via the adaptively penalized likelihood statistical criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of order three for body weight (BWE) and body length (BL) and of order two for body depth (BD). Using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only the heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between growth time points exceeded 0.5, whether for single or pairwise time points, and correlations between early and late growth time points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of comprehensive selection for BWE and the main morphological traits.
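    In a random regression model, each animal's trajectory is a weighted sum of Legendre polynomials of the standardized age; an order-k fit uses P0…Pk. A minimal sketch of the basis via the Bonnet recurrence, with age mapped onto [-1, 1]. The coefficient values are hypothetical, purely for illustration:

```python
def legendre_basis(order, t):
    """Legendre polynomials P0..P_order at t in [-1, 1] (Bonnet recurrence)."""
    p = [1.0, t]
    for n in range(1, order):
        p.append(((2 * n + 1) * t * p[n] - n * p[n - 1]) / (n + 1))
    return p[: order + 1]

def standardize(age, age_min, age_max):
    """Map age onto [-1, 1] before evaluating the polynomials."""
    return -1.0 + 2.0 * (age - age_min) / (age_max - age_min)

# Trajectory value for one animal = dot product of its (hypothetical)
# random-regression coefficients with the basis at the standardized age.
coeffs = [50.0, 12.0, 3.0, -1.0]            # illustrative only
t = standardize(100, 60, 140)               # day 100 in a 60-140 day window
print(sum(c * p for c, p in zip(coeffs, legendre_basis(3, t))))  # -> 48.5
```

    The (co)variance of the random coefficients, combined with this basis, gives the covariance function from which the age-specific heritabilities and genetic correlations above are derived.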

  11. Sick building syndrome (SBS) and exposure to water-damaged buildings: time series study, clinical trial and mechanisms.

    PubMed

    Shoemaker, Ritchie C; House, Dennis E

    2006-01-01

    Occupants of water-damaged buildings (WDBs) with evidence of microbial amplification often describe a syndrome involving multiple organ systems, commonly referred to as "sick building syndrome" (SBS), following chronic exposure to the indoor air. Studies have demonstrated that the indoor air of WDBs often contains a complex mixture of fungi, mycotoxins, bacteria, endotoxins, antigens, lipopolysaccharides, and biologically produced volatile compounds. A case-series study with medical assessments at five time points was conducted to characterize the syndrome after a double-blinded, placebo-controlled clinical trial conducted among a group of study participants investigated the efficacy of cholestyramine (CSM) therapy. The general hypothesis of the time series study was that chronic exposure to the indoor air of WDBs is associated with SBS. Consecutive clinical patients were screened for diagnosis of SBS using criteria of exposure potential, symptoms involving at least five organ systems, and the absence of confounding factors. Twenty-eight cases signed voluntary consent forms for participation in the time-series study and provided samples of microbial contaminants from water-damaged areas in the buildings they occupied. Twenty-six participants with a group-mean duration of illness of 11 months completed examinations at all five study time points. Thirteen of those participants also agreed to complete a double-blinded, placebo-controlled clinical trial. Data from Time Point 1 indicated a group-mean of 23 out of 37 symptoms evaluated; and visual contrast sensitivity (VCS), an indicator of neurological function, was abnormally low in all participants. Measurements of matrix metalloproteinase 9 (MMP9), leptin, alpha melanocyte stimulating hormone (MSH), vascular endothelial growth factor (VEGF), immunoglobulin E (IgE), and pulmonary function were abnormal in 22, 13, 25, 14, 1, and 7 participants, respectively. 
Following 2 weeks of CSM therapy to enhance toxin elimination rates, measurements at Time Point 2 indicated group-means of 4 symptoms and a 65% improvement in VCS at mid-spatial frequency, both statistically significant improvements relative to Time Point 1. Moderate improvements were seen in MMP9, leptin, and VEGF serum levels. The improvements in health status were maintained at Time Point 3, following a 2-week period during which CSM therapy was suspended and the participants avoided re-exposure to the WDBs. Participants then reoccupied the respective WDBs for 3 days without CSM therapy, and all participants reported relapse at Time Point 4. The group-mean number of symptoms increased from 4 at Time Point 2 to 15, and VCS at mid-spatial frequency declined by 42%, both statistically significant differences relative to Time Point 2. Statistically significant differences in the group-mean levels of MMP9 and leptin relative to Time Point 2 were also observed. CSM therapy was reinstated for 2 weeks prior to assessments at Time Point 5. Measurements at Time Point 5 indicated group-means of 3 symptoms and a 69% increase in VCS, both results statistically different from those at Time Points 1 and 4. Optically corrected Snellen Distance Equivalent visual acuity scores did not vary significantly over the course of the study. Group-mean levels of MMP9 and leptin showed statistically significant improvement at Time Point 5 relative to Time Points 1 and 4, and the proportion of participants with abnormal VEGF levels was significantly lower at Time Point 5 than at Time Point 1. The numbers of participants at Time Point 5 with abnormal levels of MMP9, leptin, VEGF, and pulmonary function were 10, 10, 9, and 7, respectively. The level of IgE was not re-measured because of the low incidence of abnormality at Time Point 1, and MSH was not re-measured because previously published data indicated a long time course for MSH improvement.
The results from the time series study supported the general study hypothesis that exposure to the indoor air of WDBs is associated with SBS. High levels of MMP9 indicated that exposure to the complex mixture of substances in the indoor air of the WDBs triggered a pro-inflammatory cytokine response. A model describing modes of action along a pathway leading to biotoxin-associated illness is presented to organize current knowledge into testable hypotheses. The model links an inflammatory response with tissue hypoxia, as indicated by abnormal levels of VEGF, and disruption of the proopiomelanocortin pathway in the hypothalamus, as evidenced by abnormalities in leptin and MSH levels. Results from the clinical trial on CSM efficacy indicated highly significant improvement in group-mean number of symptoms and VCS scores relative to baseline in the 7 participants randomly assigned to receive 2 weeks of CSM therapy, but no improvement in the 6 participants assigned placebo therapy during that time interval. However, those 6 participants also showed a highly significant improvement in group-mean number of symptoms and VCS scores relative to baseline following a subsequent 2-week period of CSM therapy. Because the only known benefit of CSM therapy is to enhance the elimination rates of substances that accumulate in bile by preventing re-absorption during enterohepatic re-circulation, results from the clinical trial also supported the general study hypothesis that SBS is associated with exposure to WDBs because the only relevant function of CSM is to bind and remove toxigenic compounds. Only research that focuses on the signs, symptoms, and biochemical markers of patients with persistent illness following acute and/or chronic exposure to WDBs can further the development of the model describing modes of action in the biotoxin-associated pathway and guide the development of innovative and efficacious therapeutic interventions.

  12. Estimating a Meaningful Point of Change: A Comparison of Exploratory Techniques Based on Nonparametric Regression

    ERIC Educational Resources Information Center

    Klotsche, Jens; Gloster, Andrew T.

    2012-01-01

    Longitudinal studies are increasingly common in psychological research. Characterized by repeated measurements, longitudinal designs aim to observe phenomena that change over time. One important question involves identification of the exact point in time when the observed phenomena begin to meaningfully change above and beyond baseline…

  13. Parsimonious model for blood glucose level monitoring in type 2 diabetes patients.

    PubMed

    Zhao, Fang; Ma, Yan Fen; Wen, Jing Xiao; DU, Yan Fang; Li, Chun Lin; Li, Guang Wei

    2014-07-01

    To establish a parsimonious model for blood glucose monitoring in patients with type 2 diabetes receiving oral hypoglycemic agent treatment. One hundred and fifty-nine adult Chinese type 2 diabetes patients were randomized to receive rapid-acting or sustained-release gliclazide therapy for 12 weeks. Their blood glucose levels were measured at 10 time points over a 24 h period before and after treatment, and the 24 h mean blood glucose (MBG) levels were calculated. The contribution of individual blood glucose levels to the MBG and HbA1c was assessed by multiple regression analysis. The correlation coefficients of the blood glucose levels measured at the 10 time points with the daily MBG were 0.58-0.74 and 0.59-0.79, respectively, before and after treatment (P<0.0001). Multiple stepwise regression analysis showed that the blood glucose levels measured at 6 of the 10 time points could explain 95% and 97% of the changes in MBG before and after treatment. The three blood glucose levels measured at fasting, 2 h after breakfast and before dinner could explain 84% and 86% of the changes in MBG before and after treatment, but only 36% and 26% of the changes in HbA1c, and they had a poorer correlation with HbA1c than with the 24 h MBG. The blood glucose levels measured at fasting, 2 h after breakfast and before dinner faithfully reflected changes in the 24 h blood glucose level, suggesting that they are appropriate for the self-monitoring of blood glucose in diabetes patients receiving oral anti-diabetes therapy. Copyright © 2014 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.

  14. Wavenumber-frequency Spectra of Pressure Fluctuations Measured via Fast Response Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Panda, J.; Roozeboom, N. H.; Ross, J. C.

    2016-01-01

    The recent advancement in fast-response Pressure-Sensitive Paint (PSP) allows time-resolved measurements of unsteady pressure fluctuations from a dense grid of spatial points on a wind tunnel model. This capability allows for direct calculation of the wavenumber-frequency (k-ω) spectrum of pressure fluctuations. Such data, useful for the vibro-acoustics analysis of aerospace vehicles, are difficult to obtain otherwise. For the present work, time histories of pressure fluctuations on a flat plate subjected to vortex shedding from a rectangular bluff body were measured using PSP. The light intensity levels in the photographic images were then converted to instantaneous pressure histories by applying calibration constants, which were calculated from a few dynamic pressure sensors placed at selected points on the plate. Fourier transform of the time histories from a large number of spatial points provided k-ω spectra of the pressure fluctuations. The data provide a first glimpse into the possibility of creating detailed forcing functions for the vibro-acoustics analysis of aerospace vehicles, albeit over a limited frequency range.
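    Given pressure histories p(t, x) on a regular grid, the k-ω spectrum is simply the squared magnitude of a two-dimensional Fourier transform over time and space; a travelling disturbance concentrates its energy along a line in the (k, ω) plane. A minimal discrete sketch, using a plain DFT for clarity (an FFT would be used in practice):

```python
import cmath
import math

def k_omega_spectrum(p):
    """|2-D DFT|^2 of p[t][x]: rows index frequency bins, cols wavenumber bins."""
    nt, nx = len(p), len(p[0])
    spec = [[0.0] * nx for _ in range(nt)]
    for f in range(nt):
        for k in range(nx):
            s = sum(p[t][x] * cmath.exp(-2j * math.pi * (f * t / nt + k * x / nx))
                    for t in range(nt) for x in range(nx))
            spec[f][k] = abs(s) ** 2
    return spec

# A single travelling wave with 2 temporal and 3 spatial cycles across the
# grid puts its energy in the (f=2, k=3) bin (plus the conjugate bin).
nt, nx = 16, 16
p = [[math.cos(2 * math.pi * (2 * t / nt + 3 * x / nx)) for x in range(nx)]
     for t in range(nt)]
spec = k_omega_spectrum(p)
# Search only the positive-frequency half to skip the conjugate peak.
peak = max((spec[f][k], f, k) for f in range(1, nt // 2) for k in range(nx))
print(peak[1], peak[2])  # -> 2 3
```

    With PSP data, the dense spatial sampling is what makes the wavenumber axis of this transform well resolved, which is the capability the abstract highlights.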

  15. High-precision pointing with the Sardinia Radio Telescope

    NASA Astrophysics Data System (ADS)

    Poppi, Sergio; Pernechele, Claudio; Pisanu, Tonino; Morsiani, Marco

    2010-07-01

    We present the systems designed to measure and minimize the pointing errors of the Sardinia Radio Telescope: an optical telescope to measure errors due to deformations of the mechanical structure, and a laser system for errors due to subreflector displacement. We show the results of tests performed on the Medicina 32-meter VLBI radio telescope. The measurements demonstrate that we can measure the pointing errors of the mechanical structure with an accuracy of about 1 arcsec. Moreover, we show the technique used to measure the displacement of the subreflector, placed in the SRT at 22 meters from the main mirror, to within +/-0.1 mm of its optimal position. These measurements show that we can obtain the accuracy needed to correct also the non-repeatable pointing errors, which arise on time scales varying from seconds to minutes.

  16. Comparison of the effects of firocoxib, carprofen and vedaprofen in a sodium urate crystal induced synovitis model of arthritis in dogs.

    PubMed

    Hazewinkel, Herman A W; van den Brom, Walter E; Theyse, Lars F H; Pollmeier, Matthias; Hanson, Peter D

    2008-02-01

    A randomized, placebo-controlled, four-period cross-over laboratory study involving eight dogs was conducted to confirm the effective analgesic dose of firocoxib, a selective COX-2 inhibitor, in a synovitis model of arthritis. Firocoxib was compared to vedaprofen and carprofen, and the effect, defined as a change in weight bearing measured via peak ground reaction, was evaluated at treatment dose levels. A lameness score on a five point scale was also assigned to the affected limb. Peak vertical ground reaction force was considered to be the most relevant measurement in this study. The firocoxib treatment group performed significantly better than placebo at the 3 h post-treatment time point and significantly better than placebo and carprofen at the 7 h post-treatment time point. Improvement in lameness score was also significantly better in the dogs treated with firocoxib than placebo and carprofen at both the 3 and 7 h post-treatment time points.

  17. Exploring the utility of measures of critical thinking dispositions and professional behavior development in an audiology education program.

    PubMed

    Ng, Stella L; Bartlett, Doreen J; Lucy, S Deborah

    2013-05-01

    Discussions about professional behaviors are growing increasingly prevalent across health professions, especially as a central component to education programs. A strong critical thinking disposition, paired with critical consciousness, may provide future health professionals with a foundation for solving challenging practice problems through the application of sound technical skill and scientific knowledge without sacrificing sensitive, empathic, client-centered practice. In this article, we describe an approach to monitoring student development of critical thinking dispositions and key professional behaviors as a way to inform faculty members' and clinical supervisors' support of students and ongoing curriculum development. We designed this exploratory study to describe the trajectory of change for a cohort of audiology students' critical thinking dispositions (measured by the California Critical Thinking Disposition Inventory: [CCTDI]) and professional behaviors (using the Comprehensive Professional Behaviors Development Log-Audiology [CPBDL-A]) in an audiology program. Implications for the CCTDI and CPBDL-A in audiology entry-to-practice curricula and professional development will be discussed. This exploratory study involved a cohort of audiology students, studied over a two-year period, using a one-group repeated measures design. Eighteen audiology students (two male and 16 female), began the study. At the third and final data collection point, 15 students completed the CCTDI, and nine students completed the CPBDL-A. The CCTDI and CPBDL-A were each completed at three time points: at the beginning, at the middle, and near the end of the audiology education program. Data are presented descriptively in box plots to examine the trends of development for each critical thinking disposition dimension and each key professional behavior as well as for an overall critical thinking disposition score. 
For the CCTDI, there was a general downward trend from time point 1 to time point 2 and a general upward trend from time point 2 to time point 3. Students demonstrated upward trends from the initial to final time point for their self-assessed development of professional behaviors as indicated on the CPBDL-A. The CCTDI and CPBDL-A can be used by audiology education programs as mechanisms for inspiring, fostering, and monitoring the development of critical thinking dispositions and key professional behaviors in students. Feedback and mentoring about dispositions and behaviors in conjunction with completion of these measures is recommended for inspiring and fostering these key professional attributes. American Academy of Audiology.

  18. Simultaneous measurement of passage through the restriction point and MCM loading in single cells

    PubMed Central

    Håland, T. W.; Boye, E.; Stokke, T.; Grallert, B.; Syljuåsen, R. G.

    2015-01-01

    Passage through the Retinoblastoma protein (RB1)-dependent restriction point and the loading of minichromosome maintenance proteins (MCMs) are two crucial events in G1-phase that help maintain genome integrity. Deregulation of these processes can cause uncontrolled proliferation and cancer development. Both events have been extensively characterized individually, but their relative timing and inter-dependence remain less clear. Here, we describe a novel method to simultaneously measure MCM loading and passage through the restriction point. We exploit the fact that the RB1 protein is anchored in G1-phase but is released when hyper-phosphorylated at the restriction point. After extracting cells with salt and detergent before fixation, we can simultaneously measure, by flow cytometry, the loading of MCMs onto chromatin and RB1 binding, to determine the order of the two events in individual cells. We have used this method to examine the relative timing of the two events in human cells. Whereas in BJ fibroblasts released from G0-phase MCM loading started mainly after the restriction point, in a significant fraction of exponentially growing BJ and U2OS osteosarcoma cells, MCMs were loaded in G1-phase with RB1 anchored, demonstrating that MCM loading can also start before the restriction point. These results were supported by measurements in synchronized U2OS cells. PMID:26250117

  19. Application of a multi-beam vibrometer on industrial components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bendel, Karl

    2014-05-27

    Laser Doppler vibrometry is a well-proven tool for the non-contact measurement of vibration. Scanning several measurement points allows visualization of the deflection shape of the component, ideally a 3D operating deflection shape if a 3D scanner is applied. Measuring the points sequentially, however, requires stationary behavior during the measurement time, which cannot be guaranteed for many real objects. Therefore, a multipoint laser Doppler vibrometer has been developed by Polytec and the University of Stuttgart with Bosch as industrial partner. A short description of the measurement system is given. Applications for the parallel measurement of the vibration of several points are shown for non-stationary vibrating Bosch components such as power tools and valves.

  20. Ensemble Space-Time Correlation of Plasma Turbulence in the Solar Wind.

    PubMed

    Matthaeus, W H; Weygand, J M; Dasso, S

    2016-06-17

    Single-point measurements of turbulence cannot distinguish variations in space from variations in time. We employ an ensemble of one- and two-point measurements in the solar wind to estimate the space-time correlation function in the comoving plasma frame. The method is illustrated using near-Earth spacecraft observations, employing ACE, Geotail, IMP-8, and Wind data sets. New results include an evaluation of both correlation time and correlation length from a single method, and a new assessment of the accuracy of the familiar frozen-in flow approximation. This novel view of the space-time structure of turbulence may prove essential in exploratory space missions such as Solar Probe Plus and Solar Orbiter, for which the frozen-in flow hypothesis may not be a useful approximation.
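A minimal sketch of the idea behind the frozen-in flow approximation tested above, using synthetic data rather than spacecraft measurements: if a turbulent structure is advected past two probes at speed V without evolving, the downstream probe sees the upstream signal delayed by tau = r / V, so the two-point correlation at that lag stays high. The speed, separation, and random-walk signal are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation(a, b):
    """Normalized correlation coefficient of two equal-length series."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

V = 400.0          # assumed advection (solar wind) speed, km/s
r = 800.0          # assumed probe separation, km
lag = int(r / V)   # frozen-in time lag in samples (sampling interval 1 s)

signal = np.cumsum(rng.standard_normal(5000))  # smooth synthetic series
upstream = signal[:-lag]                       # probe 1 record
downstream = signal[lag:]                      # probe 2: same structure, delayed

# For a perfectly frozen-in structure, the correlation at the Taylor
# lag tau = r / V remains close to 1.
print(round(correlation(upstream, downstream), 3))
```

A real test of the hypothesis, as in the paper, compares this lagged two-point correlation against the plasma-frame correlation estimated from multi-spacecraft ensembles.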

  1. Nonlinear digital signal processing in mental health: characterization of major depression using instantaneous entropy measures of heartbeat dynamics.

    PubMed

    Valenza, Gaetano; Garcia, Ronald G; Citi, Luca; Scilingo, Enzo P; Tomaz, Carlos A; Barbieri, Riccardo

    2015-01-01

    Nonlinear digital signal processing methods that address system complexity have provided useful computational tools for helping in the diagnosis and treatment of a wide range of pathologies. More specifically, nonlinear measures have been successful in characterizing patients with mental disorders such as Major Depression (MD). In this study, we propose the use of instantaneous measures of entropy, namely the inhomogeneous point-process approximate entropy (ipApEn) and the inhomogeneous point-process sample entropy (ipSampEn), to describe a novel characterization of MD patients undergoing affective elicitation. Because these measures are built within a nonlinear point-process model, they allow for the assessment of complexity in cardiovascular dynamics at each moment in time. Heartbeat dynamics were characterized from 48 healthy controls and 48 patients with MD while emotionally elicited through either neutral or arousing audiovisual stimuli. Experimental results coming from the arousing tasks show that ipApEn measures are able to instantaneously track heartbeat complexity as well as discern between healthy subjects and MD patients. Conversely, standard heart rate variability (HRV) analysis performed in both time and frequency domains did not show any statistical significance. We conclude that measures of entropy based on nonlinear point-process models might contribute to devising useful computational tools for care in mental health.
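For orientation, a sketch of ordinary (batch) sample entropy, the quantity that ipSampEn generalizes to an instantaneous, point-process setting in the study above; the template length m = 2 and tolerance r = 0.2 SD are conventional defaults, and the test signals are synthetic, not heartbeat data.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Batch sample entropy: -ln(A/B), where B counts template matches
    of length m and A of length m+1 within tolerance r * SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def count_matches(length):
        # All overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (no self-matches).
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return float(-np.log(a / b))

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 8 * np.pi, 200))  # predictable rhythm
noisy = rng.standard_normal(200)                  # irregular series

# A regular series yields lower sample entropy than white noise.
print(sample_entropy(regular) < sample_entropy(noisy))
```

The instantaneous variants in the paper replace these fixed-window counts with probabilities from a nonlinear point-process model of the heartbeat, updated at each moment in time.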

  2. Smoke-Point Properties of Non-Buoyant Round Laminar Jet Diffusion Flames. Appendix J

    NASA Technical Reports Server (NTRS)

    Urban, D. L.; Yuan, Z.-G.; Sunderland, P. B.; Lin, K.-C.; Dai, Z.; Faeth, G. M.

    2000-01-01

    The laminar smoke-point properties of non-buoyant round laminar jet diffusion flames were studied emphasizing results from long-duration (100-230 s) experiments at microgravity carried out in orbit aboard the space shuttle Columbia. Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K, pressures of 35-130 kPa, jet exit diameters of 1.6 and 2.7 mm, jet exit velocities of 170-690 mm/s, jet exit Reynolds numbers of 46-172, characteristic flame residence times of 40-302 ms, and luminous flame lengths of 15-63 mm. Contrary to the normal-gravity laminar smoke point, in microgravity, the onset of laminar smoke-point conditions involved two flame configurations: closed-tip flames with soot emissions along the flame axis and open-tip flames with soot emissions from an annular ring about the flame axis. Open-tip flames were observed at large characteristic flame residence times with the onset of soot emissions associated with radiative quenching near the flame tip: nevertheless, unified correlations of laminar smoke-point properties were obtained that included both flame configurations. Flame lengths at laminar smoke-point conditions were well correlated in terms of a corrected fuel flow rate suggested by a simplified analysis of flame shape. The present steady and non-buoyant flames emitted soot more readily than non-buoyant flames in earlier tests using ground-based microgravity facilities and than buoyant flames at normal gravity, as a result of reduced effects of unsteadiness, flame disturbances, and buoyant motion. For example, present measurements of laminar smoke-point flame lengths at comparable conditions were up to 2.3 times shorter than ground-based microgravity measurements and up to 6.4 times shorter than buoyant flame measurements. 
Finally, present laminar smoke-point flame lengths were roughly inversely proportional to pressure, to a degree somewhat smaller than observed during earlier tests both at microgravity (using ground-based facilities) and at normal gravity.

  3. Childhood Socioeconomic Status and Stress in Late Adulthood: A Longitudinal Approach to Measuring Allostatic Load

    PubMed Central

    Nowakowski, Alexandra C. H.

    2017-01-01

    Objectives: This study examines how the effects of childhood socioeconomic status (SES) may carry on into late adulthood. Methods: We examine how childhood SES affects both perceived stress and allostatic load, which is a cumulative measure of the body’s biologic response to chronic stress. We use the National Social Life, Health, and Aging Project, Waves 1 and 2, and suggest a novel method of incorporating a longitudinal allostatic load measure. Results: Individuals who grew up in low SES households have higher allostatic load scores in late adulthood, and this association is mediated mostly by educational attainment. Discussion: The longitudinal allostatic load measure shows similar results to the singular measures and allows us to include 2 time points into one outcome measure. Incorporating 2 separate time points into one measure is important because allostatic load is a measure of cumulative physiological dysregulation, and longitudinal data provide a more comprehensive measure. PMID:29226194

  4. The Stability of Perceived Pubertal Timing across Adolescence

    PubMed Central

    Cance, Jessica Duncan; Ennett, Susan T.; Morgan-Lopez, Antonio A.; Foshee, Vangie A.

    2011-01-01

    It is unknown whether perceived pubertal timing changes as puberty progresses or whether it is an important component of adolescent identity formation that is fixed early in pubertal development. The purpose of this study is to examine the stability of perceived pubertal timing among a school-based sample of rural adolescents aged 11 to 17 (N=6,425; 50% female; 53% White). Two measures of pubertal timing were used, stage-normative, based on the Pubertal Development Scale, a self-report scale of secondary sexual characteristics, and peer-normative, a one-item measure of perceived pubertal timing. Two longitudinal methods were used: one-way random effects ANOVA models and latent class analysis. When calculating intraclass correlation coefficients using the one-way random effects ANOVA models, which is based on the average reliability from one time point to the next, both measures had similar, but poor, stability. In contrast, latent class analysis, which looks at the longitudinal response pattern of each individual and treats deviation from that pattern as measurement error, showed three stable and distinct response patterns for both measures: always early, always on-time, and always late. Study results suggest instability in perceived pubertal timing from one age to the next, but this instability is likely due to measurement error. Thus, it may be necessary to take into account the longitudinal pattern of perceived pubertal timing across adolescence rather than measuring perceived pubertal timing at one point in time. PMID:21983873
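The one-way random-effects ANOVA approach above yields the intraclass correlation ICC(1) = (MS_between - MS_within) / (MS_between + (k-1) MS_within). A hedged sketch of that calculation follows; the toy score matrix is invented for illustration and is not the study's data.

```python
import numpy as np

def icc_oneway(scores):
    """ICC(1) from a one-way random-effects ANOVA.

    scores: (n_subjects, k_timepoints) array of repeated measures.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    # Between-subjects and within-subject mean squares.
    ms_between = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((scores - row_means[:, None]) ** 2) / (n * (k - 1))
    return float((ms_between - ms_within) / (ms_between + (k - 1) * ms_within))

# Four hypothetical adolescents rated at three time points with
# perfectly stable responses: ICC(1) is exactly 1.
stable = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
print(round(icc_oneway(stable), 3))  # -> 1.0
```

Low ICC values, as the study reports, indicate that a subject's relative standing changes from one time point to the next, which is why the authors turn to latent class analysis to separate true instability from measurement error.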

  5. Hydrogenation and interesterification effects on the oxidative stability and melting point of soybean oil.

    PubMed

    Daniels, Roger L; Kim, Hyun Jung; Min, David B

    2006-08-09

    Soybean oil with an iodine value of 136 was hydrogenated to have iodine values of 126 and 117. The soybean oils with iodine values of 136, 126, and 117 were randomly interesterified using sodium methoxide. The oxidative stabilities of the hydrogenated and/or interesterified soybean oils were evaluated by measuring the headspace oxygen content by gas chromatography, and the induction time was measured using Rancimat. The melting points of the oils were evaluated by differential scanning calorimetry. Duncan's multiple range test of the headspace oxygen and induction time showed that hydrogenation increased the headspace oxygen content and induction time at alpha = 0.05. Interesterification decreased the headspace oxygen and the induction time for the soybean oils with iodine values of 136, 126, and 117 at alpha = 0.05. Hydrogenation increased the melting points as the iodine value decreased from 136 and 126 to 117 at alpha = 0.05. The random interesterification increased the melting points of soybean oils with iodine values of 136, 126, and 117 at alpha = 0.05. The combined effects of hydrogenation and interesterification increased the oxidative stability of soybean oil at alpha = 0.05 and the melting point at alpha = 0.01. The optimum combination of hydrogenation and random interesterification can improve the oxidative stability and increase the melting point to expand the application of soybean oil in foods.

  6. Photoacoustic infrared spectroscopy for conducting gas tracer tests and measuring water saturations in landfills

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Yoojin; Han, Byunghyun; Mostafid, M. Erfan

    2012-02-15

    Highlights: Photoacoustic infrared spectroscopy tested for measuring tracer gas in landfills. Measurement errors for tracer gases were 1-3% in landfill gas. Background signals from landfill gas result in elevated limits of detection. Technique is much less expensive and easier to use than GC. - Abstract: Gas tracer tests can be used to determine gas flow patterns within landfills, quantify volatile contaminant residence time, and measure water within refuse. While gas chromatography (GC) has traditionally been used to analyze gas tracers in refuse, photoacoustic spectroscopy (PAS) might allow real-time measurements with reduced personnel costs and greater mobility and ease of use. Laboratory and field experiments were conducted to evaluate the efficacy of PAS for conducting gas tracer tests in landfills. Two tracer gases, difluoromethane (DFM) and sulfur hexafluoride (SF₆), were measured with a commercial PAS instrument. Relative measurement errors were invariant with tracer concentration but influenced by background gas: errors were 1-3% in landfill gas but 4-5% in air. Two partitioning gas tracer tests were conducted in an aerobic landfill, and limits of detection (LODs) were 3-4 times larger for DFM with PAS versus GC due to temporal changes in background signals. While higher LODs can be compensated for by injecting larger tracer mass, changes in background signals increased the uncertainty in measured water saturations by up to 25% over comparable GC methods. PAS has distinct advantages over GC with respect to personnel costs and ease of use, although for field applications GC analyses of select samples are recommended to quantify instrument interferences.

  7. Development of a new densimeter for the combined investigation of dew-point densities and sorption phenomena of fluid mixtures

    NASA Astrophysics Data System (ADS)

    Moritz, Katharina; Kleinrahm, Reiner; McLinden, Mark O.; Richter, Markus

    2017-12-01

    For the determination of dew-point densities and pressures of fluid mixtures, a new densimeter has been developed. The new apparatus is based on the well-established two-sinker density measurement principle with the additional capability of quantifying sorption effects. In the vicinity of the dew line, such effects cause a change in composition of the gas mixture under study, which can significantly distort accurate density measurements. The new experimental technique enables the accurate measurement of dew-point densities and pressures and the quantification of sorption effects at the same time.

  8. Accuracy and optimal timing of activity measurements in estimating the absorbed dose of radioiodine in the treatment of Graves' disease

    NASA Astrophysics Data System (ADS)

    Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.

    2011-02-01

    Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': time-integrated activity coefficient, time of maximum activity, maximum activity, and effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration 131I effective half-life in the thyroid and measurement noise, the optimal 1-point time for time-integrated activity coefficient is a measurement 1 week following the tracer dose. Additional measurements give only a slight improvement in accuracy.
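As a rough illustration of two of the four target variables (the time-integrated activity coefficient and the time of maximum activity), the sketch below uses a generic bi-exponential uptake/clearance curve in place of the paper's nonlinear mixed-effects model. The uptake half-time (~6 h), effective half-life (~5.5 d), and maximum uptake fraction are invented but of a plausible order of magnitude for thyroid radioiodine kinetics.

```python
import numpy as np

lam_up = np.log(2) / 0.25   # assumed uptake rate, per day (half-time 0.25 d)
lam_eff = np.log(2) / 5.5   # assumed effective clearance rate, per day

def fractional_activity(t, u_max=0.6):
    """Fraction of administered 131I in the gland at time t (days)."""
    return u_max * lam_up / (lam_up - lam_eff) * (
        np.exp(-lam_eff * t) - np.exp(-lam_up * t))

t = np.linspace(0.0, 60.0, 60001)  # days, 0.001 d grid
f = fractional_activity(t)

# Time-integrated activity coefficient (trapezoidal rule, in days) and
# time of maximum gland activity (in days).
tia = float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(t)))
t_max = float(t[np.argmax(f)])
print(round(tia, 2), round(t_max, 2))
```

With an effective half-life of a few days, most of the area under this curve accrues over the first weeks, which is consistent with the paper's finding that a single measurement about one week after the tracer dose is near-optimal for estimating the time-integrated activity coefficient.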

  9. More than just tracking time: Complex measures of user engagement with an internet-based health promotion intervention.

    PubMed

    Baltierra, Nina B; Muessig, Kathryn E; Pike, Emily C; LeGrand, Sara; Bull, Sheana S; Hightow-Weidman, Lisa B

    2016-02-01

    There has been a rise in internet-based health interventions without a concomitant focus on new methods to measure user engagement and its effect on outcomes. We describe current user tracking methods for internet-based health interventions and offer suggestions for improvement based on the design and pilot testing of healthMpowerment.org (HMP). HMP is a multi-component online intervention for young Black men and transgender women who have sex with men (YBMSM/TW) to reduce risky sexual behaviors, promote healthy living and build social support. The intervention is non-directive, incorporates interactive features, and utilizes a point-based reward system. Fifteen YBMSM/TW (age 20-30) participated in a one-month pilot study to test the usability and efficacy of HMP. Engagement with the intervention was tracked using a customized data capture system and validated with Google Analytics. Usage was measured in time spent (total and across sections) and points earned. Average total time spent on HMP was five hours per person (range 0-13). Total time spent was correlated with total points earned and overall site satisfaction. Measuring engagement in internet-based interventions is crucial to determining efficacy. Multiple methods of tracking helped derive more comprehensive user profiles. Results highlighted the limitations of measures to capture user activity and the elusiveness of the concept of engagement. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Measurement of radioactivity concentration in blood by using newly developed ToT LuAG-APD based small animal PET tomograph.

    PubMed

    Malik, Azhar H; Shimazoe, Kenji; Takahashi, Hiroyuki

    2013-01-01

    To obtain the plasma time activity curve (PTAC), the input function for almost all quantitative PET studies, patient blood is sampled manually from an artery or vein, a procedure with various drawbacks. Recently, a novel compact Time over Threshold (ToT) based Pr:LuAG-APD animal PET tomograph was developed in our laboratory, with 10% energy resolution, 4.2 ns time resolution, and 1.76 mm spatial resolution. The measured spatial resolution shows much promise for imaging blood vessels, i.e., arteries of 2.3-2.4 mm diameter, and hence for measuring the PTAC for quantitative PET studies. To find the measurement time required to obtain reasonable counts for image reconstruction, the most important parameter is the sensitivity of the system. Usually, small animal PET systems are characterized by using a point source in air. We used the Electron Gamma Shower 5 (EGS5) code to simulate a point source at different positions inside the sensitive volume of the tomograph, and the axial and radial variations in sensitivity were studied in air and in a phantom-equivalent water cylinder. An average sensitivity difference of 34% in the axial direction and 24.6% in the radial direction is observed when the point source is displaced inside the water cylinder instead of air.

  11. Observed Measures of Negative Parenting Predict Brain Development during Adolescence.

    PubMed

    Whittle, Sarah; Vijayakumar, Nandita; Dennison, Meg; Schwartz, Orli; Simmons, Julian G; Sheeber, Lisa; Allen, Nicholas B

    2016-01-01

    Limited attention has been directed toward the influence of non-abusive parenting behaviour on brain structure in adolescents. It has been suggested that environmental influences during this period are likely to impact the way that the brain develops over time. The aim of this study was to investigate the association between aggressive and positive parenting behaviors on brain development from early to late adolescence, and in turn, psychological and academic functioning during late adolescence, using a multi-wave longitudinal design. Three hundred and sixty seven magnetic resonance imaging (MRI) scans were obtained over three time points from 166 adolescents (11-20 years). At the first time point, observed measures of maternal aggressive and positive behaviors were obtained. At the final time point, measures of psychological and academic functioning were obtained. Results indicated that a higher frequency of maternal aggressive behavior was associated with alterations in the development of right superior frontal and lateral parietal cortical thickness, and of nucleus accumbens volume, in males. Development of the superior frontal cortex in males mediated the relationship between maternal aggressive behaviour and measures of late adolescent functioning. We suggest that our results support an association between negative parenting and adolescent functioning, which may be mediated by immature or delayed brain maturation.

  13. Hydraulic fracturing stress measurement in underground salt rock mines at Upper Kama Deposit

    NASA Astrophysics Data System (ADS)

    Rubtsova, EV; Skulkin, AA

    2018-03-01

    The paper reports experimental results on hydraulic fracturing (HF) in-situ stress measurements in potash mines of Uralkali. The selected HF procedure, as well as the locations and designs of the measuring points, are substantiated. From the evidence of 78 HF stress measurement tests at eight measuring points, it has been found that the in-situ stress field is nonequicomponent, with the vertical stresses having values close to estimates obtained from the overlying rock weight, while the horizontal stresses exceed the gravity stresses by 2–3 times.

  14. Distributed optical fiber temperature sensor (DOFTS) system applied to automatic temperature alarm of coal mine and tunnel

    NASA Astrophysics Data System (ADS)

    Zhang, Zaixuan; Wang, Kequan; Kim, Insoo S.; Wang, Jianfeng; Feng, Haiqi; Guo, Ning; Yu, Xiangdong; Zhou, Bangquan; Wu, Xiaobiao; Kim, Yohee

    2000-05-01

    A DOFTS system applied to the automatic temperature alarm systems of coal mines and tunnels has been developed. It is a real-time, online, multi-point measurement system. The wavelength of the LD is 1550 nm; along a 6 km optical fiber, the temperature signal is sampled at 3000 points, each with a known spatial position. Temperature measurement range: -50 °C to 100 °C; measurement uncertainty: ±3 °C; temperature resolution: 0.1 °C; spatial resolution: <5 cm (optical fiber sensor probe), <8 m (distributed optical fiber); measurement time: <70 s. The operating principles, underground tests, test content, and practical test results are discussed.
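A back-of-the-envelope sketch of how a distributed fiber sensor of this kind localizes each measurement point: backscattered light from position z returns after a round-trip delay t = 2nz/c, so timing the return with resolution dt fixes the spatial resolution dz = c·dt/(2n). The refractive index below is a typical silica-fiber value assumed for illustration, not a figure from the paper.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s
N = 1.468          # assumed effective refractive index of silica fiber

def position_from_delay(t_seconds):
    """Distance along the fiber corresponding to a round-trip delay."""
    return C * t_seconds / (2 * N)

# Round-trip delay for the far end of a 6 km fiber like the one above.
t_end = 2 * N * 6000.0 / C
print(round(t_end * 1e6, 1))              # delay in microseconds
print(round(position_from_delay(t_end)))  # -> 6000 m
```

Sampling the full fiber therefore takes only tens of microseconds per pulse; the <70 s measurement time quoted above is dominated by averaging many pulses to resolve 0.1 °C.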

  15. Measurements of tungsten migration in the DIII-D divertor

    NASA Astrophysics Data System (ADS)

    Wampler, W. R.; Rudakov, D. L.; Watkins, J. G.; McLean, A. G.; Unterberg, E. A.; Stangeby, P. C.

    2017-12-01

    An experimental study of migration of tungsten in the DIII-D divertor is described, in which the outer strike point of L-mode plasmas was positioned on a toroidal ring of tungsten-coated metal inserts. Net deposition of tungsten on the divertor just outside the strike point was measured on graphite samples exposed to various plasma durations using the divertor materials evaluation system. Tungsten coverage, measured by Rutherford backscattering spectroscopy (RBS), was found to be low and nearly independent of both radius and exposure time closer to the strike point, whereas farther from the strike point the W coverage was much larger and increased with exposure time. Depth profiles from RBS show this was due to accumulation of thicker mixed-material deposits farther from the strike point where the plasma temperature is lower. These results are consistent with a low near-surface steady-state coverage on graphite undergoing net erosion, and continuing accumulation in regions of net deposition. This experiment provides data needed to validate, and further improve computational simulations of erosion and deposition of material on plasma-facing components and transport of impurities in magnetic fusion devices. Such simulations are underway and will be reported later.

  16. Longitudinal employment outcomes of an early intervention vocational rehabilitation service for people admitted to rehabilitation with a traumatic spinal cord injury.

    PubMed

    Hilton, G; Unsworth, C A; Murphy, G C; Browne, M; Olver, J

    2017-08-01

    Longitudinal cohort design. The aims were, first, to explore the longitudinal outcomes for people who received early intervention vocational rehabilitation (EIVR) and, second, to examine the nature and extent of relationships between contextual factors and employment outcomes over time. Participants were both inpatient and community-based clients of a Spinal Community Integration Service (SCIS). People of workforce age undergoing inpatient rehabilitation for traumatic spinal cord injury were invited to participate in EIVR as part of SCIS. Data were collected at three time points: discharge, 1 year post discharge, and 2+ years post discharge. Measures included the spinal cord independence measure, hospital anxiety and depression scale, impact on participation and autonomy scale, numerical pain-rating scale and personal wellbeing index. A range of chi-square, correlation and regression tests were undertaken to look for relationships between employment outcomes and demographic, emotional and physical characteristics. Ninety-seven participants were recruited and 60 were available at the final time point, where 33% (95% confidence interval (CI): 24-42%) had achieved an employment outcome. Greater social participation was strongly correlated with wellbeing (ρ=0.692), and with reduced anxiety (ρ=-0.522), depression (ρ=-0.643) and pain (ρ=-0.427), at the final time point. In a generalised linear mixed effect model, education status, relationship status and subjective wellbeing significantly increased the odds of being employed at the final time point. Tertiary education prior to injury was associated with eight times increased odds of being in employment at the final time point; being in a relationship at the time of injury was associated with increased odds of more than 3.5; subjective wellbeing, while the least powerful predictor, was still associated with increased odds (1.8 times) of being employed at the final time point.
EIVR shows promise in delivering similar return-to-work rates as those traditionally reported, but sooner. The dynamics around relationships, subjective wellbeing, social participation and employment outcomes require further exploration.
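The odds ratios quoted above come from coefficients of a logistic (generalised linear mixed) model: each fitted coefficient is a change in log-odds, and exp(beta) is the corresponding odds ratio. The sketch below back-computes illustrative betas from the ratios in the abstract; these are not the paper's fitted coefficients.

```python
import math

# Hypothetical log-odds coefficients implied by the reported odds
# ratios (8.0, 3.5, 1.8) for employment at the final time point.
betas = {
    "tertiary education": math.log(8.0),
    "in a relationship": math.log(3.5),
    "subjective wellbeing": math.log(1.8),
}

for predictor, beta in betas.items():
    odds_ratio = math.exp(beta)  # invert the log-odds transform
    print(f"{predictor}: beta={beta:.2f}, OR={odds_ratio:.1f}")
```

Note that odds ratios compound multiplicatively on the odds scale, so a participant with tertiary education who was also in a relationship would, under this model, have roughly 8.0 x 3.5 = 28 times the baseline odds, all else equal.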

  17. Correlation of VHI-10 to voice laboratory measurements across five common voice disorders.

    PubMed

    Gillespie, Amanda I; Gooding, William; Rosen, Clark; Gartner-Schmidt, Jackie

    2014-07-01

    To correlate change in Voice Handicap Index (VHI)-10 scores with corresponding voice laboratory measures across five voice disorders. Retrospective study. One hundred fifty patients aged >18 years with primary diagnosis of vocal fold lesions, primary muscle tension dysphonia-1, atrophy, unilateral vocal fold paralysis (UVFP), and scar. For each group, participants with the largest change in VHI-10 between two periods (TA and TB) were selected. The dates of the VHI-10 values were linked to corresponding acoustic/aerodynamic and audio-perceptual measures. Change in voice laboratory values were analyzed for correlation with each other and with VHI-10. VHI-10 scores were greater for patients with UVFP than other disorders. The only disorder-specific correlation between voice laboratory measure and VHI-10 was average phonatory airflow in speech for patients with UVFP. Average airflow in repeated phonemes was strongly correlated with average airflow in speech (r=0.75). Acoustic measures did not significantly change between time points. The lack of correlations between the VHI-10 change scores and voice laboratory measures may be due to differing constructs of each measure; namely, handicap versus physiological function. Presuming corroboration between these measures may be faulty. Average airflow in speech may be the most ecologically valid measure for patients with UVFP. Although aerodynamic measures changed between the time points, acoustic measures did not. Correlations to VHI-10 and change between time points may be found with other acoustic measures. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  18. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

    Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. 
To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point cloud, we scan an object of known geometry (a cylinder mounted above a square box) from multiple locations. Preliminary results show that even in a controlled experimental scan of an object of known dimensions, there is significant variability in the precision of the registered point cloud. For example, when 3 scans of the central object are registered using 4 known points (maximum time, maximum equipment), the point clouds align to within ~1 cm (normal to the object surface). However, when the same point clouds are registered with only 1 known point (minimum time, minimum equipment), misalignment of the point clouds can range from 2.5 to 5 cm, depending on target type. The greater misalignment of the 3 point clouds when registered with fewer known points stems from the field method employed in acquiring the dataset and demonstrates the impact of field workflow on LiDAR dataset precision. By quantifying the degree of scan mismatch in results such as this, we can provide users with the information needed to maximize efficiency in remote field surveys.
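The registration-error metric described above (distances between the same known points as located in two different registered scans) reduces to point-wise Euclidean norms. A minimal sketch; the control-point coordinates and offsets below are hypothetical, not the survey's data:

```python
import numpy as np

def registration_errors(scan_a: np.ndarray, scan_b: np.ndarray) -> np.ndarray:
    """Per-point misalignment between the same control points as seen in
    two registered scans (rows are matched x, y, z coordinates in metres)."""
    return np.linalg.norm(scan_a - scan_b, axis=1)

# Hypothetical coordinates of four control points in one registered scan...
scan_a = np.array([[0.0, 0.0, 0.0],
                   [5.0, 0.0, 0.1],
                   [5.0, 5.0, 0.0],
                   [0.0, 5.0, 0.2]])
# ...and the same points in a second scan, offset by small registration errors.
scan_b = scan_a + np.array([[ 0.004, -0.003,  0.008],
                            [ 0.010,  0.002, -0.005],
                            [-0.006,  0.009,  0.003],
                            [ 0.002, -0.011,  0.007]])

errs = registration_errors(scan_a, scan_b)
print(f"mean misalignment: {errs.mean()*100:.2f} cm, max: {errs.max()*100:.2f} cm")
```

Comparing the mean and maximum misalignment across workflows (4 known points vs. 1, different target types) is one way to express the precision metric the abstract describes.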

  19. A possible simplification for the estimation of area under the curve (AUC₀₋₁₂) of enteric-coated mycophenolate sodium in renal transplant patients receiving tacrolimus.

    PubMed

    Fleming, Denise H; Mathew, Binu S; Prasanna, Samuel; Annapandian, Vellaichamy M; John, George T

    2011-04-01

Enteric-coated mycophenolate sodium (EC-MPS) is widely used in renal transplantation. Because of its delayed absorption profile, it has not been possible to develop limited sampling strategies for estimating the area under the curve (mycophenolic acid [MPA] AUC₀₋₁₂) that use few time points and are completed within 2 hours. We developed and validated simplified strategies to estimate MPA AUC₀₋₁₂ in an Indian renal transplant population prescribed EC-MPS together with prednisolone and tacrolimus. Intensive pharmacokinetic sampling (17 samples each) was performed in 18 patients to measure MPA AUC₀₋₁₂. The profiles at 1 month were used to develop the simplified strategies and those at 5.5 months were used for validation. We followed two approaches. In the first, the AUC was calculated using the trapezoidal rule with fewer time points, followed by an extrapolation. In the second, models with different time points were identified by stepwise multiple regression analysis and linear regression analysis was performed. Using the trapezoidal rule, two equations were developed with six time points and sampling to 6 or 8 hours (8hrAUC(₀₋₁₂exp)) after the EC-MPS dose. On validation, the 8hrAUC(₀₋₁₂exp) compared with the total measured AUC₀₋₁₂ had a coefficient of correlation (r²) of 0.872, with a bias and precision (95% confidence interval) of 0.54% (-6.07-7.15) and 9.73% (5.37-14.09), respectively. Second, limited sampling strategies were developed with four, five, six, seven, and eight time points and completion within 2 hours, 4 hours, 6 hours, and 8 hours after the EC-MPS dose. On validation, the six-, seven-, and eight-time-point equations, all with sampling to 8 hours, had an acceptable correlation with the total measured MPA AUC₀₋₁₂ (0.817-0.927). 
In the six-, seven-, and eight-time-point equations, the bias (95% confidence interval) was 3.00% (-4.59 to 10.59), 0.29% (-5.4 to 5.97), and -0.72% (-5.34 to 3.89), and the precision (95% confidence interval) was 10.59% (5.06-16.13), 8.33% (4.55-12.1), and 6.92% (3.94-9.90), respectively. Of the eight simplified approaches, inclusion of seven or eight time points improved the accuracy of the predicted AUC compared with the actual AUC and can be advocated depending on the priorities of the user.
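The first approach, a trapezoidal-rule AUC over a truncated sampling window plus an extrapolation to 12 hours, can be illustrated with a minimal sketch. The concentration values, the log-linear tail extrapolation, and the 8-hour window below are illustrative assumptions, not the study's published equations:

```python
import numpy as np

def auc_trapezoid(t, c):
    """AUC over the sampled window by the linear trapezoidal rule."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    return float(np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0))

def auc_extrapolated(t, c, t_end=12.0):
    """AUC0-t_end: trapezoids plus a log-linear extrapolation of the
    terminal phase (a common pharmacokinetic device, used here only
    to illustrate the idea of 'trapezoidal rule plus extrapolation')."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    # Terminal elimination rate from the last two concentrations.
    k = (np.log(c[-2]) - np.log(c[-1])) / (t[-1] - t[-2])
    tail = (c[-1] / k) * (1.0 - np.exp(-k * (t_end - t[-1])))
    return auc_trapezoid(t, c) + tail

# Hypothetical MPA concentration-time data (hours, mg/L), sampled to 8 h.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
c = np.array([1.2, 6.5, 4.8, 3.1, 2.4, 1.6, 1.1])
print(f"AUC0-8: {auc_trapezoid(t, c):.2f} mg*h/L, "
      f"extrapolated AUC0-12: {auc_extrapolated(t, c):.2f} mg*h/L")
```

The limited-sampling regression models of the second approach would instead predict AUC₀₋₁₂ as a linear combination of concentrations at a few fixed time points.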

  20. Interferometric Dynamic Measurement: Techniques Based on High-Speed Imaging or a Single Photodetector

    PubMed Central

    Fu, Yu; Pedrini, Giancarlo

    2014-01-01

In recent years, optical interferometry-based techniques have been widely used to perform noncontact measurement of dynamic deformation in different industrial areas. In these applications, various physical quantities need to be measured at every instant, and the Nyquist sampling theorem has to be satisfied along the time axis at each measurement point. Two types of techniques have been developed for such measurements: one is based on high-speed cameras and the other uses a single photodetector. The limitation of the measurement range along the time axis in camera-based technology is mainly due to the low capturing rate, while photodetector-based technology can only measure a single point at a time. In this paper, several aspects of these two technologies are discussed. For camera-based interferometry, the discussion includes the introduction of the carrier, the processing of the recorded images, the phase extraction algorithms in various domains, and how to increase the temporal measurement range by using multiwavelength techniques. For detector-based interferometry, the discussion mainly focuses on single-point and multipoint laser Doppler vibrometers and their applications for measurement under extreme conditions. The results illustrate the efforts made by researchers to improve the measurement capabilities of interferometry-based techniques to cover the requirements of industrial applications. PMID:24963503

  1. Is direct measurement of time possible?

    NASA Astrophysics Data System (ADS)

    Reynolds, Thomas

    2017-08-01

Is direct measurement of time possible? The answer to this question may depend upon how one understands time. Is time an essential constituent of physical reality? Or is what scientists are talking about when they use the symbol ‘t’ or the word ‘time’ a human cultural construct, as the Chief of the U.S. NIST Divisions of Time and Frequency and of Quantum Physics has suggested? Few aspects of physics do not reference activity to time, but many discussions within either view of time seem to use the same, largely traditional, language of time. Briefly considering the question of measurement, including from a formal measure-theoretic point of view, clarifies the situation.

  2. Physical activity and post-treatment weight trajectory in anorexia nervosa

    PubMed Central

    Gianini, Loren M; Klein, Diane A; Call, Christine; Walsh, B. Timothy; Wang, Yuanjia; Wu, Peng; Attia, Evelyn

    2015-01-01

Objective This study compared an objective measurement of physical activity (PA) in individuals with anorexia nervosa (AN) at low-weight, weight-restored, and post-treatment time points, and also compared PA in AN with that of healthy controls (HC). Method Sixty-one female inpatients with AN wore a novel accelerometer (the IDEEA), which measured PA at three time points: a) low-weight, b) weight-restored, and c) one month post-hospital discharge. Twenty-four HCs wore the IDEEA at one time point. Results Inpatients with AN became more physically active at weight restoration and following treatment discharge than they had been at low weight. Post-treatment patients with AN were more physically active than HCs during the day and less active at night, a difference primarily accounted for by the amount of time spent on feet, including standing and walking. Greater time spent on feet during the weight-restoration time point of inpatient treatment was associated with a more rapid decrease in BMI over the 12 months following treatment discharge. Fidgeting did not differ between patients and controls, did not change with weight restoration, and did not predict post-treatment weight change. Discussion Use of a novel accelerometer demonstrated greater PA in AN than in healthy controls. PA following weight restoration in AN, particularly time spent in standing postures, may contribute to weight loss in the year following hospitalization. PMID:26712105

  3. Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model.

    PubMed

    Musekiwa, Alfred; Manda, Samuel O M; Mwambi, Henry G; Chen, Ding-Geng

    2016-01-01

Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes in which we contrast different covariance structures for the dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and present a practical example involving a meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, in which survival is measured at 6, 12, 18, and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results.
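The "simplistic approach" the authors contrast against, separate univariate inverse-variance meta-analyses at each time point, can be sketched as follows; the effect sizes and within-study variances below are invented for illustration, and the general linear mixed model of the paper would instead pool all time points jointly with a covariance structure:

```python
import numpy as np

def fixed_effect_pool(y, v):
    """Inverse-variance (fixed-effect) pooled effect and its variance
    for a single time point."""
    w = 1.0 / np.asarray(v, float)
    pooled = float(np.sum(w * y) / np.sum(w))
    return pooled, float(1.0 / np.sum(w))

# Hypothetical effect sizes (rows: 3 studies, cols: 6/12/18/24 months).
y = np.array([[-0.20, -0.25, -0.22, -0.18],
              [-0.10, -0.15, -0.12, -0.11],
              [-0.30, -0.28, -0.26, -0.24]])
v = np.full_like(y, 0.02)  # invented within-study variances

for j, month in enumerate((6, 12, 18, 24)):
    est, var = fixed_effect_pool(y[:, j], v[:, j])
    print(f"{month:2d} months: pooled effect {est:+.3f} (SE {var ** 0.5:.3f})")
```

Because each time point is pooled in isolation, the serial correlation between a study's effect sizes at 6 and 12 months contributes nothing to the estimates, which is exactly the information the joint mixed-model analysis recovers.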

  4. Intraoperative measurements on the mitral apparatus using optical tracking: a feasibility study

    NASA Astrophysics Data System (ADS)

    Engelhardt, Sandy; De Simone, Raffaele; Wald, Diana; Zimmermann, Norbert; Al Maisary, Sameer; Beller, Carsten J.; Karck, Matthias; Meinzer, Hans-Peter; Wolf, Ivo

    2014-03-01

Mitral valve reconstruction is a widespread surgical method to repair incompetent mitral valves. During reconstructive surgery, judgement of the mitral valve geometry and subvalvular apparatus is mandatory in order to choose the appropriate repair strategy. To date, intraoperative analysis of the mitral valve is based merely on visual assessment and inaccurate sizer devices, which do not allow for any accurate and standardized measurement of the complex three-dimensional anatomy. We propose a new intraoperative computer-assisted method for mitral valve measurements using a pointing instrument together with an optical tracking system. Sixteen anatomical points were defined on the mitral apparatus. The feasibility and the reproducibility of the measurements have been tested on a rapid prototyping (RP) heart model and a freshly excised porcine heart. Four heart surgeons repeated the measurements three times on each heart. Morphologically important distances between the measured points are calculated. We achieved a mean interexpert variability of 2.28 ± 1.13 mm for the 3D-printed heart and 2.45 ± 0.75 mm for the porcine heart. The overall time to perform a complete measurement is 1-2 minutes, which makes the method viable for virtual annuloplasty during an intervention.
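The distance calculation between tracked landmark points reduces to Euclidean norms of the measured 3-D coordinates. A minimal sketch; the landmark names, positions, and repeat measurements are hypothetical, not the study's sixteen-point protocol:

```python
import numpy as np

def pairwise_distance(p, q):
    """Euclidean distance (mm) between two tracked landmark positions."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

# Hypothetical tracked positions (mm) of two annulus landmarks, each
# measured three times with the pointing instrument.
anterior  = [(10.2,  4.1, 33.0), (10.6,  3.8, 32.5), ( 9.9,  4.4, 33.4)]
posterior = [(12.0, 28.9, 30.1), (11.6, 29.3, 30.6), (12.3, 28.5, 29.8)]

d = [pairwise_distance(a, p) for a, p in zip(anterior, posterior)]
print(f"anterior-posterior distance: {np.mean(d):.2f} ± {np.std(d):.2f} mm")
```

Repeating this across observers and taking the spread of the resulting distances is one way to arrive at an interexpert-variability figure of the kind the abstract reports.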

  5. A study of point discharge current observations in the thunderstorm environment at a tropical station during the year 1987 and 1988

    NASA Technical Reports Server (NTRS)

    Manohar, G. K.; Kandalgaonkar, S. S.; Sholapurkar, S. M.

    1991-01-01

The results of measurements of point discharge current at Pune, India, during the years 1987 and 1988 are presented by categorizing and studying the number of spells, polar current average durations, and current magnitudes in day-time and night-time conditions. While the results showed that thunderstorm activity occupies far more of the day-time than the night-time, the level of current magnitudes remains nearly the same in the two categories.

  6. High speed FPGA-based Phasemeter for the far-infrared laser interferometers on EAST

    NASA Astrophysics Data System (ADS)

    Yao, Y.; Liu, H.; Zou, Z.; Li, W.; Lian, H.; Jie, Y.

    2017-12-01

The far-infrared laser-based HCN interferometer and POlarimeter/INTerferometer (POINT) system are important diagnostics for plasma density measurement on the EAST tokamak. Both HCN and POINT provide high spatial and temporal resolution of electron density measurement and are used for plasma density feedback control. The density is calculated by measuring the real-time phase difference between the reference beams and the probe beams. For long-pulse operations on EAST, the calculation of density has to meet the requirements of real-time computation and high precision. In this paper, a phasemeter for far-infrared laser-based interferometers is introduced. The FPGA-based phasemeter leverages fast ADCs to acquire the three-frequency signals from VDI planar-diode mixers, and realizes digital filters and an FFT algorithm in the FPGA to provide real-time, high-precision electron density output. Implementation of the phasemeter will be helpful for future plasma real-time feedback control in long-pulse discharges.
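The core operation, extracting the phase difference between a reference and a probe signal from an FFT, can be sketched offline in a few lines. The sampling rate, intermediate frequency, and bin-picking strategy below are illustrative assumptions, not the EAST implementation (which runs as fixed-point logic in the FPGA):

```python
import numpy as np

def phase_difference(ref, probe, fs, f_if):
    """Phase of the probe signal relative to the reference signal at the
    intermediate frequency f_if, read from the FFT bin nearest f_if."""
    n = len(ref)
    k = int(round(f_if * n / fs))  # FFT bin index of the IF tone
    return float(np.angle(np.fft.rfft(probe)[k]) - np.angle(np.fft.rfft(ref)[k]))

# Synthetic 1 MHz intermediate-frequency signals sampled at 50 MHz,
# with a 0.8 rad lag standing in for the plasma-induced phase shift.
fs, f_if, n = 50e6, 1e6, 5000
t = np.arange(n) / fs
ref = np.cos(2 * np.pi * f_if * t)
probe = np.cos(2 * np.pi * f_if * t - 0.8)

print(f"recovered phase lag: {-phase_difference(ref, probe, fs, f_if):.3f} rad")
```

The window length is chosen here to hold an integer number of IF cycles, so the tone falls exactly on an FFT bin; a real-time system would also filter and unwrap the phase before converting it to line-integrated density.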

  7. Reliability of Two Smartphone Applications for Radiographic Measurements of Hallux Valgus Angles.

    PubMed

    Mattos E Dinato, Mauro Cesar; Freitas, Marcio de Faria; Milano, Cristiano; Valloto, Elcio; Ninomiya, André Felipe; Pagnano, Rodrigo Gonçalves

The objective of the present study was to assess the reliability of 2 smartphone applications compared with the traditional goniometer technique for measurement of radiographic angles in hallux valgus and the time required for analysis with the different methods. The radiographs of 31 patients (52 feet) with a diagnosis of hallux valgus were analyzed. Four observers, 2 with >10 years' experience in foot and ankle surgery and 2 in-training surgeons, measured the hallux valgus angle and intermetatarsal angle using a manual goniometer technique and 2 smartphone applications (Hallux Angles and iPinPoint). The interobserver and intermethod reliability were estimated using intraclass correlation coefficients (ICCs), and the time required for measurement of the angles among the 3 methods was compared using the Friedman test. A very good or good interobserver reliability was found among the 4 observers measuring the hallux valgus angle and intermetatarsal angle using the goniometer (ICC 0.913 and 0.821, respectively) and iPinPoint (ICC 0.866 and 0.638, respectively). Using the Hallux Angles application, a very good interobserver reliability was found for measurements of the hallux valgus angle (ICC 0.962) and intermetatarsal angle (ICC 0.935) only among the more experienced observers. The time required for the measurements was significantly shorter for both smartphone applications compared with the goniometer method. One smartphone application (iPinPoint) was reliable for measurements of the hallux valgus angles by either experienced or nonexperienced observers. The use of these tools might save time in the evaluation of radiographic angles in hallux valgus. Copyright © 2016 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  8. Reliability of infrared thermometric measurements of skin temperature in the hand.

    PubMed

    Packham, Tara L; Fok, Diana; Frederiksen, Karen; Thabane, Lehana; Buckley, Norman

    2012-01-01

Clinical measurement study. Skin temperature asymmetries (STAs) are used in the diagnosis of complex regional pain syndrome (CRPS), but little evidence exists for the reliability of the equipment and methods. This study examined the reliability of an inexpensive infrared (IR) thermometer and of measurement points in the hand for the study of STA. Skin temperature was measured three times at five points on both hands with an IR thermometer by two raters in 20 volunteers (12 normals and 8 with CRPS). The results support inter-rater reliability of IR thermometer measurements: the intraclass correlation coefficient (ICC) estimate for single measures was 0.80, and all measurement points were also highly reliable (ICC for single measures, 0.83-0.91). The equipment demonstrated excellent reliability, with little difference in the reliability of the five measurement sites. These preliminary findings support their use in future CRPS research. Copyright © 2012 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.

  9. Smoke-Point Properties of Nonbuoyant Round Laminar Jet Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Urban, D. L.; Yuan, Z.-G.; Sunderland, P. B.; Lin, K.-C.; Dai, Z.; Faeth, G. M.

    2000-01-01

The laminar smoke-point properties of nonbuoyant round laminar jet diffusion flames were studied emphasizing results from long-duration (100-230 s) experiments at microgravity carried out on orbit in the Space Shuttle Columbia. Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K, initial jet exit diameters of 1.6 and 2.7 mm, jet exit velocities of 170-1630 mm/s, jet exit Reynolds numbers of 46-172, characteristic flame residence times of 40-302 ms, and luminous flame lengths of 15-63 mm. The onset of laminar smoke-point conditions involved two flame configurations: closed-tip flames with first soot emissions along the flame axis and open-tip flames with first soot emissions from an annular ring about the flame axis. Open-tip flames were observed at large characteristic flame residence times with the onset of soot emissions associated with radiative quenching near the flame tip; nevertheless, unified correlations of laminar smoke-point properties were obtained that included both flame configurations. Flame lengths at laminar smoke-point conditions were well correlated in terms of a corrected fuel flow rate suggested by a simplified analysis of flame shape. The present steady and nonbuoyant flames emitted soot more readily than earlier tests of nonbuoyant flames at microgravity using ground-based facilities and of buoyant flames at normal gravity due to reduced effects of unsteadiness, flame disturbances, and buoyant motion. For example, laminar smoke-point flame lengths from ground-based microgravity measurements were up to 2.3 times longer and from buoyant flame measurements were up to 6.4 times longer than the present measurements at comparable conditions. 
Finally, present laminar smoke-point flame lengths were roughly inversely proportional to pressure, which is a somewhat slower variation than observed during earlier tests both at microgravity using ground-based facilities and at normal gravity.

  10. Smoke-Point Properties of Nonbuoyant Round Laminar Jet Diffusion Flames. Appendix B

    NASA Technical Reports Server (NTRS)

    Urban, D. L.; Yuan, Z.-G.; Sunderland, P. B.; Lin, K.-C.; Dai, Z.; Faeth, G. M.; Ross, H. D. (Technical Monitor)

    2000-01-01

The laminar smoke-point properties of non-buoyant round laminar jet diffusion flames were studied emphasizing results from long-duration (100-230 s) experiments at microgravity carried out in orbit aboard the space shuttle Columbia. Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K, pressures of 35-130 kPa, jet exit diameters of 1.6 and 2.7 mm, jet exit velocities of 170-690 mm/s, jet exit Reynolds numbers of 46-172, characteristic flame residence times of 40-302 ms, and luminous flame lengths of 15-63 mm. Contrary to the normal-gravity laminar smoke point, in microgravity the onset of laminar smoke-point conditions involved two flame configurations: closed-tip flames with soot emissions along the flame axis and open-tip flames with soot emissions from an annular ring about the flame axis. Open-tip flames were observed at large characteristic flame residence times with the onset of soot emissions associated with radiative quenching near the flame tip; nevertheless, unified correlations of laminar smoke-point properties were obtained that included both flame configurations. Flame lengths at laminar smoke-point conditions were well correlated in terms of a corrected fuel flow rate suggested by a simplified analysis of flame shape. The present steady and nonbuoyant flames emitted soot more readily than non-buoyant flames in earlier tests using ground-based microgravity facilities and than buoyant flames at normal gravity, as a result of reduced effects of unsteadiness, flame disturbances, and buoyant motion. For example, present measurements of laminar smoke-point flame lengths at comparable conditions were up to 2.3 times shorter than ground-based microgravity measurements and up to 6.4 times shorter than buoyant flame measurements. 
Finally, present laminar smoke-point flame lengths were roughly inversely proportional to pressure, to a degree somewhat weaker than observed during earlier tests both at microgravity (using ground-based facilities) and at normal gravity.

  11. Space Technology 5 Multi-point Measurements of Near-Earth Magnetic Fields: Initial Results

    NASA Technical Reports Server (NTRS)

    Slavin, James A.; Le, G.; Strangeway, R. L.; Wang, Y.; Boardsen, S.A.; Moldwin, M. B.; Spence, H. E.

    2007-01-01

The Space Technology 5 (ST-5) mission successfully placed three micro-satellites in a 300 x 4500 km dawn-dusk orbit on 22 March 2006. Each spacecraft carried a boom-mounted vector fluxgate magnetometer that returned highly sensitive and accurate measurements of the geomagnetic field. These data allow, for the first time, the separation of temporal and spatial variations in field-aligned current (FAC) perturbations measured in low-Earth orbit on time scales of approximately 10 s to 10 min. The constellation measurements are used to directly determine field-aligned current sheet motion, thickness, and current density. In doing so, we demonstrate two multi-point methods for the inference of FAC current density that have not previously been possible in low-Earth orbit: 1) the "standard method," based upon spacecraft velocity but corrected for FAC current sheet motion, and 2) the "gradiometer method," which uses simultaneous magnetic field measurements at two points with known separation. Future studies will apply these methods to the entire ST-5 data set and expand to include geomagnetic field gradient analyses as well as field-aligned and ionospheric currents.
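For a planar current sheet, Ampere's law relates the current density to the jump in the tangential magnetic field across the sheet; a two-point "gradiometer" estimate approximates that gradient with simultaneous measurements from two spacecraft divided by their separation. A minimal sketch with invented numbers (the 50 nT difference and 100 km baseline are hypothetical, not ST-5 data):

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, H/m

def gradiometer_current_density(delta_b_nt, baseline_km):
    """Sheet current density (A/m^2) from the tangential-field difference
    (nT) between two spacecraft separated by a known baseline (km):
    J = dB / (mu_0 * dx), the 1-D finite-difference form of Ampere's law."""
    return (delta_b_nt * 1e-9) / (MU_0 * baseline_km * 1e3)

# Hypothetical numbers: a 50 nT field difference over a 100 km separation.
j = gradiometer_current_density(50.0, 100.0)
print(f"inferred current density: {j * 1e6:.3f} uA/m^2")
```

The "standard method" would instead difference one spacecraft's measurements in time and divide by its velocity (corrected for sheet motion), which is why single-spacecraft estimates are biased when the current sheet itself moves.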

  12. Metabolic changes in serum steroids induced by total-body irradiation of female C57B/6 mice.

    PubMed

    Moon, Ju-Yeon; Shin, Hee-June; Son, Hyun-Hwa; Lee, Jeongae; Jung, Uhee; Jo, Sung-Kee; Kim, Hyun Sik; Kwon, Kyung-Hoon; Park, Kyu Hwan; Chung, Bong Chul; Choi, Man Ho

    2014-05-01

The short- and long-term effects of a single exposure to gamma radiation on steroid metabolism were investigated in mice. Gas chromatography-mass spectrometry was used to generate quantitative profiles of serum steroid levels in mice that had undergone total-body irradiation (TBI) at doses of 0 Gy, 1 Gy, and 4 Gy. Following TBI, serum samples were collected at the pre-dose time point and 1, 3, 6, and 9 months after TBI. Serum levels of the progestins progesterone, 5β-DHP, 5α-DHP, and 20α-DHP showed a significant down-regulation following short-term exposure to 4 Gy, with the exception of 20α-DHP, which was significantly decreased at each of the time points measured. The corticosteroids 5α-THDOC and 5α-DHB were significantly elevated at each of the time points measured after exposure to either 1 or 4 Gy. Among the sterols, 24S-OH-cholesterol showed a dose-related elevation after irradiation that reached significance in the high-dose group at the 6- and 9-month time points. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Real-time viability and apoptosis kinetic detection method of 3D multicellular tumor spheroids using the Celigo Image Cytometer.

    PubMed

    Kessel, Sarah; Cribbes, Scott; Bonasu, Surekha; Rice, William; Qiu, Jean; Chan, Leo Li-Ying

    2017-09-01

The development of three-dimensional (3D) multicellular tumor spheroid models for cancer drug discovery research has increased in recent years. The use of 3D tumor spheroid models may be more representative of the complex in vivo tumor microenvironments than two-dimensional (2D) assays. Currently, viability of 3D multicellular tumor spheroids has commonly been measured on standard plate-readers using metabolic reagents such as CellTiter-Glo® for end point analysis. Alternatively, high content image cytometers have been used to measure drug effects on spheroid size and viability. Previously, we demonstrated a novel end point drug screening method for 3D multicellular tumor spheroids using the Celigo Image Cytometer. To better characterize cancer drug effects, it is important to also measure the kinetic cytotoxic and apoptotic effects on 3D multicellular tumor spheroids. In this work, we demonstrate the use of propidium iodide (PI) and caspase 3/7 stains to measure viability and apoptosis for 3D multicellular tumor spheroids in real time. The method was first validated by staining different types of tumor spheroids with PI and caspase 3/7 and monitoring the fluorescent intensities for 16 and 21 days. Next, PI-stained and nonstained control tumor spheroids were digested into single-cell suspension to directly measure viability in a 2D assay to determine the potential toxicity of PI. Finally, extensive data analysis was performed correlating the time-dependent PI and caspase 3/7 fluorescent intensities with spheroid size and necrotic core formation to determine an optimal starting time point for cancer drug testing. The ability to measure real-time viability and apoptosis is highly important for developing a proper 3D model for screening tumor spheroids, which can allow researchers to determine time-dependent drug effects that usually are not captured by end point assays. 
This would improve the current tumor spheroid analysis method to potentially better identify more qualified cancer drug candidates for drug discovery research. © 2017 International Society for Advancement of Cytometry.

  14. Whole-body tissue distribution of total radioactivity in rats after oral administration of [¹⁴C]-bilastine.

    PubMed

    Lucero, María Luisa; Patterson, Andrew B

    2012-06-01

This study evaluated the tissue distribution of total radioactivity in male albino, male pigmented, and time-mated female albino rats after oral administration of a single dose of [¹⁴C]-bilastine (20 mg/kg). Although only 1 animal was analyzed at each time point, there were apparent differences in bilastine distribution. Radioactivity was distributed to only a few tissues at low levels in male rats, whereas distribution was more extensive and at higher levels in female rats. This may be a simple sex-related difference. In each group and at each time point, concentrations of radioactivity were high in the liver and kidney, reflecting the role of these organs in the elimination process. In male albino rats, no radioactivity was measurable by 72 hours postdose. In male pigmented rats, only the eye and uveal tract had measurable levels of radioactivity at 24 hours. Measurable levels of radioactivity were retained in these tissues at the final sampling time point (336 hours postdose), indicating a degree of melanin-associated binding. In time-mated female rats, but not in albino or pigmented male rats, there was evidence of low-level passage of radioactivity across the placental barrier into fetal tissues as well as low-level transfer of radioactivity into the brain.

  15. Use of Repeated Blood Pressure and Cholesterol Measurements to Improve Cardiovascular Disease Risk Prediction: An Individual-Participant-Data Meta-Analysis

    PubMed Central

    Barrett, Jessica; Pennells, Lisa; Sweeting, Michael; Willeit, Peter; Di Angelantonio, Emanuele; Gudnason, Vilmundur; Nordestgaard, Børge G.; Psaty, Bruce M; Goldbourt, Uri; Best, Lyle G; Assmann, Gerd; Salonen, Jukka T; Nietert, Paul J; Verschuren, W. M. Monique; Brunner, Eric J; Kronmal, Richard A; Salomaa, Veikko; Bakker, Stephan J L; Dagenais, Gilles R; Sato, Shinichi; Jansson, Jan-Håkan; Willeit, Johann; Onat, Altan; de la Cámara, Agustin Gómez; Roussel, Ronan; Völzke, Henry; Dankner, Rachel; Tipping, Robert W; Meade, Tom W; Donfrancesco, Chiara; Kuller, Lewis H; Peters, Annette; Gallacher, John; Kromhout, Daan; Iso, Hiroyasu; Knuiman, Matthew; Casiglia, Edoardo; Kavousi, Maryam; Palmieri, Luigi; Sundström, Johan; Davis, Barry R; Njølstad, Inger; Couper, David; Danesh, John; Thompson, Simon G; Wood, Angela

    2017-01-01

The added value of incorporating information from repeated blood pressure and cholesterol measurements to predict cardiovascular disease (CVD) risk has not been rigorously assessed. We used data on 191,445 adults from the Emerging Risk Factors Collaboration (38 cohorts from 17 countries with data encompassing 1962–2014) with more than 1 million measurements of systolic blood pressure, total cholesterol, and high-density lipoprotein cholesterol. Over a median 12 years of follow-up, 21,170 CVD events occurred. Risk prediction models using cumulative mean values of repeated measurements and summary measures from longitudinal modeling of the repeated measurements were compared with models using measurements from a single time point. Risk discrimination (C-index) and net reclassification were calculated, and changes in C-indices were meta-analyzed across studies. Compared with the single-time-point model, the cumulative means and longitudinal models increased the C-index by 0.0040 (95% confidence interval (CI): 0.0023, 0.0057) and 0.0023 (95% CI: 0.0005, 0.0042), respectively. Reclassification was also improved in both models; compared with the single-time-point model, overall net reclassification improvements were 0.0369 (95% CI: 0.0303, 0.0436) for the cumulative-means model and 0.0177 (95% CI: 0.0110, 0.0243) for the longitudinal model. In conclusion, incorporating repeated measurements of blood pressure and cholesterol into CVD risk prediction models slightly improves risk prediction. PMID:28549073

  16. The variability in Oxford hip and knee scores in the preoperative period: is there an ideal time to score?

    PubMed

    Quah, C; Holmes, D; Khan, T; Cockshott, S; Lewis, J; Stephen, A

    2018-01-01

Background All NHS-funded providers are required to collect and report patient-reported outcome measures for hip and knee arthroplasty. Although there are established guidelines for the timing of such measures following arthroplasty, there are no specific time points for collection in the preoperative period. The primary aim of this study was to identify whether there was significant variability in the Oxford hip and knee scores prior to surgical intervention when completed in the outpatient clinic at the time of listing for arthroplasty compared with when completed at the preoperative assessment clinic. Methods A prospective cohort study of patients listed for primary hip or knee arthroplasty was conducted. Patients were asked to fill in a preoperative Oxford score in the outpatient clinic at the time of listing. They were then invited to fill in the official outcome measures questionnaire at the preoperative assessment clinic. The postoperative Oxford score was then completed when the patient was seen again at their postoperative follow-up in clinic. Results Of the 109 patients included during the study period, 18 (17%) had a worsening of 4 or more points and 43 (39.4%) had an improvement of 4 or more points when the scores at listing in the outpatient clinic were compared with those at the preoperative assessment clinic. There was a statistically significant difference (P = 0.0054) in the mean Oxford scores. Conclusions The results of our study suggest that the timing of completion of preoperative patient-reported outcome measures should be standardised.

  17. Dew point fast measurement in organic vapor mixtures using quartz resonant sensor

    NASA Astrophysics Data System (ADS)

    Nie, Jing; Liu, Jia; Meng, Xiaofeng

    2017-01-01

A fast dew point sensor for organic vapor mixtures has been developed using a quartz crystal with sensitive circuits. The sensor consists of the quartz crystal and a cooler device. A proactive approach is taken to produce condensation on the surface of the quartz crystal, which leads to a change in the electrical characteristics of the crystal. The cessation of oscillation, which is caused by dew condensation, is detected and used to identify the dew point. This method exploits the high sensitivity of the quartz crystal without requiring frequency measurement, and it retains the stability of the resonant circuit, making it strongly resistant to interference. Its performance was evaluated with acetone-methanol mixtures under different pressures, and the results were compared with dew points predicted from the universal quasi-chemical (UNIQUAC) equation. The maximum deviations of the sensor are less than 1.1 °C, and it has a fast response, with a recovery time of less than 10 s, providing excellent dehumidifying performance.

  19. Double peak-induced distance error in short-time-Fourier-transform-Brillouin optical time domain reflectometers event detection and the recovery method.

    PubMed

    Yu, Yifei; Luo, Linqing; Li, Bo; Guo, Linfeng; Yan, Jize; Soga, Kenichi

    2015-10-01

    The measured distance error caused by double peaks in Brillouin optical time domain reflectometer (BOTDR) systems is a form of Brillouin scattering spectrum (BSS) deformation, discussed and simulated for the first time in this paper, to the best of the authors' knowledge. The double peak, as a form of Brillouin spectrum deformation, is important for enhancing spatial resolution, measurement accuracy, and crack detection. Because the peak powers of the BSS vary along the fiber, the measured starting point of a step-shaped frequency transition region is shifted, resulting in distance errors. A zero-padded short-time Fourier transform (STFT) can restore the transition-induced double peaks in the asymmetric, deformed BSS, offering more accurate and faster measurements than the conventional Lorentz-fitting method. The recovery method, based on double-peak detection and the corresponding BSS deformation, can be applied to calculate the true starting point, which improves the distance accuracy of STFT-based BOTDR systems.

  20. The endurance of the effects of the penalty point system in Spain three years after. Main influencing factors.

    PubMed

    Izquierdo, F Aparicio; Ramírez, B Arenas; McWilliams, J M Mira; Ayuso, J Páez

    2011-05-01

    In this work we used ARIMA time-series models to analyse the contribution of the penalty point system, the most important legislative measure for driving licences, to reducing the number of fatalities (over 24 h) on the roads in Spain during the study period (January 1995 to June 2009). Because of this long analysis period, additional control variables were introduced to model the enactment of the Reform of the Penal Code in December 2007, together with other more specific effects needed to fit the model correctly. The ARIMA intervention-model methodology combines the basic features of specific time-series models: it controls for the trend and seasonal variation present in the data by modelling their structure with autoregressive and moving-average parameters, and it allows step or impulse input variables to be inserted for checking and evaluating the effects of deterministic measures, such as the legislative changes studied in this work. This paper analyses the surveillance and control measures introduced in the periods before and after the implementation of the penalty point system, which helps to partly explain its apparent endurance over time. The results show that the introduction of the penalty point system in Spain had a very positive effect in reducing the number of fatalities (over 24 h) on the road, and that this effect has endured to the present. This success may be due to the continuing increase in surveillance measures and fines, as well as the significantly growing interest shown by the news media in road safety since the measures were introduced, all of which has led to positive changes in driver behaviour. A combination of three factors, therefore, appears to be the key to success: the penalty point system, the gradual stepping up of surveillance measures and sanctions, and the publicity given to road safety issues in the mass media. The absence of any one of these factors would predictably have led to a far less positive evolution of the accident rate on Spanish roads. Copyright © 2010 Elsevier Ltd. All rights reserved.
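    The step-type intervention variable mentioned above can be illustrated with a minimal sketch: regressing a fatality series on an intercept and a 0/1 step input, where the step coefficient is the estimated level shift at the intervention date. This deliberately ignores the ARIMA error structure and uses invented numbers, so it is only a schematic of the intervention idea, not the paper's model.

```python
# Sketch of a step (intervention) input: with only an intercept and a 0/1
# step regressor, the OLS step coefficient equals the post- minus
# pre-intervention mean of the series.
def step_effect(series, intervention_index):
    pre = series[:intervention_index]
    post = series[intervention_index:]
    pre_mean = sum(pre) / len(pre)
    post_mean = sum(post) / len(post)
    return post_mean - pre_mean  # estimated level shift

fatalities = [100, 104, 98, 102, 96, 80, 78, 82, 79, 81]  # synthetic monthly counts
effect = step_effect(fatalities, intervention_index=5)     # step begins at index 5
```

    In a full ARIMA intervention analysis the same step variable would enter as an exogenous regressor while the autoregressive and moving-average terms absorb trend and seasonality.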

  1. Theory of two-point correlations of jet noise

    NASA Technical Reports Server (NTRS)

    Ribner, H. S.

    1976-01-01

    A large body of careful experimental measurements of two-point correlations of far field jet noise was carried out. The model of jet-noise generation is an approximate version of an earlier work of Ribner, based on the foundations of Lighthill. The model incorporates isotropic turbulence superimposed on a specified mean shear flow, with assumed space-time velocity correlations, but with source convection neglected. The particular vehicle is the Proudman format, and the previous work (mean-square pressure) is extended to display the two-point space-time correlations of pressure. The shape of polar plots of correlation is found to derive from two main factors: (1) the noncompactness of the source region, which allows differences in travel times to the two microphones - the dominant effect; (2) the directivities of the constituent quadrupoles - a weak effect. The noncompactness effect causes the directional lobes in a polar plot to have pointed tips (cusps) and to be especially narrow in the plane of the jet axis. In these respects, and in the quantitative shapes of the normalized correlation curves, results of the theory show generally good agreement with Maestrello's experimental measurements.

  2. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
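    The time-averaged concentration downwind of an elevated point source is commonly written as a Gaussian plume. A minimal sketch of the standard textbook form with ground reflection (the dispersion parameters below are assumed constants for illustration; the paper's fluctuating-plume extension for the instantaneous field is not reproduced here):

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Time-averaged concentration (g/m^3) downwind of an elevated point source.

    q: emission rate (g/s), u: mean wind speed (m/s), h: effective source height (m),
    y, z: crosswind and vertical coordinates (m); ground reflection via an image source.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # image source term
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Plume-centerline concentration at source height (illustrative values)
c = gaussian_plume(q=1.0, u=5.0, y=0.0, z=50.0, h=50.0, sigma_y=30.0, sigma_z=20.0)
```

    The sigma values would normally grow with downwind distance according to a stability class; holding them fixed keeps the sketch self-contained.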

  3. Predictive Trip Detection for Nuclear Power Plants

    NASA Astrophysics Data System (ADS)

    Rankin, Drew J.; Jiang, Jin

    2016-08-01

    This paper investigates the use of a Kalman filter (KF) to predict, within the shutdown system (SDS) of a nuclear power plant (NPP), whether safety parameter measurements have reached a trip set-point. In addition, least squares (LS) estimation compensates for prediction error due to system-model mismatch. The motivation behind predictive shutdown is to reduce the time between the occurrence of a fault or failure and the time of trip detection, referred to as the time-to-trip. These reductions in time-to-trip can ultimately increase safety and productivity margins. The proposed predictive SDS differs from conventional SDSs in that it compares point predictions of the measurements, rather than the sensor measurements themselves, against trip set-points. The predictive SDS is validated through simulation and experiments for the steam generator water level safety parameter, and its performance is compared against a benchmark conventional SDS with respect to time-to-trip. This paper also analyzes prediction uncertainty and the conditions under which a reduced time-to-trip is achievable. Simulation results demonstrate that, on average, the predictive SDS reduces time-to-trip by an amount equal to the length of the prediction horizon, and that the distribution of times-to-trip is approximately Gaussian. Experimental results reveal that a reduced time-to-trip can be achieved in a real-world system with unknown system-model mismatch, and that the predictive SDS can be implemented with a scan time of under 100 ms. This paper thus serves as a proof of concept for KF/LS-based predictive trip detection.
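    The core idea of comparing an n-step-ahead point prediction, rather than the raw measurement, against the trip set-point can be sketched with a scalar constant-velocity Kalman filter. The model, noise values, and the ramp signal below are illustrative assumptions, and the paper's LS compensation for model mismatch is omitted:

```python
def predictive_trip(measurements, setpoint, horizon, dt=1.0):
    """Return the first sample index at which the horizon-step-ahead KF
    prediction reaches the trip set-point (or None). State: [level, rate]."""
    level, rate = measurements[0], 0.0
    p = [[1.0, 0.0], [0.0, 1.0]]           # state covariance
    q, r = 1e-4, 1e-2                       # process / measurement noise (illustrative)
    for k, z in enumerate(measurements):
        # predict one step: level advances by rate; covariance by F P F^T + Q
        level += rate * dt
        p = [[p[0][0] + dt*(p[1][0] + p[0][1]) + dt*dt*p[1][1] + q,
              p[0][1] + dt*p[1][1]],
             [p[1][0] + dt*p[1][1],
              p[1][1] + q]]
        # update with the measurement z (H = [1, 0])
        s = p[0][0] + r
        k0, k1 = p[0][0] / s, p[1][0] / s
        innov = z - level
        level += k0 * innov
        rate += k1 * innov
        p = [[(1 - k0)*p[0][0], (1 - k0)*p[0][1]],
             [p[1][0] - k1*p[0][0], p[1][1] - k1*p[0][1]]]
        # trip on the point prediction, not on the measurement itself
        if level + rate * horizon * dt >= setpoint:
            return k
    return None

ramp = [float(i) for i in range(50)]                 # noiseless synthetic ramp
trip_index = predictive_trip(ramp, setpoint=30.0, horizon=5)
```

    On the noiseless ramp the raw measurement first reaches 30.0 at index 30, while the 5-step-ahead prediction trips about 5 samples earlier, mirroring the paper's finding that the average reduction in time-to-trip equals the prediction horizon.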

  4. Accurate dew-point measurement over a wide temperature range using a quartz crystal microbalance dew-point sensor

    NASA Astrophysics Data System (ADS)

    Kwon, Su-Yong; Kim, Jong-Chul; Choi, Buyng-Il

    2008-11-01

    Quartz crystal microbalance (QCM) dew-point sensors are based on frequency measurement, and so have fast response time, high sensitivity and high accuracy. Recently, we have reported that they have the very convenient attribute of being able to distinguish between supercooled dew and frost from a single scan through the resonant frequency of the quartz resonator as a function of the temperature. In addition to these advantages, by using three different types of heat sinks, we have developed a QCM dew/frost-point sensor with a very wide working temperature range (-90 °C to 15 °C). The temperature of the quartz surface can be obtained effectively by measuring the temperature of the quartz crystal holder and using temperature compensation curves (which showed a high level of repeatability and reproducibility). The measured dew/frost points showed very good agreement with reference values and were within ±0.1 °C over the whole temperature range.

  5. A novel method for the line-of-response and time-of-flight reconstruction in TOF-PET detectors based on a library of synchronized model signals

    NASA Astrophysics Data System (ADS)

    Moskal, P.; Zoń, N.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kamińska, D.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Raczyński, L.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Zieliński, M.

    2015-03-01

    A novel method of hit-time and hit-position reconstruction in scintillator detectors is described. The method is based on comparing detector signals with results stored in a library of synchronized model signals registered for a set of well-defined positions of scintillation points. The hit position is reconstructed as the one corresponding to the library signal most similar to the measured signal, and the interaction time is determined as the relative time between the measured signal and that most similar library signal. The degree of similarity between measured and model signals is defined as the distance between the points representing them in a multi-dimensional measurement space. The novelty of the method also lies in the proposed synchronization of the model signals, which enables direct determination of the difference between the times-of-flight (TOF) of the annihilation quanta from the annihilation point to the detectors. The method was validated using experimental data obtained with the double-strip prototype of the J-PET detector and a 22Na isotope as the source of annihilation gamma quanta. The detector was built from plastic scintillator strips with dimensions of 5 mm×19 mm×300 mm, optically connected at both sides to photomultipliers, whose signals were sampled by means of a Serial Data Analyzer. With this method, spatial and TOF resolutions of about 1.3 cm (σ) and 125 ps (σ), respectively, were established.
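    The comparison against a library of synchronized model signals amounts to a nearest-neighbour search in the space of sampled amplitudes. A minimal sketch with squared Euclidean distance (the library contents, positions, and sampling below are invented for illustration):

```python
def reconstruct_hit(measured, library):
    """Return the scintillation position whose model signal is closest
    (squared Euclidean distance over sampled amplitudes) to the measured signal.

    library: dict mapping position along the strip (cm) -> sampled model signal.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(library, key=lambda pos: dist2(measured, library[pos]))

# Invented model signals sampled at four time points for three positions.
library = {
    -10.0: [0.1, 0.8, 0.4, 0.1],
      0.0: [0.2, 0.9, 0.5, 0.2],
     10.0: [0.3, 1.0, 0.6, 0.3],
}
position = reconstruct_hit([0.21, 0.88, 0.52, 0.19], library)
```

    In the actual method the time of interaction is then taken as the relative shift between the measured signal and the matched library signal; the sketch covers only the position lookup.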

  6. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The traditional timing discrimination technique for laser rangefinding in remote sensing offers limited measurement performance and relatively large errors, and can no longer meet the demands of high-precision measurement and high-definition lidar imaging. To address this problem, an improvement in timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold earlier by repeatedly amplifying the received signal. Then, the timing information is sampled and the timing points are fitted using algorithms implemented in MATLAB. Finally, the minimum timing error is calculated from the fitting function. In this way, the timing error of the lidar's received signal is compressed and the quality of the lidar data is improved. Experiments show that the timing error can be significantly reduced by repeated amplification of the received signal and by fitting the parameters, achieving a timing accuracy of 4.63 ps.
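    Leading-edge discrimination times a pulse at the instant its rising edge crosses a fixed threshold, and fitting the sampled edge refines that instant beyond the sampling grid. A minimal sketch with a least-squares line through the edge samples (synthetic samples; this stands in for, and is not, the paper's MATLAB fitting pipeline):

```python
def threshold_crossing_time(times, amplitudes, threshold):
    """Fit a least-squares line to rising-edge samples and return the time
    at which the fitted line crosses the threshold."""
    n = len(times)
    mt = sum(times) / n
    ma = sum(amplitudes) / n
    slope = (sum((t - mt) * (a - ma) for t, a in zip(times, amplitudes))
             / sum((t - mt) ** 2 for t in times))
    intercept = ma - slope * mt
    return (threshold - intercept) / slope

# Synthetic rising edge sampled every 1 ns: amplitude = 0.2 * t
t_cross = threshold_crossing_time([0, 1, 2, 3, 4], [0.0, 0.2, 0.4, 0.6, 0.8], 0.5)
```

    Amplifying the received signal steepens the edge, which both moves the threshold crossing earlier and reduces the sensitivity of the fitted crossing time to amplitude noise.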

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, H; Guerrero, M; Prado, K

    Purpose: Building up a TG-71-based electron monitor-unit (MU) calculation protocol usually involves extensive measurements. This work investigates a minimum data set of measurements, its calculation accuracy, and its measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-series linear accelerators, complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm, up to the applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors, and effective SSDs, which were then converted to air-gap factors for SSDs of 99-110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine a minimum data set of measurements as a subset of the complete measurements. The "missing" data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MUs using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements per electron energy; more PDDs and fewer point measurements are generally needed as energy increases. Using less than 50% of the complete measurement time, the minimum data set produces acceptable MU calculation results compared with the complete data set: the PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: The data set measured for TG-71 electron MU calculations can be minimized based on knowledge of how each dosimetric quantity depends on the various setup parameters. The suggested minimum data set yields acceptable MU calculation accuracy and shortens measurement time by a few hours.
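    Filling the "missing" entries of a minimum data set from fitted functions can be sketched in its simplest form as linear interpolation of an output factor versus cutout size. The factor values below are invented for illustration, not the work's measured data:

```python
def interpolate_factor(size, measured):
    """Linearly interpolate an output factor at a given cutout size from a
    sorted list of (size_cm, factor) measurements."""
    pts = sorted(measured)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= size <= x1:
            return y0 + (y1 - y0) * (size - x0) / (x1 - x0)
    raise ValueError("size outside measured range")

# Invented cutout factors for a 10 cm applicator: measure a coarse subset,
# then interpolate the sizes that were skipped.
measured = [(2, 0.920), (4, 0.965), (6, 0.985), (10, 1.000)]
factor = interpolate_factor(3, measured)   # 3 cm cutout was not measured
```

    In practice a polynomial fit over the measured subset (as the abstract describes) would be used where the factor varies nonlinearly with field size.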

  8. Water Triple-Point Comparisons: Plateau Averaging or Peak Value?

    NASA Astrophysics Data System (ADS)

    Steur, P. P. M.; Dematteis, R.

    2014-04-01

    With a certain regularity, national metrology institutes conduct comparisons of water triple-point (WTP) cells. The WTP is the most important fixed point for the International Temperature Scale of 1990 (ITS-90). In such comparisons, it is common practice to simply average all the single measured temperature points obtained on a single ice mantle. This practice is quite reasonable whenever the measurements show no time dependence in the results. Ever since the first Supplementary Information for the International Temperature Scale of 1990, published by the Bureau International des Poids et Mesures in Sèvres, it was strongly suggested to wait at least 1 day before taking measurements (now up to 10 days), in order for a newly created ice mantle to stabilize. This stabilization is accompanied by a change in temperature with time. A recent improvement in the sensitivity of resistance measurement enabled the Istituto Nazionale di Ricerca Metrologica to detect more clearly the (possible) change in temperature with time of the WTP on a single ice mantle, as for old borosilicate cells. A limited investigation was performed where the temperature of two cells was monitored day-by-day, from the moment of mantle creation, where it was found that with (old) borosilicate cells it may be counterproductive to wait the usual week before starting measurements. The results are presented and discussed, and it is suggested to adapt the standard procedure for comparisons of WTP cells allowing for a different data treatment with (old) borosilicate cells, because taking the temperature dependence into account will surely reduce the reported differences between cells.

  9. Effects of time and sampling location on concentrations of β-hydroxybutyric acid in dairy cows.

    PubMed

    Mahrt, A; Burfeind, O; Heuwieser, W

    2014-01-01

    Two trials were conducted to examine factors potentially influencing the measurement of blood β-hydroxybutyric acid (BHBA) in dairy cows. The objective of the first trial was to study the effect of sampling time on BHBA concentration in continuously fed dairy cows. Furthermore, we determined the test characteristics of a single BHBA measurement taken at a random time of day to diagnose subclinical ketosis, considering commonly used cut-points (1.2 and 1.4 mmol/L). Finally, we evaluated whether the test characteristics could be enhanced by repeating measurements after different time intervals. During 4 herd visits, a total of 128 cows (8 to 28 d in milk) fed 10 times daily were screened at 0900 h and preselected by BHBA concentration. Blood samples were drawn from the tail vessels and BHBA concentrations were measured using an electronic BHBA meter (Precision Xceed, Abbott Diabetes Care Ltd., Witney, UK). Cows with BHBA concentrations ≥0.8 mmol/L at this time were enrolled in the trial (n=92). Subsequent BHBA measurements took place every 3 h, for a total of 8 measurements during 24 h. The effect of sampling time on BHBA concentration was tested in a repeated-measures ANOVA with sampling time as the repeated measure. Sampling time did not affect BHBA concentrations in continuously fed dairy cows. Defining the average daily BHBA concentration calculated from the 8 measurements as the gold standard, a single measurement at a random time of day to diagnose subclinical ketosis had a sensitivity of 0.90 or 0.89 at the 2 BHBA cut-points (1.2 and 1.4 mmol/L), and a specificity of 0.88 or 0.90 at the same cut-points. Repeating measurements after different time intervals improved the test characteristics only slightly. In the second experiment, we compared BHBA concentrations of samples drawn from 3 different blood sampling locations (tail vessels, jugular vein, and mammary vein) of 116 lactating dairy cows. BHBA concentrations differed among the 3 sampling locations: mean BHBA concentration was 0.3 mmol/L lower in the mammary vein than in the jugular vein, and 0.4 mmol/L lower in the mammary vein than in the tail vessels. We conclude that, to measure BHBA, blood samples of continuously fed dairy cows can be drawn at any time of the day, and that a single measurement provides very good test characteristics for on-farm conditions. Blood samples for BHBA measurement should be drawn from the jugular vein or tail vessels; the mammary vein should not be used for this purpose. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. Accuracy assessment of the Precise Point Positioning method applied for surveys and tracking moving objects in GIS environment

    NASA Astrophysics Data System (ADS)

    Ilieva, Tamara; Gekov, Svetoslav

    2017-04-01

    The Precise Point Positioning (PPP) method gives users the opportunity to determine point locations using a single GNSS receiver. The accuracy of point locations determined by PPP is better than that of standard point positioning, thanks to the precise satellite orbit and clock corrections that are developed and maintained by the International GNSS Service (IGS). The aim of our current research is to assess the accuracy of the PPP method applied to surveys and to tracking moving objects in a GIS environment. The PPP data are collected using a software application we developed previously, which allows different sets of attribute data for the measurements and their accuracy to be used. The results of the PPP measurements are compared directly, within the geospatial database, with various other sets of terrestrial data: measurements obtained by total stations, and real-time kinematic and static GNSS.

  11. Coping capacities for improving adaptation pathways for flood protection in Can Tho, Vietnam

    NASA Astrophysics Data System (ADS)

    Pathirana, A.; Radhakrishnan, M.; Quan, N. H.; Gersonius, B.; Ashley, R.; Zevenbergen, C.

    2016-12-01

    Studying the evolution of coping and adaptation capacities is a prerequisite for preparing an effective flood management plan for the future, especially in the dynamic, fast-changing cities of developing countries. The objectives, requirements, targets, design and performance of flood protection measures have to be determined after taking into account, or in conjunction with, these coping capacities. A methodology based on adaptation pathways is presented to account for coping capacities and to assess their effect on flood protection measures. The adaptation pathways method determines the point of failure of a particular strategy based on the change in an external driver, a point in time, or a socio-economic situation at which the strategy can no longer meet its objective. Pathways derived with this methodology reflect future reality by considering changing engineering standards along with future uncertainties, risk-taking abilities and adaptation capacities. The methodology determines the adaptation tipping points (ATPs) and the time of occurrence of each ATP for flood protection measures after accounting for coping capacities, evaluates the measures, and then provides the means to determine the adaptation pathways. Applying this methodology to flood protection measures in Can Tho city in the Mekong delta reveals the effect of coping capacity on the usefulness of flood protection measures and the delay in the occurrence of tipping points. Accounting for the coping capacity provided by elevated property floor levels postponed the tipping points and improved the adaptation pathways comprising flood protection measures such as dikes. This information is useful to decision makers for planning and phasing investments in flood protection.

  12. Active and Passive Sensing from Geosynchronous and Libration Orbits

    NASA Technical Reports Server (NTRS)

    Schoeberl, Mark; Raymond, Carol; Hildebrand, Peter

    2003-01-01

    The development of the LEO (EOS) missions has led the way to new technologies and new science discoveries. However, LEO measurements alone cannot cost-effectively produce the high-time-resolution measurements needed to move the science to the next level. Both GEO and the Lagrange points L1 and L2 provide vantage points that allow higher-time-resolution measurements. GEO is currently being exploited by weather satellites, but the sensors now operating at GEO do not provide the spatial or spectral resolution needed for atmospheric trace gas, ocean, or land surface measurements. It may also be possible to place active sensors in geostationary orbit. It seems clear that the next era in Earth observation and discovery will be opened by sensor systems operating beyond near-Earth orbit.

  13. Portable Dew Point Mass Spectrometry System for Real-Time Gas and Moisture Analysis

    NASA Technical Reports Server (NTRS)

    Arkin, C.; Gillespie, Stacey; Ratzel, Christopher

    2010-01-01

    A portable instrument incorporates both mass spectrometry and dew point measurement to provide real-time, quantitative gas measurements of helium, nitrogen, oxygen, argon, and carbon dioxide, along with real-time, quantitative moisture analysis. The Portable Dew Point Mass Spectrometry (PDP-MS) system comprises a single quadrupole mass spectrometer and a high vacuum system consisting of a turbopump and a diaphragm-backing pump. A capacitive membrane dew point sensor was placed upstream of the MS, but still within the pressure-flow control pneumatic region. Pressure-flow control was achieved with an upstream precision metering valve, a capacitance diaphragm gauge, and a downstream mass flow controller. User configurable LabVIEW software was developed to provide real-time concentration data for the MS, dew point monitor, and sample delivery system pressure control, pressure and flow monitoring, and recording. The system has been designed to include in situ, NIST-traceable calibration. Certain sample tubing retains sufficient water that even if the sample is dry, the sample tube will desorb water to an amount resulting in moisture concentration errors up to 500 ppm for as long as 10 minutes. It was determined that Bev-A-Line IV was the best sample line to use. As a result of this issue, it is prudent to add a high-level humidity sensor to PDP-MS so such events can be prevented in the future.

  14. Use of precision time and time interval (PTTI)

    NASA Technical Reports Server (NTRS)

    Taylor, J. D.

    1974-01-01

    A review of range time synchronization methods is presented as an important aspect of range operations. The overall capabilities of various missile ranges to determine precise time of day by synchronizing to available references, and to apply this time point to instrumentation for time interval measurements, are described.

  15. A Framework for Validating Traffic Simulation Models at the Vehicle Trajectory Level

    DOT National Transportation Integrated Search

    2017-03-01

    Based on current practices, traffic simulation models are calibrated and validated using macroscopic measures such as 15-minute averages of traffic counts or average point-to-point travel times. For an emerging number of applications, including conne...

  16. Fast REDOR with CPMG multiple-echo acquisition

    NASA Astrophysics Data System (ADS)

    Hung, Ivan; Gan, Zhehong

    2014-01-01

    Rotational-Echo Double Resonance (REDOR) is a widely used experiment for distance measurements in solids. The conventional REDOR experiment measures the signal dephasing from heteronuclear recoupling under magic-angle spinning (MAS) in a point-by-point manner. A modified Carr-Purcell Meiboom-Gill (CPMG) multiple-echo scheme is introduced for fast REDOR measurement: REDOR curves are measured from the CPMG echo amplitude modulation under dipolar recoupling. The real-time CPMG-REDOR experiment can speed up the measurement by an order of magnitude. The effects of heteronuclear recoupling, the Bloch-Siegert shift, and echo truncation on the signal acquisition are discussed and demonstrated.

  17. Intelligent Chilled Mirror Humidity Sensor

    DTIC Science & Technology

    1988-12-01

    measurements made 8 times each day (roughly 3000 measurements). 3. Accuracy of a dew point measurement is to be within 0.5 °C. Some means for warning of...unreliable and was eventually discarded from the tests. As a replacement, we chose a manually operated sling psychrometer (Assman Corp.) which, though...electronically to lengthen the useful time that measurements can be made between cleanings. Once the mirror becomes excessively dirty, further

  18. Social support buffers the effects of terrorism on adolescent depression: findings from Sderot, Israel.

    PubMed

    Henrich, Christopher C; Shahar, Golan

    2008-09-01

    This prospective study of 29 Israeli middle school students experiencing terror attacks by Qassam rockets addressed whether higher levels of baseline social support protected adolescents from adverse psychological effects of exposure to rocket attacks. Participants were assessed at two time points 5 months apart, before and after a period of military escalation from May to September 2007. Adolescent self-reported depression was measured at both time points, using the Center for Epidemiological Studies-Child Depression Scale. Social support from family, friends, and school was measured at time 1, via a short form of the Perceived Social Support Scale. Adolescents also reported their exposure to rocket attacks at both time points. There was a significant interaction between social support and exposure to rocket attacks predicting depression over time. As hypothesized, baseline levels of social support buffered against the effect of exposure to rocket attacks on increased depression. Conversely, social support was associated with increased depression for adolescents who were not exposed to rocket attacks. Findings highlight the potential importance of community mental health efforts to bolster schools, families, and peer groups as protective resources in times of traumatic stress.

  19. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media, Inc.

  20. Identification of Location Specific Feature Points in a Cardiac Cycle Using a Novel Seismocardiogram Spectrum System.

    PubMed

    Lin, Wen-Yen; Chou, Wen-Cheng; Chang, Po-Cheng; Chou, Chung-Chuan; Wen, Ming-Shien; Ho, Ming-Yun; Lee, Wen-Chen; Hsieh, Ming-Jer; Lin, Chung-Chih; Tsai, Tsai-Hsuan; Lee, Ming-Yih

    2018-03-01

Seismocardiogram (SCG) or mechanocardiography is a noninvasive cardiac diagnostic method; however, previous studies used only a single sensor to detect cardiac mechanical activities, which cannot identify location-specific feature points in a cardiac cycle corresponding to the four valvular auscultation locations. In this study, a multichannel SCG spectrum measurement system was proposed and examined for cardiac activity monitoring to overcome problems such as position dependency, time delay, and signal attenuation that occur in traditional single-channel SCG systems. ECG and multichannel SCG signals were simultaneously recorded in 25 healthy subjects. Cardiac echocardiography was conducted at the same time. SCG traces were analyzed and compared with echocardiographic images for feature point identification. Fifteen feature points were identified in the corresponding SCG traces. Among them, six feature points, including left ventricular lateral wall contraction peak velocity, septal wall contraction peak velocity, transaortic peak flow, transpulmonary peak flow, transmitral ventricular relaxation flow, and transmitral atrial contraction flow, were newly identified. These feature points were not observed in previous studies because a single-channel SCG cannot detect the location-specific signals from other locations due to time delay and signal attenuation. As a result, the multichannel SCG spectrum measurement system can record the corresponding cardiac mechanical activities with location-specific SCG signals, and six new feature points were identified with the system. This new modality may help clinical diagnoses of valvular heart diseases and heart failure in the future.

  1. Methane Flux Estimation from Point Sources using GOSAT Target Observation: Detection Limit and Improvements with Next Generation Instruments

    NASA Astrophysics Data System (ADS)

    Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.

    2017-12-01

Atmospheric methane (CH4) plays an important role in global radiative forcing of climate, but its emission estimates carry larger uncertainties than those of carbon dioxide (CO2). The area of anthropogenic emission sources is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system and its footprint can cover 87 km2 with a single detector. By specifying pointing angles and observation time for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we selected two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimates assuming the source is a single point without influx. The observation of the cattle feedlot in Chino, California has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously and the wind direction is stable at the time of GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level. Weak wind shows enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using time series of satellite data.
We propose that the next generation of instruments for accurate anthropogenic CO2 and CH4 flux estimation have improved spatial resolution (~1 km2) to further enhance column density changes. We also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling pattern optimization study that combines local emission source and global survey observations.
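    The mass-balance idea described in this record can be sketched numerically. The following is a minimal illustration under stated assumptions, not the authors' retrieval: the column enhancement (in ppb) is converted to a mass loading using a nominal dry-air column, then multiplied by wind speed and an assumed crosswind plume width. All numeric inputs are hypothetical.

    ```python
    def point_source_flux_kg_s(delta_xch4_ppb, wind_speed_m_s, plume_width_m,
                               air_column_mol_m2=3.57e5, m_ch4_kg_mol=0.01604):
        """Crude mass-balance flux estimate for a point source.

        delta_xch4_ppb    : column-averaged CH4 enhancement over background (ppb)
        wind_speed_m_s    : advecting wind speed
        plume_width_m     : assumed crosswind extent of the enhancement
        air_column_mol_m2 : nominal dry-air column, roughly p0 / (g * M_air)
        """
        # ppb enhancement -> CH4 column (mol/m^2) -> mass loading (kg/m^2)
        loading = delta_xch4_ppb * 1e-9 * air_column_mol_m2 * m_ch4_kg_mol
        # Mass crossing a transect of the plume per unit time (kg/s)
        return loading * wind_speed_m_s * plume_width_m

    # Hypothetical numbers loosely in the range of a large leak scenario
    flux = point_source_flux_kg_s(delta_xch4_ppb=50.0, wind_speed_m_s=3.0,
                                  plume_width_m=1000.0)
    print(f"{flux:.2f} kg/s")  # -> 0.86 kg/s
    ```

    In practice the plume width and the assumption of steady wind dominate the uncertainty, which is consistent with the record's remark that weak-wind cases show clear enhancements but poorly constrained fluxes.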

  2. Proof of Concept for the Trajectory-Level Validation Framework for Traffic Simulation Models

    DOT National Transportation Integrated Search

    2017-10-30

    Based on current practices, traffic simulation models are calibrated and validated using macroscopic measures such as 15-minute averages of traffic counts or average point-to-point travel times. For an emerging number of applications, including conne...

  3. 16 CFR 1500.48 - Technical requirements for determining a sharp point in toys and other articles intended for use...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Section 1500.48 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT...-quarter times the minor dimension of the probe, recess, or opening, measured from any point in the plane...

  4. A summary of measured hydraulic data for the series of steady and unsteady flow experiments over patterned roughness

    USGS Publications Warehouse

    Collins, Dannie L.; Flynn, Kathleen M.

    1979-01-01

    This report summarizes and makes available to other investigators the measured hydraulic data collected during a series of experiments designed to study the effect of patterned bed roughness on steady and unsteady open-channel flow. The patterned effect of the roughness was obtained by clear-cut mowing of designated areas of an otherwise fairly dense coverage of coastal Bermuda grass approximately 250 mm high. All experiments were conducted in the Flood Plain Simulation Facility during the period of October 7 through December 12, 1974. Data from 18 steady flow experiments and 10 unsteady flow experiments are summarized. Measured data included are ground-surface elevations, grass heights and densities, water-surface elevations and point velocities for all experiments. Additional tables of water-surface elevations and measured point velocities are included for the clear-cut areas for most experiments. One complete set of average water-surface elevations and one complete set of measured point velocities are tabulated for each steady flow experiment. Time series data, on a 2-minute time interval, are tabulated for both water-surface elevations and point velocities for each unsteady flow experiment. All data collected, including individual records of water-surface elevations for the steady flow experiments, have been stored on computer disk storage and can be retrieved using the computer programs listed in the attachment to this report. (Kosco-USGS)

  5. A Scanning Quantum Cryogenic Atom Microscope

    NASA Astrophysics Data System (ADS)

    Lev, Benjamin

Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity, high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room to cryogenic temperatures with unprecedented DC-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (2 μm), or 6 nT/√Hz per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly one hundred points with an effective field sensitivity of 600 pT/√Hz per point during the same time as a point-by-point scanner would measure these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly two orders of magnitude improvement in magnetic flux sensitivity (down to 10^-6 Φ0/√Hz) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are for the first time carefully benchmarked by imaging magnetic fields arising from microfabricated wire patterns, using samples that may be scanned, cryogenically cooled, and easily exchanged. We anticipate the SQCRAMscope will provide charge transport images at temperatures from room temperature to ~4 K in unconventional superconductors and topologically nontrivial materials.

  6. Fine Grained Chaos in AdS2 Gravity

    NASA Astrophysics Data System (ADS)

    Haehl, Felix M.; Rozali, Moshe

    2018-03-01

Quantum chaos can be characterized by an exponential growth of the thermal out-of-time-order four-point function up to a scrambling time u^*. We discuss generalizations of this statement for certain higher-point correlation functions. For concreteness, we study the Schwarzian theory of a one-dimensional time reparametrization mode, which describes two-dimensional anti-de Sitter space (AdS2) gravity and the low-energy dynamics of the Sachdev-Ye-Kitaev model. We identify a particular set of 2k-point functions, characterized as being both "maximally braided" and "k-out of time order," which exhibit exponential growth until progressively longer time scales u^{*(k)} ∼ (k−1)u^*. We suggest an interpretation as scrambling of increasingly fine grained measures of quantum information, which correspondingly take progressively longer time to reach their thermal values.

  7. Fine Grained Chaos in AdS_{2} Gravity.

    PubMed

    Haehl, Felix M; Rozali, Moshe

    2018-03-23

Quantum chaos can be characterized by an exponential growth of the thermal out-of-time-order four-point function up to a scrambling time û_*. We discuss generalizations of this statement for certain higher-point correlation functions. For concreteness, we study the Schwarzian theory of a one-dimensional time reparametrization mode, which describes two-dimensional anti-de Sitter space (AdS_2) gravity and the low-energy dynamics of the Sachdev-Ye-Kitaev model. We identify a particular set of 2k-point functions, characterized as being both "maximally braided" and "k-out of time order," which exhibit exponential growth until progressively longer time scales û_*^{(k)} ∼ (k−1)û_*. We suggest an interpretation as scrambling of increasingly fine grained measures of quantum information, which correspondingly take progressively longer time to reach their thermal values.

  8. High heat flux measurements and experimental calibrations/characterizations

    NASA Technical Reports Server (NTRS)

    Kidd, Carl T.

    1992-01-01

Recent progress in techniques employed in the measurement of very high heat-transfer rates in reentry-type facilities at the Arnold Engineering Development Center (AEDC) is described. These advances include thermal analyses applied to transducer concepts used to make these measurements; improved heat-flux sensor fabrication methods, equipment, and procedures for determining the experimental time response of individual sensors; performance of absolute heat-flux calibrations at levels above 2,000 Btu/ft²-sec (2.27 kW/cm²); and innovative methods of performing in-situ run-to-run characterizations of heat-flux probes installed in the test facility. Graphical illustrations of the results of extensive thermal analyses of the null-point calorimeter and coaxial surface thermocouple concepts with application to measurements in aerothermal test environments are presented. Results of time response experiments and absolute calibrations of null-point calorimeters and coaxial thermocouples performed in the laboratory at intermediate to high heat-flux levels are shown. Typical AEDC high-enthalpy arc heater heat-flux data recently obtained with a Calspan-fabricated null-point probe model are included.

  9. Data Processing and Quality Evaluation of a Boat-Based Mobile Laser Scanning System

    PubMed Central

    Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri

    2013-01-01

    Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and quality of a boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aim in data processing was to filter noise points, detect shorelines as well as points below water surface and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for identification of multipath reflections and shoreline delineation. We demonstrate the possibility to measure bathymetry data in shallow (0–1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground points classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess elevation and vertical accuracies of the BoMMS data. PMID:24048340

  10. Data processing and quality evaluation of a boat-based mobile laser scanning system.

    PubMed

    Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri

    2013-09-17

    Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and quality of a boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aim in data processing was to filter noise points, detect shorelines as well as points below water surface and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for identification of multipath reflections and shoreline delineation. We demonstrate the possibility to measure bathymetry data in shallow (0-1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground points classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess elevation and vertical accuracies of the BoMMS data.

  11. Measuring Multiple Resistances Using Single-Point Excitation

    NASA Technical Reports Server (NTRS)

    Hall, Dan; Davies, Frank

    2009-01-01

In a proposed method of determining the resistances of individual DC electrical devices connected in a series or parallel string, no attempt would be made to perform direct measurements on individual devices. Instead, (1) the devices would be instrumented by connecting reactive circuit components in parallel and/or in series with the devices, as appropriate; (2) a pulse or AC voltage excitation would be applied at a single point on the string; and (3) the transient or AC steady-state current response of the string would be measured at that point only. The reactive components associated with each device would be distinct in order to associate a unique time-dependent response with that device.
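    How distinct reactive components could separate the devices' responses can be sketched numerically. This is an illustration of the general idea with made-up component values, not the circuit proposed in the record: each resistor R_i carries a parallel capacitor C_i, a current step excites the string, and the terminal voltage is a sum of exponentials whose amplitudes and time constants recover the individual resistances by classic "exponential peeling".

    ```python
    import numpy as np

    # Hypothetical string: two devices, R_i with parallel C_i, driven by a
    # current step I0.  Terminal voltage: v(t) = sum_i I0*R_i*(1 - exp(-t/tau_i))
    # with tau_i = R_i*C_i, so each device contributes a distinct exponential.
    I0 = 0.01
    R = np.array([100.0, 1000.0])      # ohms (the unknowns to be identified)
    C = np.array([1e-6, 1e-4])         # farads (known, deliberately distinct)
    tau = R * C                        # 1e-4 s and 1e-1 s

    def v(t):
        return sum(I0 * Ri * (1.0 - np.exp(-t / ti)) for Ri, ti in zip(R, tau))

    v_inf = I0 * R.sum()               # steady-state voltage

    def peel(t_window):
        """Fit log(v_inf - v(t)) = log(A) - t/tau on a window where one
        exponential dominates; returns (A, tau)."""
        w = v_inf - v(t_window)
        slope, intercept = np.polyfit(t_window, np.log(w), 1)
        return np.exp(intercept), -1.0 / slope

    # Late window: only the slow device (large tau) is still responding
    A_slow, tau_slow = peel(np.linspace(0.05, 0.3, 200))
    R_slow = A_slow / I0

    # Early window: subtract the recovered slow term, then fit the fast one
    t = np.linspace(0.0, 5e-4, 200)
    w_fast = v_inf - v(t) - A_slow * np.exp(-t / tau_slow)
    slope, intercept = np.polyfit(t, np.log(w_fast), 1)
    R_fast = np.exp(intercept) / I0

    print(R_fast, R_slow)   # ~100 and ~1000 ohms
    ```

    With noise and less widely separated time constants a proper nonlinear fit would replace the peeling, but the principle is the same: each device's RC product tags its contribution to the single-point response.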

  12. Mass Measurements beyond the Major r-Process Waiting Point {sup 80}Zn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baruah, S.; Herlert, A.; Schweikhard, L.

    2008-12-31

    High-precision mass measurements on neutron-rich zinc isotopes {sup 71m,72-81}Zn have been performed with the Penning trap mass spectrometer ISOLTRAP. For the first time, the mass of {sup 81}Zn has been experimentally determined. This makes {sup 80}Zn the first of the few major waiting points along the path of the astrophysical rapid neutron-capture process where neutron-separation energy and neutron-capture Q-value are determined experimentally. The astrophysical conditions required for this waiting point and its associated abundance signatures to occur in r-process models can now be mapped precisely. The measurements also confirm the robustness of the N=50 shell closure for Z=30.

  13. The vertical correction of point cloud strips performed over the coastal zone of changing sea level

    NASA Astrophysics Data System (ADS)

    Gasińska-Kolyszko, Ewa; Furmańczyk, Kazimierz

    2017-10-01

The main principle of lidar is to accurately measure the travel time of laser pulses sent from the system to the target surface. In operation, laser pulses progressively scan the water surface and, combined with the aircraft's motion, perform nearly simultaneous soundings within each strip. The vectors from the aircraft to the sea are tied to the aircraft's position, and the coordinates of the points, X, Y, Z, are calculated at the time of each measurement. Part of each pulse reflects from the sea surface, while the rest passes through the water column and, depending on the water depth, reflects from the seabed. An optical receiver on board the aircraft detects the pulse returns from both the seabed and the sea surface. In tidal basins, lidar strips must be adjusted for changes in sea level, and the survey should be compressed into a few hours around low water; typically, an area of 20 to 30 km2 is covered per hour. The Baltic Sea is an inland sea, and the surveyed area is located in its southwestern part, where meteorological and hydrological conditions cause sea level changes over short periods of time. This study used a lidar survey of the sea surface, carried out within 2 days in the coastal zone of the Baltic Sea, together with sea levels measured six times a day (at 08, 12, 16, 20, 00, and 04 hours) by a water gauge located in the port of Dziwnów (Poland). On the basis of the lidar data, the strips were compared with each other, with the measurement time calculated separately for each flight line. Profiles showing the variability of sea level for neighboring and overlapping strips were generated; the differences were calculated, changes in sea level were identified, and on that basis an adjustment could be performed. Microstation software and the TerraSolid application were used during the research; the latter allowed automatic and manual classification of the point cloud, from which a sea-surface class was distinguished. The point cloud was then adjusted to the flight lines in terms of time and compared.

  14. Acoustic systems for the measurement of streamflow

    USGS Publications Warehouse

    Laenen, Antonius; Smith, Winchell

    1983-01-01

The acoustic velocity meter (AVM), also referred to as an ultrasonic flowmeter, has been an operational tool for the measurement of streamflow since 1965. Very little information is available concerning AVM operation, performance, and limitations. The purpose of this report is to consolidate information in such a manner as to provide a better understanding about the application of this instrumentation to streamflow measurement. AVM instrumentation is highly accurate and nonmechanical. Most commercial AVM systems that measure streamflow use the time-of-travel method to determine a velocity between two points. The systems operate on the principle that point-to-point upstream travel-time of sound is longer than the downstream travel-time, and this difference can be monitored and measured accurately by electronics. AVM equipment has no practical upper limit of measurable velocity if sonic transducers are securely placed and adequately protected. AVM systems used in streamflow measurement generally operate with a resolution of ±0.01 meter per second, but this is dependent on system frequency, path length, and signal attenuation. In some applications the performance of AVM equipment may be degraded by multipath interference, signal bending, signal attenuation, and variable streamline orientation. Presently used minicomputer systems, although expensive to purchase and maintain, perform well. Increased use of AVM systems probably will be realized as smaller, less expensive, and more conveniently operable microprocessor-based systems become readily available. Available AVM equipment should be capable of flow measurement in a wide variety of situations heretofore untried. New signal-detection techniques and communication linkages can provide additional flexibility to the systems so that operation is possible in more river and estuary situations.
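    The time-of-travel principle in this record reduces to a simple formula: for an acoustic path of length L crossing the flow at angle θ, the flow component along the path is v_path = (L/2)(1/t_down − 1/t_up), and the streamwise velocity follows as v_path/cos θ. A minimal sketch with synthetic travel times (not field data):

    ```python
    import math

    def path_velocity(L, t_down, t_up):
        """Flow component along the acoustic path, from the two travel times."""
        return 0.5 * L * (1.0 / t_down - 1.0 / t_up)

    def stream_velocity(L, theta_deg, t_down, t_up):
        """Streamwise velocity, assuming the path crosses the flow at theta."""
        return path_velocity(L, t_down, t_up) / math.cos(math.radians(theta_deg))

    # Synthetic example: sound speed c = 1480 m/s in water, true stream
    # velocity 1.0 m/s, diagonal path of 100 m at 45 degrees to the flow.
    c, u, L, theta = 1480.0, 1.0, 100.0, 45.0
    u_path = u * math.cos(math.radians(theta))
    t_down = L / (c + u_path)          # sound aided by the flow
    t_up = L / (c - u_path)            # sound opposed by the flow
    print(round(stream_velocity(L, theta, t_down, t_up), 6))  # -> 1.0
    ```

    Note that the sound speed c cancels out of the 1/t difference, which is one reason the method is robust: only the path geometry and the timing resolution limit the velocity resolution.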

  15. Mineral content changes in bone associated with damage induced by the electron beam.

    PubMed

    Bloebaum, Roy D; Holmes, Jennifer L; Skedros, John G

    2005-01-01

Energy-dispersive x-ray (EDX) spectroscopy and backscattered electron (BSE) imaging are finding increased use for determining mineral content in microscopic regions of bone. Electron beam bombardment, however, can damage the tissue, leading to erroneous interpretations of mineral content. We performed elemental (EDX) and mineral content (BSE) analyses on bone tissue in order to quantify observable deleterious effects in the context of (1) prolonged scanning time, (2) scan versus point (spot) mode, (3) low versus high magnification, and (4) embedding in poly-methylmethacrylate (PMMA). Undemineralized cortical bone specimens from adult human femora were examined in three groups: 200x embedded, 200x unembedded, and 1000x embedded. Coupled BSE/EDX analyses were conducted five consecutive times, with no location analyzed more than five times. Variations in the relative proportions of calcium (Ca), phosphorus (P), and carbon (C) were measured using EDX spectroscopy, and mineral content variations were inferred from changes in mean gray levels ("atomic number contrast") in BSE images captured at 20 keV. In point mode at 200x, the embedded specimens exhibited a significant increase in Ca by the second measurement (7.2%, p < 0.05); in scan mode, a small and statistically nonsignificant increase (1.0%) was seen by the second measurement. Changes in P were similar, although the increases were less. The apparent increases in Ca and P likely result from decreases in C: -3.2% (p < 0.05) in point mode and -0.3% in scan mode by the second measurement. Analysis of unembedded specimens showed similar results. In contrast to embedded specimens at 200x, the 1000x data showed significantly larger variations in the proportions of Ca, P, and C by the second or third measurement in both scan and point mode. At both magnifications, BSE image gray level values increased (suggesting increased mineral content) by the second measurement, with increases up to 23% in point mode.
These results show that mineral content measurements can be reliable when using coupled BSE/EDX analyses in PMMA-embedded bone if lower magnifications are used in scan mode and if prolonged exposure to the electron beam is avoided. When point mode is used to analyze minute regions, adjustments in accelerating voltages and probe current may be required to minimize damage.

  16. The use of low density high accuracy (LDHA) data for correction of high density low accuracy (HDLA) point cloud

    NASA Astrophysics Data System (ADS)

    Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.

    2016-06-01

Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high density data of the whole object in a short time but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is low density of data, while for non-contact methods it is low accuracy. In this paper a method for fusion of data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), is presented to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from contact measurements are treated as a reference for the corresponding point from non-contact measurement. A transformation enabling displacement of characteristic points from the optical measurement to their matches from contact measurements is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
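    One standard way to realize the step this record describes, determining a transformation from paired characteristic points and applying it to the whole cloud, is the Kabsch/Procrustes least-squares rigid fit. The sketch below assumes a rigid (rotation plus translation) transformation and synthetic data; the paper's virtual-marker construction itself is not reproduced here.

    ```python
    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares rotation R and translation t with dst ~= src @ R.T + t,
        estimated from paired points (Kabsch algorithm via SVD)."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)          # cross-covariance of the pairs
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs

    # Synthetic stand-in for the real data: optical "virtual markers" aligned
    # to their contact-measured (CMM) counterparts, then the resulting
    # transformation applied to the full optical point cloud.
    rng = np.random.default_rng(0)
    cloud_optical = rng.normal(size=(500, 3))       # dense HDLA cloud
    markers_optical = cloud_optical[:6]             # characteristic points
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([0.5, -0.2, 1.0])
    markers_cmm = markers_optical @ R_true.T + t_true   # LDHA reference points

    R, t = rigid_fit(markers_optical, markers_cmm)
    cloud_corrected = cloud_optical @ R.T + t           # whole cloud moved
    ```

    With noisy markers the fit becomes a least-squares compromise rather than an exact recovery, which is consistent with the residual improvements (around 200 μm on the plane) reported in the record.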

  17. Time Reversal Methods for Structural Health Monitoring of Metallic Structures Using Guided Waves

    DTIC Science & Technology

    2011-09-01

measure elastic properties of thin isotropic materials and laminated composite plates. Two types of waves propagate: a symmetric wave and an antisymmetric...compare it to the original signal. In this time reversal procedure, wave propagation from point A to point B can be modeled as a convolution ...where * is the convolution operator and the transducer transmit and receive transfer functions are neglected for simplification. In the frequency
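    The convolution model in this excerpt has a well-known consequence that can be shown in a few lines: if the received signal is time-reversed and sent back through the same channel, the output at the matched instant is the channel autocorrelation peak, i.e., the energy refocuses. A toy numpy illustration with a random synthetic impulse response (not a real guided-wave channel):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    h = rng.normal(size=64)            # synthetic channel impulse response

    # Forward pass: an impulse through the channel yields y = x * h = h.
    y = np.convolve([1.0], h)

    # Time-reversal pass: send reversed(y) back through the same channel.
    z = np.convolve(y[::-1], h)

    # The refocused peak sits at the matched lag and equals the signal energy
    # (this is just the autocorrelation of h, maximal at zero lag).
    peak_index = int(np.argmax(z))
    assert peak_index == len(y) - 1
    assert np.isclose(z[peak_index], np.sum(y**2))
    ```

    This matched-filter property is what makes time reversal attractive for structural health monitoring: deviations of the reconstructed signal from the original indicate changes (damage) in the propagation path.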

  18. Note to Budget Cutters: The Arts Are Good Business--Multiple Studies Point to Arts Education as an Important Economic Engine

    ERIC Educational Resources Information Center

    Olson, Catherine Applefeld

    2009-01-01

    They say desperate times call for desperate measures. But in this time of economic uncertainty, the desperate cutting of budgets for arts funding and, by extension, all types of arts education, including music, is not prudent. That is the consensus of several national and local studies, which converge on a single point--that the arts actually can…

  19. Space-time measurements of oceanic sea states

    NASA Astrophysics Data System (ADS)

    Fedele, Francesco; Benetazzo, Alvise; Gallego, Guillermo; Shih, Ping-Chang; Yezzi, Anthony; Barbariol, Francesco; Ardhuin, Fabrice

    2013-10-01

Stereo video techniques are effective for estimating the space-time wave dynamics over an area of the ocean. Indeed, a stereo camera view allows retrieval of both spatial and temporal data whose statistical content is richer than that of time series data retrieved from point wave probes. We present an application of the Wave Acquisition Stereo System (WASS) for the analysis of offshore video measurements of gravity waves in the Northern Adriatic Sea and near the southern seashore of the Crimean peninsula, in the Black Sea. We use classical epipolar techniques to reconstruct the sea surface from the stereo pairs sequentially in time, viz. a sequence of spatial snapshots. We also present a variational approach that exploits the entire image data set, providing a global space-time imaging of the sea surface, viz. simultaneous reconstruction of several spatial snapshots of the surface in order to guarantee continuity of the sea surface both in space and time. Analysis of the WASS measurements shows that the sea surface can be accurately estimated in space and time together, yielding associated directional spectra and wave statistics at a point in time that agree well with probabilistic models. In particular, WASS stereo imaging is able to capture typical features of the wave surface, especially the crest-to-trough asymmetry due to second order nonlinearities, and the observed shapes of large waves are fairly well described by theoretical models based on the theory of quasi-determinism (Boccotti, 2000). Further, we investigate space-time extremes of the observed stationary sea states, viz. the largest surface wave heights expected over a given area during the sea state duration. The WASS analysis provides the first experimental proof that a space-time extreme is generally larger than that observed in time via point measurements, in agreement with the predictions based on stochastic theories for global maxima of Gaussian fields.

  20. Body sway, aim point fluctuation and performance in rifle shooters: inter- and intra-individual analysis.

    PubMed

    Ball, Kevin A; Best, Russell J; Wrigley, Tim V

    2003-07-01

    In this study, we examined the relationships between body sway, aim point fluctuation and performance in rifle shooting on an inter- and intra-individual basis. Six elite shooters performed 20 shots under competition conditions. For each shot, body sway parameters and four aim point fluctuation parameters were quantified for the time periods 5 s to shot, 3 s to shot and 1 s to shot. Three parameters were used to indicate performance. An AMTI LG6-4 force plate was used to measure body sway parameters, while a SCATT shooting analysis system was used to measure aim point fluctuation and shooting performance. Multiple regression analysis indicated that body sway was related to performance for four shooters. Also, body sway was related to aim point fluctuation for all shooters. These relationships were specific to the individual, with the strength of association, parameters of importance and time period of importance different for different shooters. Correlation analysis of significant regressions indicated that, as body sway increased, performance decreased and aim point fluctuation increased for most relationships. We conclude that body sway and aim point fluctuation are important in elite rifle shooting and performance errors are highly individual-specific at this standard. Individual analysis should be a priority when examining elite sports performance.

  1. Longitudinal Effects on Early Adolescent Language: A Twin Study

    PubMed Central

    DeThorne, Laura Segebart; Smith, Jamie Mahurin; Betancourt, Mariana Aparicio; Petrill, Stephen A.

    2016-01-01

    Purpose We evaluated genetic and environmental contributions to individual differences in language skills during early adolescence, measured by both language sampling and standardized tests, and examined the extent to which these genetic and environmental effects are stable across time. Method We used structural equation modeling on latent factors to estimate additive genetic, shared environmental, and nonshared environmental effects on variance in standardized language skills (i.e., Formal Language) and productive language-sample measures (i.e., Productive Language) in a sample of 527 twins across 3 time points (mean ages 10–12 years). Results Individual differences in the Formal Language factor were influenced primarily by genetic factors at each age, whereas individual differences in the Productive Language factor were primarily due to nonshared environmental influences. For the Formal Language factor, the stability of genetic effects was high across all 3 time points. For the Productive Language factor, nonshared environmental effects showed low but statistically significant stability across adjacent time points. Conclusions The etiology of language outcomes may differ substantially depending on assessment context. In addition, the potential mechanisms for nonshared environmental influences on language development warrant further investigation. PMID:27732720

  2. MSL: A Measure to Evaluate Three-dimensional Patterns in Gene Expression Data

    PubMed Central

    Gutiérrez-Avilés, David; Rubio-Escudero, Cristina

    2015-01-01

    Microarray technology is widely used in biological research because of its ability to monitor RNA concentration levels. Analysing the data it generates is a computational challenge due to their characteristics. Clustering techniques are commonly applied to create groups of genes that exhibit similar behavior. Biclustering relaxes the grouping constraints, allowing genes to be evaluated under only a subset of the conditions. Triclustering emerged for the analysis of longitudinal experiments in which genes are evaluated under certain conditions at several time points. These triclusters expose hidden information in the form of behavior patterns from temporal microarray experiments, relating subsets of genes, experimental conditions, and time points. We present an evaluation measure for triclusters called the Multi Slope Measure (MSL), based on the similarity among the angles of the slopes of each profile across the genes, conditions, and time points of the tricluster. PMID:26124630
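
    The slope-angle idea behind MSL can be illustrated with a minimal sketch (a hypothetical simplification for evenly spaced time points, not the published formula): compute the angle of each segment of an expression profile, then score how far the profiles' angles diverge.

```python
import math

def slope_angles(profile):
    """Angles (radians) of the segments between consecutive expression values,
    assuming unit spacing between time points."""
    return [math.atan2(b - a, 1.0) for a, b in zip(profile, profile[1:])]

def angle_similarity(profiles):
    """Mean pairwise absolute difference between corresponding slope angles.
    0.0 means all profiles are parallel (a 'perfect' pattern); larger is worse."""
    angle_sets = [slope_angles(p) for p in profiles]
    diffs = []
    for i in range(len(angle_sets)):
        for j in range(i + 1, len(angle_sets)):
            diffs.extend(abs(x - y) for x, y in zip(angle_sets[i], angle_sets[j]))
    return sum(diffs) / len(diffs)

# Two parallel profiles score 0; divergent profiles score higher.
parallel = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
print(angle_similarity(parallel))  # 0.0
```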

  3. A micro-CMM with metrology frame for low uncertainty measurements

    NASA Astrophysics Data System (ADS)

    Brand, Uwe; Kirchhoff, Juergen

    2005-12-01

    A conventional bridge-type coordinate measuring machine (CMM) with an opto-tactile fibre probe for the measurement of microstructures has been equipped with a metrology frame in order to reduce its measurement uncertainty. The frame contains six laser interferometers for high-precision position and guiding deviation measurements, a Zerodur cuboid with three measuring surfaces for the laser interferometers to which the fibre probe is fixed, and an invar frame which supports the measuring objects and to which the reference mirrors of the interferometers are fixed. The orthogonality and flatness deviations of the Zerodur measuring surfaces have been measured and taken into account in the equation of motion of the probing sphere. As a first performance test, the flatness of an optical flat has been measured with the fibre probe. Measuring-depth-dependent and probing-force-dependent shifts of the probing position were observed. In order to reduce the scattering of the probing points, 77 measurements were averaged for one coordinate point to be measured. This has led to measuring times of several hours for one plane and strong thermal drifts of the measured probing points.
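
    The rationale for averaging 77 probings follows from the statistics of independent noise: the standard error of the mean of n measurements shrinks by a factor of sqrt(n), so 77 probings reduce the scatter to roughly 11% of a single probing. A small sketch (synthetic noise, not CMM data):

```python
import math
import random

random.seed(0)
sigma = 1.0   # scatter of a single probing (arbitrary units)
n = 77        # probings averaged per measured coordinate point

# Theory: standard error of the mean of n independent probings is sigma / sqrt(n).
reduction = 1.0 / math.sqrt(n)
print(f"scatter reduced to {reduction:.1%} of a single probing")  # 11.4%

# Empirical check: the spread of many simulated 77-probing averages.
means = [sum(random.gauss(0.0, sigma) for _ in range(n)) / n for _ in range(2000)]
mu = sum(means) / len(means)
empirical = math.sqrt(sum((m - mu) ** 2 for m in means) / len(means))
```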

  4. Measuring the Developing Therapeutic Relationship Between Pregnant Women and Community Health Workers Over the Course of the Pregnancy in a Study Intervention

    PubMed Central

    Lichtveld, Maureen Y.; Shankar, Arti; Mundorf, Chris; Hassan, Anna; Drury, Stacy

    2016-01-01

    The Scale to Assess the Therapeutic Relationship in Community Mental Health Care (STAR) is a frequently-administered tool for measuring therapeutic relationships between clinicians and patients. This manuscript tested the STAR’s psychometric properties within a community health worker (CHW)-led intervention study involving pregnant and postpartum women. Women (n = 141) enrolled in the study completed the 12-item participant STAR survey (STAR-P) at two time points over the course of pregnancy and at two time points after delivery. The factor structure of the STAR-P proved to be unstable with this population. However, a revised 9-item STAR-P revealed a two-factor model of positive and negative interactions, and demonstrated strong internal consistency at postpartum time points. The revised STAR-P shows strong psychometric properties, and is suitable for use to evaluate the relationship developed between CHWs and pregnant and postpartum women in an intervention program. PMID:27116361

  6. Inhomogeneous point-process entropy: An instantaneous measure of complexity in discrete systems

    NASA Astrophysics Data System (ADS)

    Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2014-05-01

    Measures of entropy have been widely used to characterize complexity, particularly in physiological dynamical systems modeled in discrete time. Current approaches associate these measures with single finite values within an observation window, and thus cannot characterize the system's evolution at each moment in time. Here, we propose a new definition of approximate and sample entropy based on inhomogeneous point-process theory. The discrete time series is modeled through probability density functions, which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through probability functions, the novel indices are able to provide instantaneous tracking of the system complexity. The new measures are tested on synthetic data, as well as on real data gathered from heartbeat dynamics of healthy subjects and patients with congestive heart failure, and from gait recordings of short walks by young and elderly subjects. Results show that instantaneous complexity is able to effectively track the system dynamics and is not affected by statistical noise properties.
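
    For contrast with the instantaneous point-process indices proposed here, the conventional window-based sample entropy that they generalize can be sketched as follows (a minimal implementation with a fixed tolerance r; production code would scale r by the series' standard deviation):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Conventional SampEn: -ln(A/B), where B counts template matches of
    length m and A matches of length m+1, within tolerance r (Chebyshev)."""
    def count_matches(length):
        templates = [x[i:i + length] for i in range(len(x) - length + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A strictly periodic series is highly regular, so its SampEn is near zero.
periodic = [0.0, 1.0] * 50
print(sample_entropy(periodic))
```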

  7. Treatment of hyperthyroidism with radioiodine targeted activity: A comparison between two dosimetric methods.

    PubMed

    Amato, Ernesto; Campennì, Alfredo; Leotta, Salvatore; Ruggeri, Rosaria M; Baldari, Sergio

    2016-06-01

    Radioiodine therapy is an effective and safe treatment for hyperthyroidism due to Graves' disease, toxic adenoma, or toxic multinodular goiter. We compared the outcomes of a traditional calculation method, based on an analytical fit of the uptake curve and subsequent dose calculation with the MIRD approach, with an alternative computational approach based on a formulation implemented in a public-access website, searching for the best timing of radioiodine uptake measurements in pre-therapeutic dosimetry. We report on sixty-nine hyperthyroid patients who were treated after pre-therapeutic dosimetry calculated by fitting a six-point uptake curve (3-168h). In order to evaluate the results of the radioiodine treatment, patients were followed up to sixty-four months after treatment (mean 47.4±16.9). Patient dosimetry was then retrospectively recalculated with the two above-mentioned methods. Several time schedules for uptake measurements were considered, with different timings and total numbers of points. Early time schedules, sampling uptake only up to 48h, do not allow an accurate treatment plan to be set up, while schedules including the measurement at one week give significantly better results. The analytical fit applied to the three-point time schedule 3(6)-24-168h gave significantly more accurate results than the website approach exploiting either the same schedule or the single measurement at 168h. Consequently, the best strategy among those considered is to sample the uptake at 3(6)-24-168h and carry out an analytical fit of the curve, while extra measurements at 48 and 72h yield only marginal improvements in the accuracy of the therapeutic activity determination. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  8. Normalization methods in time series of platelet function assays

    PubMed Central

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). Extracting meaningful statistical and clinical information from the high-dimensional data spaces of temporal multivariate clinical data, represented as multivariate time series, is a complex task. Building insightful visualizations for multivariate time series demands adequate use of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, and the approach best suited to platelet function data series is discussed. Normalization was calculated per assay (test) across all time points and per time point across all tests. Interquartile range, range transformation, and z-transformation preserved the correlations found by the Spearman correlation test when normalization was performed per assay (test) across all time points. When normalizing per time point across all tests, no correlation could be discerned from the charts, nor when all data were normalized as a single dataset. PMID:27428217
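
    The four transformations can be sketched as follows (a minimal version operating on one list of values; the article applies them per assay across time points and per time point across assays):

```python
from statistics import mean, stdev, quantiles

def z_transform(xs):
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def range_transform(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def iqr_transform(xs):
    q1, q2, q3 = quantiles(xs, n=4)            # quartiles
    return [(x - q2) / (q3 - q1) for x in xs]  # centre on median, scale by IQR

def proportion_transform(xs):
    total = sum(xs)
    return [x / total for x in xs]

# Hypothetical ADP-aggregometry readings across five time points
adp = [42.0, 55.0, 61.0, 47.0, 58.0]
print(range_transform(adp))   # rescaled into [0, 1]
```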

  9. An Improved Method for Real-Time 3D Construction of DTM

    NASA Astrophysics Data System (ADS)

    Wei, Yi

    This paper discusses real-time optimal construction of DTM via two measures. One is to improve the coordinate transformation of discrete points acquired from lidar: after processing a total of 10000 data points, the formula-based transformation cost 0.810 s, while the table look-up transformation cost 0.188 s, indicating that the latter is superior to the former. The other is to adjust the density of the point cloud acquired from lidar: a suitable proportion of the data points is used for 3D construction to meet different needs for 3D imaging, ultimately increasing the efficiency of DTM construction while saving system resources.
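
    The table look-up idea can be sketched with a hypothetical polar-to-Cartesian conversion using trigonometric tables precomputed at 0.1° resolution (the paper's actual transformation and timings come from its lidar pipeline):

```python
import math

STEP = 0.1                                   # table resolution in degrees
COS = [math.cos(math.radians(i * STEP)) for i in range(3600 + 1)]
SIN = [math.sin(math.radians(i * STEP)) for i in range(3600 + 1)]

def polar_to_xy_formula(r, deg):
    """Direct computation: one radians conversion plus two trig calls per point."""
    a = math.radians(deg)
    return r * math.cos(a), r * math.sin(a)

def polar_to_xy_lookup(r, deg):
    """Table look-up: quantize the angle and index the precomputed tables."""
    i = int(round(deg / STEP)) % 3600
    return r * COS[i], r * SIN[i]

x1, y1 = polar_to_xy_formula(12.5, 30.0)
x2, y2 = polar_to_xy_lookup(12.5, 30.0)
print(abs(x1 - x2), abs(y1 - y2))  # agreement within the table's quantization
```

The trade-off is classic: the table costs memory and quantizes the angle, but replaces per-point trigonometry with an array index, which is why the look-up variant can be several times faster in a tight loop.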

  10. Topical nasal decongestant oxymetazoline (0.05%) provides relief of nasal symptoms for 12 hours.

    PubMed

    Druce, H M; Ramsey, D L; Karnati, S; Carr, A N

    2018-05-22

    Nasal congestion, often referred to as stuffy nose or blocked nose, is one of the most prevalent and bothersome symptoms of an upper respiratory tract infection. Oxymetazoline, a widely used intranasal decongestant, offers fast symptom relief, but little is known about its duration of effect. We report the results of 2 randomized, double-blind, vehicle-controlled, single-dose, parallel clinical studies (Study 1, n=67; Study 2, n=61) in which the efficacy of an oxymetazoline (0.05%; Oxy) nasal spray in patients with acute coryzal rhinitis was assessed over a 12-hour period. Data were collected in both studies on subjective relief of nasal congestion (6-point nasal congestion scale) and on objective measures of nasal patency (anterior rhinomanometry). A pooled analysis showed statistically significant differences between 0.05% oxymetazoline and vehicle in the change from baseline in subjective nasal congestion at each hourly time point from Hour 1 through Hour 12 (marginally significant at Hour 11). The objective measure of nasal airflow was statistically significant at each time point up to 12 hours. Adverse events on either treatment were infrequent. The number of subjects who achieved an improvement of at least 1.0 in subjective nasal congestion score on the 6-point scale was significantly higher in the Oxy group vs. vehicle at all hourly time points. This study shows for the first time that oxymetazoline provides both statistically significant and clinically meaningful relief of nasal congestion and improves nasal airflow for up to 12 hours after a single dose.

  11. Causality in time-neutral cosmologies

    NASA Astrophysics Data System (ADS)

    Kent, Adrian

    1999-02-01

    Gell-Mann and Hartle (GMH) have recently considered time-neutral cosmological models in which the initial and final conditions are independently specified, and several authors have investigated experimental tests of such models. We point out here that GMH time-neutral models can allow superluminal signaling, in the sense that it can be possible for observers in those cosmologies, by detecting and exploiting regularities in the final state, to construct devices which send and receive signals between space-like separated points. In suitable cosmologies, any single superluminal message can be transmitted with probability arbitrarily close to one by the use of redundant signals. However, the outcome probabilities of quantum measurements generally depend on precisely which past and future measurements take place. As the transmission of any signal relies on quantum measurements, its transmission probability is similarly context dependent. As a result, the standard superluminal signaling paradoxes do not apply. Despite their unusual features, the models are internally consistent. These results illustrate an interesting conceptual point. The standard view of Minkowski causality is not an absolutely indispensable part of the mathematical formalism of relativistic quantum theory. It is contingent on the empirical observation that naturally occurring ensembles can be naturally pre-selected but not post-selected.

  12. Observing Bridge Dynamic Deflection in Green Time by Information Technology

    NASA Astrophysics Data System (ADS)

    Yu, Chengxin; Zhang, Guojian; Zhao, Yongqian; Chen, Mingzhi

    2018-01-01

    Because traditional surveying methods are of limited use for observing bridge dynamic deflection, information technology is adopted to observe it during green time. 'Information technology' here means that digital cameras photograph the bridge during red time to obtain a zero image; a series of successive images is then photographed during green time. Deformation point targets are identified and located by the Hough transform. With reference to the control points, the deformation values of these deformation points are obtained by differencing each successive image with the zero image. Results show that the average measurement accuracies of C0 are 0.46 pixels, 0.51 pixels and 0.74 pixels in the X, Z and combined directions, and those of C1 are 0.43 pixels, 0.43 pixels and 0.67 pixels in the X, Z and combined directions in these tests. The maximal bridge deflection is 44.16 mm, which is less than 75 mm (the bridge deflection tolerance). The approach presented here can monitor bridge dynamic deflection and depict deflection trend curves of the bridge in real time, providing data support for on-site decisions about bridge structural safety.
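
    A stripped-down version of the differencing idea might look like the following sketch, which uses a brightness-weighted centroid on synthetic arrays in place of Hough-transform target detection, and a hypothetical pixel-to-millimetre calibration factor:

```python
def centroid(img):
    """Brightness-weighted centre (x, z) of a 2-D intensity grid."""
    total = sx = sz = 0.0
    for z, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            sx += v * x
            sz += v * z
    return sx / total, sz / total

def blank(w, h):
    return [[0.0] * w for _ in range(h)]

# Zero image: bright target at (x=4, z=3); green-time image: target shifted.
zero_img = blank(10, 8)
zero_img[3][4] = 1.0
green_img = blank(10, 8)
green_img[5][4] = 1.0                      # deflected 2 pixels in z

dx = centroid(green_img)[0] - centroid(zero_img)[0]
dz = centroid(green_img)[1] - centroid(zero_img)[1]
mm_per_pixel = 1.6                         # hypothetical calibration factor
print(dx, dz * mm_per_pixel)               # 0.0 3.2
```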

  13. A comparison of speciated atmospheric mercury at an urban center and an upwind rural location

    USGS Publications Warehouse

    Rutter, A.P.; Schauer, J.J.; Lough, G.C.; Snyder, D.C.; Kolb, C.J.; Von Klooster, S.; Rudolf, T.; Manolopoulos, H.; Olson, M.L.

    2008-01-01

    Gaseous elemental mercury (GEM), particulate mercury (PHg) and reactive gaseous mercury (RGM) were measured every other hour at a rural location in south central Wisconsin (Devil's Lake State Park, WI, USA) between April 2003 and March 2004, and at a predominantly downwind urban site in southeastern Wisconsin (Milwaukee, WI, USA) between June 2004 and May 2005. Annual averages of GEM, PHg, and RGM at the urban site were statistically higher than those measured at the rural site. Pollution roses of GEM and reactive mercury (RM; sum of PHg and RGM) at the rural and urban sites revealed the influences of point source emissions in surrounding counties that were consistent with the US EPA 1999 National Emission Inventory and the 2003-2005 US EPA Toxics Release Inventory. Source-receptor relationships at both sites were studied by quantifying the impacts of point sources on mercury concentrations. Time series of GEM, PHg, and RGM concentrations were sorted into two categories: time periods dominated by impacts from point sources, and time periods dominated by mercury from non-point sources. The analysis revealed average point source contributions to GEM, PHg, and RGM concentration measurements to be significant over the year-long studies. At the rural site, contributions to annual average concentrations were: GEM (2%; 0.04 ng m-3); and RM (48%; 5.7 pg m-3). At the urban site, contributions to annual average concentrations were: GEM (33%; 0.81 ng m-3); and RM (64%; 13.8 pg m-3). © The Royal Society of Chemistry.

  14. Language development of internationally adopted children: Adverse early experiences outweigh the age of acquisition effect.

    PubMed

    Rakhlin, Natalia; Hein, Sascha; Doyle, Niamh; Hart, Lesley; Macomber, Donna; Ruchkin, Vladislav; Tan, Mei; Grigorenko, Elena L

    2015-01-01

    We compared English language and cognitive skills between internationally adopted children (IA; mean age at adoption=2.24, SD=1.8) and their non-adopted peers from the US reared in biological families (BF) at two time points. We also examined the relationships between outcome measures and age at initial institutionalization, length of institutionalization, and age at adoption. On measures of general language, early literacy, and non-verbal IQ, the IA group performed significantly below their age-peers reared in biological families at both time points, but the group differences disappeared on receptive vocabulary and kindergarten concept knowledge at the second time point. Furthermore, the majority of children reached normative age expectations between 1 and 2 years post-adoption on all standardized measures. Although the age at adoption, age of institutionalization, length of institutionalization, and time in the adoptive family all demonstrated significant correlations with one or more outcome measures, the negative relationship between length of institutionalization and child outcomes remained most robust after controlling for the other variables. Results point to much flexibility and resilience in children's capacity for language acquisition as well as the potential primacy of length of institutionalization in explaining individual variation in IA children's outcomes. (1) Readers will be able to understand the importance of pre-adoption environment on language and early literacy development in internationally adopted children. (2) Readers will be able to compare the strength of the association between the length of institutionalization and language outcomes with the strength of the association between the latter and the age at adoption. 
(3) Readers will be able to understand that internationally adopted children are able to reach age expectations on expressive and receptive language measures despite adverse early experiences and a replacement of their first language with an adoptive language. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. [Effect of hydroxyethyl starch 130/0.4 on S100B protein level and cerebral oxygen metabolism in open cardiac surgery under cardiopulmonary bypass].

    PubMed

    Pi, Zhi-bing; Tan, Guan-xian; Wang, Jun-lu

    2007-07-17

    To observe the effect of hydroxyethyl starch (HES) 130/0.4 on S100B protein levels and cerebral oxygen metabolism in open cardiac surgery under cardiopulmonary bypass (CPB), and to explore whether 6% HES 130/0.4 used as a priming solution protects against cerebral injury during CPB and, if so, by what probable mechanism. Forty patients with atrial septal defect or ventricular septal defect scheduled for elective surgical repair under CPB with moderate hypothermia were randomly divided into two equal groups: an HES 130/0.4 group (HES group), in which HES 130/0.4 (Voluven) was used as the priming solution, and a gelatin group (GEL group), in which gelofusine (succinylated gelatin) was used as the priming solution. ECG, heart rate (HR), blood pressure (BP), mean arterial pressure (MAP), central venous pressure (CVP), arterial partial pressure of oxygen (P(a)O(2)), end-tidal partial pressure of carbon dioxide (P(et)CO(2)) and body temperature (naso-pharyngeal and rectal) were continuously monitored during the operation. Blood samples were obtained from the central vein for determination of blood concentrations of S100B protein at the following time points: before CPB (T(0)), 20 minutes after the beginning of CPB (T(1)), immediately after the termination of CPB (T(2)), 60 minutes after the termination of CPB (T(3)), and 24 hours after the termination of CPB (T(4)). Serum S100B protein levels were measured by ELISA. At the same time points, blood samples were obtained from the jugular vein and radial artery for blood gas analysis and measurement of blood glucose, from which the ratio of the cerebral metabolic rate of oxygen to the cerebral metabolic rate of glucose (CMRO(2)/CMR(GLU)) was calculated. Compared with the values immediately before CPB (T(0)), the S100B protein levels in both groups began to increase at T(1), peaked at T(2), decreased gradually from T(3) onward, and at T(4) were still significantly higher than before CPB (all P < 0.01); at every time point the S100B levels in the HES group were significantly lower than those in the GEL group (all P < 0.01). The S(jv)O(2) and CMRO(2)/CMR(GLU) levels of both groups increased at T(1), decreased at T(2) and T(3), and returned to normal at T(4). In the GEL group there were no significant differences between any two time points, whereas in the HES group the S(jv)O(2) and CMRO(2)/CMR(GLU) levels at T(1) were significantly higher than those at the other time points (P < 0.05 or P < 0.01). S100B protein increases significantly in open cardiac surgery under CPB. HES 130/0.4 lowered S100B protein levels from the beginning of CPB until one hour after its termination, probably by improving cerebral oxygen metabolism. 6% HES 130/0.4 as a priming solution may play a protective role in reducing cerebral injury during CPB and open cardiac surgery.

  16. Evaluating Mass Analyzers as Candidates for Small, Portable, Rugged Single Point Mass Spectrometers for Analysis of Permanent Gases

    NASA Technical Reports Server (NTRS)

    Arkin, C. Richard; Ottens, Andrew K.; Diaz, Jorge A.; Griffin, Timothy P.; Follestein, Duke; Adams, Fredrick; Steinrock, T. (Technical Monitor)

    2001-01-01

    For Space Shuttle launch safety, there is a need to monitor the concentrations of H2, He, O2 and Ar around the launch vehicle. Currently a large mass spectrometry system performs this task, using long transport lines to draw in samples. There is great interest in replacing this stationary system with several miniature, portable, rugged mass spectrometers that act as point sensors placed at the sampling point. Five commercial and two non-commercial analyzers are evaluated. The five commercial systems are the Leybold Inficon XPR-2 linear quadrupole, the Stanford Research Systems (SRS-100) linear quadrupole, the Ferran linear quadrupole array, the ThermoQuest Polaris-Q quadrupole ion trap, and the IonWerks time-of-flight (TOF). The non-commercial systems are a compact double-focusing sector (CDFMS) developed at the University of Minnesota and a quadrupole ion trap (UF-IT) developed at the University of Florida. System volume is determined by measuring the volume of the entire system, including the mass analyzer, its associated electronics, the associated vacuum system, the high-vacuum pump and the roughing pump, together with any ion gauge controllers or other required equipment; computers are not included. Scan time is the time required for one scan to be acquired and the data to be transferred; it is determined by measuring the time required to acquire a known number of scans and dividing by that number of scans. Limit of detection is determined by first performing a zero-span calibration (using a 10-point data set); the limit of detection (LOD) is then defined as 3 times the standard deviation of the zero data set. (An LOD of 10 ppm or less is considered acceptable.)
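
    The zero-span LOD criterion can be sketched with a hypothetical 10-point blank data set:

```python
from statistics import stdev

# Hypothetical 10-point zero-span (blank) readings for H2, in ppm
zero_readings = [1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7, 1.0, 1.1, 0.9]

lod = 3 * stdev(zero_readings)     # LOD = 3 x standard deviation of the blanks
acceptable = lod <= 10.0           # the study's 10 ppm acceptance threshold
print(f"LOD = {lod:.2f} ppm")      # prints: LOD = 0.55 ppm
```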

  17. Automated measurement of cattle surface temperature and its correlation with rectal temperature

    PubMed Central

    Ren, Kang; Chen, XiaoLi; Lu, YongQiang; Wang, Dong

    2017-01-01

    The body temperature of cattle varies regularly with both the reproductive cycle and disease status. Establishing an automatic method for monitoring body temperature may facilitate better management of reproduction and disease control in cattle. Here, we developed an Automatic Measurement System for Cattle’s Surface Temperature (AMSCST) that measures the temperature of the metatarsus via a special shell designed to fit the anatomy of the cattle’s hind leg. Using AMSCST, the surface temperature (ST) on the metatarsus of the hind leg was measured every hour over 24-hour periods in three test seasons. Based on the ST and the rectal temperature (RT), detected by AMSCST and a mercury thermometer respectively, a linear mixed model was established with both time point and season as fixed effects. Unary linear correlation and Bland-Altman analyses indicated that the temperatures measured by AMSCST were closely correlated with those measured by mercury thermometer (R2 = 0.998), suggesting that AMSCST is an accurate and reliable way to detect cattle’s body temperature. Statistical analysis showed that the differences in ST among the three seasons, and among the different time points, were significant (P<0.05), and the differences in RT among the different time points were likewise significant (P<0.05). The prediction accuracy of the mixed model was verified by 10-fold cross-validation. The average difference between measured RT and predicted RT was about 0.10 ± 0.10°C with an association coefficient of 0.644, indicating the feasibility of this model for measuring cattle body temperature. Thus, an automated technology for accurately measuring cattle body temperature was accomplished by designing a dedicated device and establishing the AMSCST system. PMID:28426682

  18. Statistical process control: A feasibility study of the application of time-series measurement in early neurorehabilitation after acquired brain injury.

    PubMed

    Markovic, Gabriela; Schult, Marie-Louise; Bartfai, Aniko; Elg, Mattias

    2017-01-31

    Progress in early cognitive recovery after acquired brain injury is uneven and unpredictable, making the evaluation of rehabilitation complex. Time-series measurement allows statistically significant change to be distinguished from ordinary process variation. To evaluate the feasibility of using a time-series method, statistical process control, in early cognitive rehabilitation. Participants were 27 patients with acquired brain injury undergoing interdisciplinary rehabilitation of attention within 4 months post-injury. The outcome measure, the Paced Auditory Serial Addition Test, was analysed using statistical process control. Statistical process control identifies if and when change occurs in the process, according to 3 patterns: rapid, steady or stationary performers. The statistical process control method was adjusted, in terms of constructing the baseline and the total number of measurement points, in order to measure a process undergoing change. Statistical process control methodology is feasible for use in early cognitive rehabilitation, since it provides information about change in a process and thus enables adjustment of the individual treatment response. Together with results indicating discernible subgroups that respond differently to rehabilitation, statistical process control could be a valid tool in clinical decision-making. This study is a starting point in understanding the rehabilitation process using a real-time measurement approach.
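
    One common statistical process control construction for individual measurements is the XmR (individuals/moving-range) chart, whose control limits are the baseline mean ± 2.66 × the mean moving range; a hedged sketch with hypothetical PASAT-like scores (the study's exact chart construction may differ):

```python
def xmr_limits(baseline):
    """Control limits for an individuals (XmR) chart from a baseline run."""
    centre = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar

def signals(series, lcl, ucl):
    """Indices where the process signals a change (point outside the limits)."""
    return [i for i, x in enumerate(series) if x < lcl or x > ucl]

# Hypothetical PASAT scores: stable baseline, then a jump (a 'rapid performer')
baseline = [30, 32, 31, 33, 30, 32]
follow_up = [31, 33, 45, 47, 48]
lcl, centre, ucl = xmr_limits(baseline)
print(signals(follow_up, lcl, ucl))  # [2, 3, 4]
```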

  19. Repeated measurements of mite and pet allergen levels in house dust over a time period of 8 years.

    PubMed

    Antens, C J M; Oldenwening, M; Wolse, A; Gehring, U; Smit, H A; Aalberse, R C; Kerkhof, M; Gerritsen, J; de Jongste, J C; Brunekreef, B

    2006-12-01

    Studies of the association between indoor allergen exposure and the development of allergic diseases have often measured allergen exposure at one point in time. We investigated the variability of house dust mite (Der p 1, Der f 1) and cat (Fel d 1) allergen in Dutch homes over a period of 8 years. Data were obtained in the Dutch PIAMA birth cohort study. Dust from the child's mattress, the parents' mattress and the living room floor was collected at four points in time, when the child was 3 months, 4, 6 and 8 years old. Dust samples were analysed for Der p 1, Der f 1 and Fel d 1 by sandwich enzyme immuno assay. Mite allergen concentrations for the child's mattress, the parents' mattress and the living room floor were moderately correlated between time-points. Agreement was better for cat allergen. For Der p 1 and Der f 1 on the child's mattress, the within-home variance was close to or smaller than the between-home variance in most cases. For Fel d 1, the within-home variance was almost always smaller than the between-home variance. Results were similar for allergen levels expressed per gram of dust and allergen levels expressed per square metre of the sampled surface. Variance ratios were smaller when samples were taken at shorter time intervals than at longer time intervals. Over a period of 4 years, mite and cat allergens measured in house dust are sufficiently stable to use single measurements with confidence in epidemiological studies. The within-home variance was larger when samples were taken 8 years apart so that over such long periods, repetition of sampling is recommended.
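
    The within-home versus between-home comparison can be sketched with a simple variance decomposition over hypothetical log-allergen values (the study's variance-component models are more elaborate):

```python
from statistics import mean, pvariance

# Hypothetical ln(Der p 1) levels per home across four sampling rounds
homes = {
    "A": [2.1, 2.3, 2.0, 2.2],
    "B": [3.5, 3.6, 3.4, 3.7],
    "C": [1.0, 1.2, 0.9, 1.1],
}

home_means = [mean(v) for v in homes.values()]
between = pvariance(home_means)                       # variance of home means
within = mean(pvariance(v) for v in homes.values())   # avg within-home variance
ratio = within / between
print(f"within/between variance ratio = {ratio:.2f}")
```

A small ratio means homes are well separated relative to their own fluctuations, which is the condition under which a single measurement ranks homes reliably in an epidemiological study.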

  20. Use of a large time-compensated scintillation detector in neutron time-of-flight measurements

    DOEpatents

    Goodman, Charles D.

    1979-01-01

    A scintillator for neutron time-of-flight measurements is positioned at a desired angle with respect to the neutron beam, and as a function of the energy thereof, such that the sum of the transit times of the neutrons and photons in the scintillator is substantially independent of the points of scintillation within the scintillator. Extrapolated-zero timing is employed rather than the usual constant-fraction timing. As a result, a substantially larger scintillator can be employed, which substantially increases the data rate and shortens the experiment time.

  1. Biomarkers and biometric measures of adherence to use of ARV-based vaginal rings.

    PubMed

    Stalter, Randy M; Moench, Thomas R; MacQueen, Kathleen M; Tolley, Elizabeth E; Owen, Derek H

    2016-01-01

    Poor adherence to product use has been observed in recent trials of antiretroviral (ARV)-based oral and vaginal gel HIV prevention products, resulting in an inability to determine product efficacy. The delivery of microbicides through vaginal rings is widely perceived as a way to achieve better adherence but vaginal rings do not eliminate the adherence challenges exhibited in clinical trials. Improved objective measures of adherence are needed as new ARV-based vaginal ring products enter the clinical trial stage. To identify technologies that have potential future application for vaginal ring adherence measurement, a comprehensive literature search was conducted that covered a number of biomedical and public health databases, including PubMed, Embase, POPLINE and the Web of Science. Published patents and patent applications were also searched. Technical experts were also consulted to gather more information and help evaluate identified technologies. Approaches were evaluated as to feasibility of development and clinical trial implementation, cost and technical strength. Numerous approaches were identified through our landscape analysis and classified as either point measures or cumulative measures of vaginal ring adherence. Point measurements are those that give a measure of adherence at a particular point in time. Cumulative measures attempt to measure ring adherence over a period of time. Approaches that require modifications to an existing ring product are at a significant disadvantage, as this will likely introduce additional regulatory barriers to the development process and increase manufacturing costs. From the point of view of clinical trial implementation, desirable attributes would be high acceptance by trial participants, and little or no additional time or training requirements on the part of participants or clinic staff. 
We have identified four promising approaches as being high priority for further development based on the following measurements: intracellular drug levels, drug levels in hair, the accumulation of a vaginal analyte that diffuses into the ring, and the depletion of an intrinsic ring constituent. While some approaches show significant promise over others, it is recommended that a strategy of using complementary biometric and behavioural approaches be adopted to best understand participants' adherence to ARV-based ring products in clinical trials.

  2. Water content measurement in forest soils and decayed wood using time domain reflectometry

    Treesearch

    Andrew Gray; Thomas Spies

    1995-01-01

    The use of time domain reflectometry to measure moisture content in forest soils and woody debris was evaluated. Calibrations were developed on undisturbed soil cores from four forest stands and on point samples from decayed logs. An algorithm for interpreting irregularly shaped traces generated by the reflectometer was also developed. Two different calibration...

  3. Estimating vehicle height using homographic projections

    DOEpatents

    Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-peter

    2013-07-16

    Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.
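    The selection step can be sketched in a few lines of numpy (function names and the height-keyed homography dictionary are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 image points through a 3x3 homography."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def estimate_height(homographies, tracks):
    """homographies: dict mapping candidate height -> 3x3 homography.
    tracks: per-frame Nx2 arrays of the same salient points.
    Returns the candidate height whose homography keeps the
    inter-salient-point distances most nearly constant over time."""
    best_height, best_var = None, np.inf
    for height, H in homographies.items():
        per_frame = []
        for pts in tracks:
            mapped = apply_homography(H, pts)
            # pairwise distances between salient points in this frame
            d = np.linalg.norm(mapped[:, None, :] - mapped[None, :, :], axis=-1)
            per_frame.append(d[np.triu_indices(len(pts), k=1)])
        # variation of each inter-point distance across the time series
        variation = np.array(per_frame).var(axis=0).sum()
        if variation < best_var:
            best_height, best_var = height, variation
    return best_height
```

The homography at the correct height maps the co-planar salient points to a plane where their mutual distances stay fixed as the object moves, so the minimum-variation criterion singles it out.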

  4. Exploratory study of serum ubiquitin carboxyl-terminal esterase L1 and glial fibrillary acidic protein for outcome prognostication after pediatric cardiac arrest.

    PubMed

    Fink, Ericka L; Berger, Rachel P; Clark, Robert S B; Watson, R Scott; Angus, Derek C; Panigrahy, Ashok; Richichi, Rudolph; Callaway, Clifton W; Bell, Michael J; Mondello, Stefania; Hayes, Ronald L; Kochanek, Patrick M

    2016-04-01

    Brain injury is the leading cause of morbidity and death following pediatric cardiac arrest. Serum biomarkers of brain injury may assist in outcome prognostication. The objectives of this study were to evaluate the properties of serum ubiquitin carboxyl-terminal esterase-L1 (UCH-L1) and glial fibrillary acidic protein (GFAP) to classify outcome in pediatric cardiac arrest. Single center prospective study. Serum biomarkers were measured at 2 time points during the initial 72 h in children after cardiac arrest (n=19) and once in healthy children (controls, n=43). We recorded demographics and details of the cardiac arrest and resuscitation. We determined the associations between serum biomarker concentrations and Pediatric Cerebral Performance Category (PCPC) at 6 months (favorable (PCPC 1-3) or unfavorable (PCPC 4-6)). The initial assessment (time point 1) occurred at a median (IQR) of 10.5 (5.5-17.0)h and the second assessment (time point 2) at 59.0 (54.5-65.0)h post-cardiac arrest. Serum UCH-L1 was higher among children following cardiac arrest than among controls at both time points (p<0.05). Serum GFAP in subjects with unfavorable outcome was higher at time point 2 than in controls (p<0.05). Serum UCH-L1 at time point 1 (AUC 0.782) and both UCH-L1 and GFAP at time point 2 had good classification accuracy for outcome (AUC 0.822 and 0.796), p<0.05 for all. Preliminary data suggest that serum UCH-L1 and GFAP may be of use to prognosticate outcome after pediatric cardiac arrest at clinically-relevant time points and should be validated prospectively. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Comparing Change in Anterior Curvature after Corneal Cross-Linking Using Scanning-Slit and Scheimpflug Technology.

    PubMed

    Lang, Paul Z; Thulasi, Praneetha; Khandelwal, Sumitra S; Hafezi, Farhad; Randleman, J Bradley

    2018-05-02

    To evaluate the correlation between anterior axial curvature difference maps following corneal cross-linking (CXL) for progressive keratoconus obtained from Scheimpflug-based tomography and Placido-based topography. Between-devices reliability analysis of randomized clinical trial data. Corneal imaging was collected at a single center pre-operatively and at 3, 6, and 12 months post-operatively using Scheimpflug-based tomography (Pentacam, Oculus Inc., Lynnwood, WA) and scanning-slit, Placido-based topography (Orbscan II, Bausch & Lomb, Rochester, NY) in patients with progressive keratoconus receiving standard protocol CXL (3 mW/cm2 for 30 minutes). Regularization index (RI), absolute maximum keratometry (K Max), and change in K Max (ΔK Max) were compared between the two devices at each time point. 51 eyes from 36 patients were evaluated at all time points. K Max values were significantly different at all time points [56.01±5.3D Scheimpflug vs. 55.04±5.1D scanning-slit pre-operatively (p=0.003); 54.58±5.3D Scheimpflug vs. 53.12±4.9D scanning-slit at 12 months (p<0.0001)] but strongly correlated between devices (r=0.90-0.93) at all time points. The devices were not significantly different at any time point for either ΔK Max or RI but were poorly correlated at all time points (r=0.41-0.53 for ΔK Max, r=0.29-0.48 for RI). At 12 months, the 95% LOA was 7.51D for absolute K Max, 8.61D for ΔK Max, and 19.86D for RI. Measurements using Scheimpflug and scanning-slit Placido-based technology are correlated but not interchangeable. Both devices appear reasonable for separately monitoring the cornea's response to CXL; however, caution should be used when comparing results obtained with one measuring technology to the other. Copyright © 2018 Elsevier Inc. All rights reserved.
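    The 95% LOA figures above are Bland-Altman limits of agreement; a minimal sketch of the standard computation (mean difference ± 1.96 SD of the paired between-device differences; how the paper condenses the interval to a single number is an assumption here):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired measurements
    of the same eyes by two devices."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    half_width = 1.96 * d.std(ddof=1)   # 1.96 SD of the differences
    return d.mean() - half_width, d.mean() + half_width
```

Two devices can correlate strongly yet disagree systematically; the limits of agreement, unlike r, expose the size of that disagreement in the measurement's own units.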

  7. Heat-Transfer Measurements on a 5.5- Inch-Diameter Hemispherical Concave Nose in Free Flight at Mach Numbers up to 6.6

    NASA Technical Reports Server (NTRS)

    Levine, Jack; Rumsey, Charles B.

    1958-01-01

    The aerodynamic heat transfer to a hemispherical concave nose has been measured in free flight at Mach numbers from 3.5 to 6.6 with corresponding Reynolds numbers based on nose diameter from 7.4 x 10^6 to 14 x 10^6. Over the test Mach number range the heating on the cup nose, expressed as a ratio to the theoretical stagnation-point heating on a hemisphere nose of the same diameter, varied from 0.05 to 0.13 at the stagnation point of the cup, was approximately 0.1 at other locations within 40 deg of the stagnation point, and varied from 0.6 to 0.8 just inside the lip where the highest heating rates occurred. At a Mach number of 5 the total heat input integrated over the surface of the cup nose including the lip was 0.55 times the theoretical value for a hemisphere nose with laminar boundary layer and 0.76 times that for a flat face. The heating at the stagnation point was approximately 1/5 as great as steady-flow tunnel results. Extremely high heating rates at the stagnation point (on the order of 30 times the stagnation-point values of the present test), which have occurred in conjunction with unsteady oscillatory flow around cup noses in wind-tunnel tests at Mach and Reynolds numbers within the present test range, were not observed.

  8. Reproducibility of electronic tooth colour measurements.

    PubMed

    Ratzmann, Anja; Klinke, Thomas; Schwahn, Christian; Treichel, Anja; Gedrange, Tomasz

    2008-10-01

    Clinical methods of investigation, such as tooth colour determination, should be simple, quick and reproducible. The determination of tooth colours usually relies upon manual comparison of a patient's tooth colour with a colour ring. After some days, however, measurement results frequently lack unequivocal reproducibility. This study aimed to examine an electronic method for reliable colour measurement. The colours of the teeth 14 to 24 were determined by three different examiners in 10 subjects using the colour measuring device Shade Inspector. In total, 12 measurements per tooth were taken. Two measurement time points were scheduled, namely at study onset (T1) and after 6 months (T2). At either time point, two measurement series per subject were taken by the different examiners at 2-week intervals. The inter-examiner and intra-examiner agreement of the measurement results was assessed. The concordance for lightness and colour intensity (saturation) was represented by the intra-class correlation coefficient. The categorical variable colour shade (hue) was assessed using the kappa statistic. The study results show that tooth colour can be measured independently of the examiner. Good agreement was found between the examiners.
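    For the categorical hue variable, chance-corrected agreement is typically Cohen's kappa; a minimal two-rater sketch (the study's exact multi-examiner, repeated-series design is not reproduced):

```python
import numpy as np

def cohens_kappa(rater1, rater2, categories):
    """Chance-corrected agreement between two raters' shade calls:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    p_obs = np.mean(r1 == r2)
    # chance agreement from each rater's marginal category frequencies
    p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_obs - p_chance) / (1 - p_chance)
```

Kappa of 1 is perfect agreement, 0 is agreement no better than chance, so it is a stricter criterion than raw percent agreement for a shade scale with few categories.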

  9. Determination of velocity correction factors for real-time air velocity monitoring in underground mines.

    PubMed

    Zhou, Lihong; Yuan, Liming; Thomas, Rick; Iannacchione, Anthony

    2017-12-01

    When air velocity sensors are installed in the mining industry for real-time airflow monitoring, a problem arises: how does the air velocity monitored at a fixed location correspond to the average air velocity, which, together with the cross-sectional area, determines the volume flow rate of air in an entry? Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction factors of the sensor-measured air velocity to the average air velocity at cross sections are still lacking. A comprehensive airflow measurement was made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods: single-point reading, moving traverse, and fixed-point traverse. The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer®. The correction factors at each measuring station for both the centerline and the sensor location were calculated and are discussed.
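    The conversion itself reduces to a ratio; a minimal sketch, assuming the fixed-point traverse samples an equal-area grid so that a plain mean approximates the cross-sectional average velocity:

```python
import numpy as np

def correction_factor(grid_velocities, sensor_velocity):
    """Ratio of the cross-sectional average air velocity (equal-area
    grid traverse) to the reading at the fixed sensor location."""
    return float(np.mean(grid_velocities)) / sensor_velocity

def volume_flow_rate(sensor_velocity, factor, area):
    """Entry volume flow rate from a corrected single-point reading."""
    return factor * sensor_velocity * area
```

Once the factor is calibrated for a station, the monitored point reading alone yields the quantity of air moving through the entry.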

  11. [Automated analyzer of enzyme immunoassay].

    PubMed

    Osawa, S

    1995-09-01

    Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, the detection method, the number of tests per unit time, and the analytical time and speed per run. In practice, it is important to consider points such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can randomly access and measure samples. I describe recent advances in automated analyzers, reviewing their labeled antibodies and enzymes, detection methods, number of tests per unit time, and analytical time and speed per test.

  12. Point discharge current measurements beneath dust devils

    USDA-ARS?s Scientific Manuscript database

    We document for the first time observations of point discharge currents under dust devils using a novel compact sensor deployed in summer 2016 at the USDA-ARS Jornada Experimental Range in New Mexico, USA. A consistent signature is noted in about a dozen events seen over 40 days, with a positive cur...

  13. Compensation for positioning error of industrial robot for flexible vision measuring system

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    Positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Present compensation methods for positioning error based on a kinematic model of the robot have a significant limitation: they are not effective throughout the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with the single camera and 0.031 mm with dual cameras. We conclude that the single-camera algorithm needs improvement for higher accuracy, whereas the accuracy of the dual-camera method is suitable for application.

  14. RADIATION WAVE DETECTOR

    DOEpatents

    Wouters, L.F.

    1958-10-28

    The detection of the shape and amplitude of a radiation wave is discussed, particularly an apparatus for automatically indicating at spaced intervals of time the radiation intensity at a fixed point as a measure of a radiation wave passing the point. The apparatus utilizes a number of photomultiplier tubes surrounding a scintillation-type detector. For obtaining time-spaced signals proportional to radiation at predetermined intervals, the photomultiplier tubes are actuated in sequence following detector incidence of a predetermined radiation level by electronic means. The time-spaced signals so produced are then separately amplified and relayed to recording means.

  15. Chirp-Z analysis for sol-gel transition monitoring.

    PubMed

    Martinez, Loïc; Caplain, Emmanuel; Serfaty, Stéphane; Griesmar, Pascal; Gouedard, Gérard; Gindre, Marcel

    2004-04-01

    Gelation is a complex reaction that transforms a liquid medium into a solid one: the gel. In the gel state, some gel materials (DMAP) have the singular property of ringing in an audible frequency range when a pulse is applied. Before the gelation point, no transmission of slow waves is observed; after the gelation point, the speed of sound in the gel rapidly increases from 0.1 to 10 m/s. The time evolution of the speed of sound can be measured, in the frequency domain, by following the frequency spacing of the resonance peaks from the Synchronous Detection (SD) measurement method. Unfortunately, due to a constant frequency sampling rate, the relative error for low speeds (0.1 m/s) is 100%. In order to maintain a low, constant relative error over the whole range of the speed's time evolution, the Chirp-Z Transform (CZT) is used. This operation transforms a time-variant signal into a time-invariant one using only a time-dependent stretching factor (S). In the frequency domain, the CZT enables us to stretch each collected spectrum from the time signals. Blind identification of the S factor gives the complete time-evolution law of the speed of sound. Moreover, this method proves that the frequency bandwidth follows the same time law. These results point out that the minimum wavelength stays constant and that it depends only on the gel.
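    The peak-spacing measurement can be sketched as follows, assuming the plane-resonator relation c = 2·L·Δf between the speed of sound and the resonance spacing (the actual geometry factor of the experiment's gel cell may differ):

```python
import numpy as np

def speed_from_resonances(freqs, spectrum, cell_length):
    """Estimate the speed of sound from the mean spacing of resonance
    peaks in a transmission spectrum, via c = 2 * L * delta_f."""
    mag = np.abs(spectrum)
    # crude peak picking: strict local maxima above the mean level
    peaks = [i for i in range(1, len(mag) - 1)
             if mag[i] > mag[i - 1] and mag[i] > mag[i + 1]
             and mag[i] > mag.mean()]
    delta_f = np.mean(np.diff(freqs[peaks]))   # mean resonance spacing
    return 2.0 * cell_length * delta_f
```

As the gel stiffens and the peaks spread apart, re-running this estimate on successive spectra traces the time evolution of the speed of sound; the CZT's role in the paper is to rescale each spectrum so this estimate keeps a constant relative error at low speeds.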

  16. Novel Optical Technique Developed and Tested for Measuring Two-Point Velocity Correlations in Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Zimmerli, Gregory A.; Goldburg, Walter I.

    2002-01-01

    A novel technique for characterizing turbulent flows was developed and tested at the NASA Glenn Research Center. The work is being done in collaboration with the University of Pittsburgh, through a grant from the NASA Microgravity Fluid Physics Program. The technique we are using, Homodyne Correlation Spectroscopy (HCS), is a laser-light-scattering technique that measures the Doppler frequency shift of light scattered from microscopic particles in the fluid flow. Whereas Laser Doppler Velocimetry gives a local (single-point) measurement of the fluid velocity, the HCS technique measures correlations between fluid velocities at two separate points in the flow at the same instant of time. Velocity correlations in the flow field are of fundamental interest to turbulence researchers and are of practical importance in many engineering applications, such as aeronautics.

  17. An analysis of neural receptive field plasticity by point process adaptive filtering

    PubMed Central

    Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor

    2001-01-01

    Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
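    For a log-linear conditional intensity the instantaneous steepest-descent update takes a one-line form; a minimal sketch with a generic model λ(t) = exp(θ·x(t)) (an assumption for illustration; the paper's place-field parameterization is not reproduced):

```python
import numpy as np

def point_process_adaptive_filter(spikes, covariates, eps=0.05, dt=0.001):
    """Track theta by instantaneous steepest descent on the point-process
    log likelihood: theta <- theta + eps * (dN - lambda*dt) * x.
    spikes: 0/1 counts per time bin; covariates: bins x features array.
    Returns the trajectory of theta estimates, one row per bin."""
    theta = np.zeros(covariates.shape[1])
    path = np.empty_like(covariates, dtype=float)
    for t, (dN, x) in enumerate(zip(spikes, covariates)):
        lam = np.exp(theta @ x)                    # conditional intensity
        theta = theta + eps * (dN - lam * dt) * x  # gradient of log likelihood
        path[t] = theta
    return path
```

With a constant covariate the estimate settles near log(rate), since the update is zero on average when exp(theta)·dt equals the spiking probability per bin; with time-varying covariates the same update tracks drifting receptive-field parameters bin by bin.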

  18. The point of no return: A fundamental limit on the ability to control thought and action.

    PubMed

    Logan, Gordon D

    2015-01-01

    Bartlett (1958. Thinking. New York: Basic Books) described the point of no return as a point of irrevocable commitment to action, which was preceded by a period of gradually increasing commitment. As such, the point of no return reflects a fundamental limit on the ability to control thought and action. I review the literature on the point of no return, taking three perspectives. First, I consider the point of no return from the perspective of the controlled act, as a locus in the architecture and anatomy of the underlying processes. I review experiments from the stop-signal paradigm that suggest that the point of no return is located late in the response system. Then I consider the point of no return from the perspective of the act of control that tries to change the controlled act before it becomes irrevocable. From this perspective, the point of no return is a point in time that provides enough "lead time" for the act of control to take effect. I review experiments that measure the response time to the stop signal as the lead time required for response inhibition in the stop-signal paradigm. Finally, I consider the point of no return in hierarchically controlled tasks, in which there may be many points of no return at different levels of the hierarchy. I review experiments on skilled typing that suggest different points of no return for the commands that determine what is typed and the countermands that inhibit typing, with increasing commitment to action the lower the level in the hierarchy. I end by considering the point of no return in perception and thought as well as action.

  19. Experimental measurement and modeling of snow accumulation and snowmelt in a mountain microcatchment

    NASA Astrophysics Data System (ADS)

    Danko, Michal; Krajčí, Pavel; Hlavčo, Jozef; Kostka, Zdeněk; Holko, Ladislav

    2016-04-01

    Fieldwork is a very useful source of data in all geosciences. This naturally applies also to snow hydrology. Snow accumulation and snowmelt are spatially very heterogeneous, especially in non-forested mountain environments, and direct field measurements provide the most accurate information about them. Quantification and understanding of the processes that cause these spatial differences are crucial in predicting and modelling runoff volumes in the spring snowmelt period. This study presents possibilities for detailed measurement and modeling of snow cover characteristics in a mountain experimental microcatchment located in the northern part of Slovakia in the Western Tatra mountains. The catchment area is 0.059 km2 and the mean altitude is 1500 m a.s.l. The measurement network consists of 27 snow poles, 3 small snow lysimeters, a discharge measurement device and a standard automatic weather station. Snow depth and snow water equivalent (SWE) were measured twice a month near the snow poles. These measurements were used to estimate spatial differences in the accumulation of SWE. Snowmelt outflow was measured by the small snow lysimeters. Measurements were performed in winter 2014/2015. Snow water equivalent variability was very high in such a small area: differences between particular measuring points reached 600 mm at the time of maximum SWE. The results indicated good performance of the snow lysimeters in identifying snowmelt timing. The increase of snowmelt measured by the snow lysimeters had the same timing as the increase in discharge at the catchment outlet and as the increase in air temperature above the freezing point. The measured data were afterwards used in the distributed rainfall-runoff model MIKE SHE. Several methods were used for the spatial distribution of precipitation and snow water equivalent. The model was able to simulate snow water equivalent and snowmelt timing at a daily step reasonably well. Simulated discharges were slightly overestimated in later spring.

  20. LINE-1 methylation in plasma DNA as a biomarker of activity of DNA methylation inhibitors in patients with solid tumors.

    PubMed

    Aparicio, Ana; North, Brittany; Barske, Lindsey; Wang, Xuemei; Bollati, Valentina; Weisenberger, Daniel; Yoo, Christine; Tannir, Nizar; Horne, Erin; Groshen, Susan; Jones, Peter; Yang, Allen; Issa, Jean-Pierre

    2009-04-01

    Multiple clinical trials are investigating the use of the DNA methylation inhibitors azacitidine and decitabine for the treatment of solid tumors. Clinical trials in hematological malignancies have shown that optimal activity does not occur at their maximum tolerated doses, but selection of an optimal biological dose and schedule for use in solid tumor patients is hampered by the difficulty of obtaining tumor tissue to measure their activity. Here we investigate the feasibility of using plasma DNA to measure the demethylating activity of the DNA methylation inhibitors in patients with solid tumors. We compared four methods to measure LINE-1 and MAGE-A1 promoter methylation in T24 and HCT116 cancer cells treated with decitabine and selected Pyrosequencing for its greater reproducibility and higher signal-to-noise ratio. We then obtained DNA from plasma, peripheral blood mononuclear cells, buccal mucosa cells and saliva from ten patients with metastatic solid tumors at two different time points, without any intervening treatment. DNA methylation measurements were not significantly different between time point 1 and time point 2 in patient samples. We conclude that measurement of LINE-1 methylation in DNA extracted from the plasma of patients with advanced solid tumors, using Pyrosequencing, is feasible and has low within-patient variability. Ongoing studies will determine whether changes in LINE-1 methylation in plasma DNA occur as a result of treatment with DNA methylation inhibitors and parallel changes in tumor tissue DNA.

  1. Longitudinal changes in MRI markers in a reversible unilateral ureteral obstruction mouse model: preliminary experience.

    PubMed

    Haque, Muhammad E; Franklin, Tammy; Bokhary, Ujala; Mathew, Liby; Hack, Bradley K; Chang, Anthony; Puri, Tipu S; Prasad, Pottumarthi V

    2014-04-01

    To evaluate longitudinal changes in renal oxygenation and diffusion measurements in a model of reversible unilateral ureteral obstruction (rUUO), which has been shown to induce chronic renal functional deficits in a strain-dependent way: C57BL/6 mice show a higher degree of functional deficit compared with BALB/c mice. Because hypoxia and the development of fibrosis are associated with chronic kidney diseases and are responsible for progression, we hypothesized that MRI measurements would be able to monitor the longitudinal changes in this model and would show strain-dependent differences in response. Blood oxygenation level dependent (BOLD) and diffusion MRI measurements were performed at three time points over a 30-day period in mice with rUUO. The studies were performed on a 4.7T scanner with the mice anesthetized with isoflurane, before UUO and at 2 and 28 days post-release of 6 days of obstruction. We found that at the early time point (∼2 days after releasing the obstruction), the relative oxygenation in C57BL/6 mice was lower compared with BALB/c. Diffusion measurements were lower at this time point and reached statistical significance in BALB/c mice. These methods may prove valuable in better understanding the natural progression of kidney diseases and in evaluating novel interventions to limit progression. Copyright © 2013 Wiley Periodicals, Inc.

  2. Chaos control in delayed phase space constructed by the Takens embedding theory

    NASA Astrophysics Data System (ADS)

    Hajiloo, R.; Salarieh, H.; Alasty, A.

    2018-01-01

    In this paper, the problem of chaos control in discrete-time chaotic systems with unknown governing equations and limited measurable states is investigated. Using the time-series of only one measurable state, an algorithm is proposed to stabilize unstable fixed points. The approach consists of three steps: first, using Takens embedding theory, a delayed phase space preserving the topological characteristics of the unknown system is reconstructed. Second, a dynamic model is identified by recursive least squares method to estimate the time-series data in the delayed phase space. Finally, based on the reconstructed model, an appropriate linear delayed feedback controller is obtained for stabilizing unstable fixed points of the system. Controller gains are computed using a systematic approach. The effectiveness of the proposed algorithm is examined by applying it to the generalized hyperchaotic Henon system, prey-predator population map, and the discrete-time Lorenz system.
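    The first step, reconstructing a delayed phase space from one measurable state, can be sketched as follows (embedding dimension and delay are user-chosen parameters; the identification and control steps of the paper are omitted):

```python
import numpy as np

def delay_embed(series, dim, tau=1):
    """Takens delay embedding: row i is (s[i], s[i+tau], ..., s[i+(dim-1)*tau]),
    so each row is one point of the reconstructed phase space."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[k * tau : k * tau + n] for k in range(dim)])
```

For a generic observable, an embedding dimension larger than twice the attractor dimension preserves the topology of the unknown system's attractor (Takens' theorem), which is what licenses identifying a model and a delayed-feedback controller in the reconstructed space rather than in the unknown original coordinates.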

  3. Auto covariance computer

    NASA Technical Reports Server (NTRS)

    Hepner, T. E.; Meyers, J. F. (Inventor)

    1985-01-01

    A laser velocimeter covariance processor which calculates the autocovariance and cross-covariance functions for a turbulent flow field, based on Poisson-sampled measurements in time from a laser velocimeter, is described. The device will process a block of data up to 4096 data points in length and return a 512-point covariance function with 48-bit resolution, along with a 512-point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface with and be controlled by a minicomputer, from which the data are received and to which the results are returned. A typical 4096-point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
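The underlying computation (accumulating lagged products into fixed-width slots and normalizing each slot by the interarrival-time histogram) can be mimicked in software. This is a minimal sketch of the slotting idea, not the device's actual firmware; the test signal, bin count, and lag range are arbitrary choices.

```python
import numpy as np

def slotted_autocovariance(t, u, max_lag, nbins):
    """Slot lagged products of fluctuations into nbins lag bins and
    normalize each bin by its pair count (the interarrival histogram)."""
    up = u - u.mean()                         # fluctuating component
    acc = np.zeros(nbins)
    cnt = np.zeros(nbins, dtype=int)
    for i in range(len(t)):                   # t must be sorted
        for j in range(i, len(t)):
            lag = t[j] - t[i]
            if lag >= max_lag:
                break
            b = int(lag / max_lag * nbins)
            acc[b] += up[i] * up[j]
            cnt[b] += 1
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0), cnt

rng = np.random.default_rng(0)
t = np.cumsum(rng.exponential(0.01, 2000))    # Poisson-like sample times
u = np.sin(2 * np.pi * t)                     # test signal, period 1 s
R, counts = slotted_autocovariance(t, u, max_lag=0.5, nbins=50)
# The zero-lag bin approximates the signal variance (0.5 for a unit sine).
```

Because the samples arrive at random times, the per-bin pair count varies; dividing by it is exactly the histogram normalization the abstract describes.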

  4. Measurement of change in health status with Rasch models.

    PubMed

    Anselmi, Pasquale; Vidotto, Giulio; Bettinardi, Ornella; Bertolotti, Giorgio

    2015-02-07

    The traditional approach to the measurement of change presents important drawbacks (no information at the individual level, ordinal scores, variance of the measurement instrument across time points), which Rasch models overcome. The article aims to illustrate the features of the measurement of change with Rasch models. To illustrate the measurement of change using Rasch models, the quantitative data of a longitudinal study of heart-surgery patients (N = 98) were used. The scale "Perception of Positive Change" was used as an example of a measurement instrument. All patients underwent cardiac rehabilitation, individual psychological intervention, and educational intervention. Nineteen patients also attended progressive muscle relaxation group trainings. The scale was administered before and after the interventions. Three Rasch approaches were used. Two separate analyses were run on the data from the two time points to test the invariance of the instrument. An analysis was run on the stacked data from both time points to measure change in a common frame of reference. Results of the latter analysis were compared with those of an analysis that removed the influence of local dependency on patient measures. Statistics t, χ² and F were used for comparing the patient and item measures estimated in the Rasch analyses (a priori α = .05). Infit, Outfit, R and item Strata were used for investigating Rasch model fit, reliability, and validity of the instrument. Data of all 98 patients were included in the analyses. The instrument was reliable, valid, and substantively unidimensional (Infit, Outfit < 2 for all items, R = .84, item Strata range = 3.93-6.07). Changes in the functioning of the instrument occurred across the two time points, which prevented the use of the two separate analyses to unambiguously measure change. Local dependency had a negligible effect on patient measures (p ≥ .8674). Thirteen patients improved, whereas 3 worsened. 
The patients who attended the relaxation group trainings did not report greater improvement than those who did not (p = .1007). Rasch models represent a valid framework for the measurement of change and a useful complement to traditional approaches.

  5. Blood glucose regulation during living-donor liver transplant surgery.

    PubMed

    Gedik, Ender; İlksen Toprak, Hüseyin; Koca, Erdinç; Şahin, Taylan; Özgül, Ülkü; Ersoy, Mehmet Özcan

    2015-04-01

    The goal of this study was to compare the effects of 2 different regimens on blood glucose levels of living-donor liver transplant recipients. The study participants were randomly allocated to the dextrose in water plus insulin infusion group (group 1, n = 60) or the dextrose in water infusion group (group 2, n = 60) using a sealed envelope technique. Blood glucose levels were measured 3 times during each phase. When the blood glucose level of a patient exceeded the target level, extra insulin was administered via a different intravenous route. The following patient and procedural characteristics were recorded: age, sex, height, weight, body mass index, end-stage liver disease, Model for End-Stage Liver Disease score, total anesthesia time, total surgical time, and number of patients who received an extra bolus of insulin. The following laboratory data were measured pre- and postoperatively: hemoglobin, hematocrit, platelet count, prothrombin time, international normalized ratio, potassium, creatinine, total bilirubin, and albumin. No hypoglycemia was noted. The recipients exhibited statistically significant differences in blood glucose levels during the dissection and neohepatic phases. Blood glucose levels at every time point were significantly different compared with the first dissection time point in group 1. Excluding the first and second anhepatic time points, blood glucose levels were significantly different compared with the first dissection time point in group 2 (P < .05). We concluded that dextrose in water infusion alone may be more effective and result in safer blood glucose levels than dextrose in water plus insulin infusion for living-donor liver transplant recipients. Exogenous continuous insulin administration may induce hyperglycemic attacks, especially during the neohepatic phase of living-donor liver transplant surgery. 
Further prospective studies that include homogeneous patient subgroups and diabetic recipients are needed to support the use of dextrose in water infusion without insulin.

  6. In vivo microcomputed tomography evaluation of rat alveolar bone and root resorption during orthodontic tooth movement.

    PubMed

    Ru, Nan; Liu, Sean Shih-Yao; Zhuang, Li; Li, Song; Bai, Yuxing

    2013-05-01

    To observe the real-time microarchitecture changes of the alveolar bone and root resorption during orthodontic treatment. A 10 g force was delivered to move the maxillary left first molars mesially in twenty 10-week-old rats for 14 days. The first molar and adjacent alveolar bone were scanned using in vivo microcomputed tomography at the following time points: days 0, 3, 7, and 14. Microarchitecture parameters, including bone volume fraction, structure model index, trabecular thickness, trabecular number, and trabecular separation of alveolar bone, were measured on the compression and tension sides. The total root volume was measured, and the resorption crater volume at each time point was calculated. Univariate repeated-measures analysis of variance with Bonferroni corrections was performed to compare the differences in each parameter between time points, with the significance level set at P < .05. From day 3 to day 7, bone volume fraction, structure model index, trabecular thickness, and trabecular separation decreased significantly on the compression side, but the same parameters increased significantly on the tension side from day 7 to day 14. Root resorption volume of the mesial root increased significantly on day 7 of orthodontic loading. Real-time root and bone resorption during orthodontic movement can be observed in 3 dimensions using in vivo micro-CT. Alveolar bone resorption and root resorption were observed mostly in the apical third on day 7 on the compression side; bone formation was observed on day 14 on the tension side during orthodontic tooth movement.

  7. Exposure to traffic pollution, acute inflammation and autonomic response in a panel of car commuters.

    PubMed

    Sarnat, Jeremy A; Golan, Rachel; Greenwald, Roby; Raysoni, Amit U; Kewada, Priya; Winquist, Andrea; Sarnat, Stefanie E; Dana Flanders, W; Mirabelli, Maria C; Zora, Jennifer E; Bergin, Michael H; Yip, Fuyuen

    2014-08-01

    Exposure to traffic pollution has been linked to numerous adverse health endpoints. Despite this, limited data examining traffic exposures during realistic commutes and acute response exist. We conducted the Atlanta Commuters Exposures (ACE-1) Study, an extensive panel-based exposure and health study, to measure chemically-resolved in-vehicle exposures and corresponding changes in acute oxidative stress, lipid peroxidation, pulmonary and systemic inflammation, and autonomic response. We recruited 42 adults (21 with and 21 without asthma) to conduct two 2-h scripted highway commutes during morning rush hour in the metropolitan Atlanta area. A suite of in-vehicle particulate components was measured in the subjects' private vehicles. Biomarker measurements were conducted before, during, and immediately after the commutes and in 3 hourly intervals after commutes. At measurement time points within 3 h after the commute, we observed mild to pronounced elevations relative to baseline in exhaled nitric oxide, C-reactive protein, and exhaled malondialdehyde, indicative of pulmonary and systemic inflammation and oxidative stress initiation, as well as decreases relative to baseline levels in the time-domain heart-rate variability parameters, SDNN and rMSSD, indicative of autonomic dysfunction. We did not observe any detectable changes in lung function measurements (FEV1, FVC), the frequency-domain heart-rate variability parameter, or other systemic biomarkers of vascular injury. Water-soluble organic carbon was associated with changes in eNO at all post-commute time points (p < 0.0001). Our results point to measurable changes in pulmonary and autonomic biomarkers following a scripted 2-h highway commute. Copyright © 2014 Elsevier Inc. All rights reserved.
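For reference, the two time-domain heart-rate variability parameters named above are simple statistics of successive RR intervals; a minimal sketch with made-up interval values:

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """SDNN = sample standard deviation of RR intervals; rMSSD = root mean
    square of successive RR-interval differences (both in ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

sdnn, rmssd = hrv_time_domain([800, 810, 790, 805])   # made-up intervals, ms
```

In practice the intervals come from an ECG R-peak detector, and ectopic beats are excluded before computing either statistic.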

  8. Exposure to traffic pollution, acute inflammation and autonomic response in a panel of car commuters

    PubMed Central

    Sarnat, Jeremy A.; Golan, Rachel; Greenwald, Roby; Raysoni, Amit U.; Kewada, Priya; Winquist, Andrea; Sarnat, Stefanie E.; Flanders, W. Dana; Mirabelli, Maria C.; Zora, Jennifer E.; Bergin, Michael H.; Yip, Fuyuen

    2015-01-01

    Background Exposure to traffic pollution has been linked to numerous adverse health endpoints. Despite this, limited data examining traffic exposures during realistic commutes and acute response exist. Objectives We conducted the Atlanta Commuters Exposures (ACE-1) Study, an extensive panel-based exposure and health study, to measure chemically-resolved in-vehicle exposures and corresponding changes in acute oxidative stress, lipid peroxidation, pulmonary and systemic inflammation, and autonomic response. Methods We recruited 42 adults (21 with and 21 without asthma) to conduct two 2-h scripted highway commutes during morning rush hour in the metropolitan Atlanta area. A suite of in-vehicle particulate components was measured in the subjects’ private vehicles. Biomarker measurements were conducted before, during, and immediately after the commutes and in 3 hourly intervals after commutes. Results At measurement time points within 3 h after the commute, we observed mild to pronounced elevations relative to baseline in exhaled nitric oxide, C-reactive protein, and exhaled malondialdehyde, indicative of pulmonary and systemic inflammation and oxidative stress initiation, as well as decreases relative to baseline levels in the time-domain heart-rate variability parameters, SDNN and rMSSD, indicative of autonomic dysfunction. We did not observe any detectable changes in lung function measurements (FEV1, FVC), the frequency-domain heart-rate variability parameter, or other systemic biomarkers of vascular injury. Water-soluble organic carbon was associated with changes in eNO at all post-commute time points (p < 0.0001). Conclusions Our results point to measurable changes in pulmonary and autonomic biomarkers following a scripted 2-h highway commute. PMID:24906070

  9. Optical fiber biocompatible sensors for monitoring selective treatment of tumors via thermal ablation

    NASA Astrophysics Data System (ADS)

    Tosi, Daniele; Poeggel, Sven; Dinesh, Duraibabu B.; Macchi, Edoardo G.; Gallati, Mario; Braschi, Giovanni; Leen, Gabriel; Lewis, Elfed

    2015-09-01

    Thermal ablation (TA) is an interventional procedure for selective treatment of tumors that enables minimally invasive outpatient care. The lack of real-time control of TA is one of its main weaknesses. Miniature, biocompatible optical fiber sensors are applied to achieve dense, multi-parameter monitoring that can substantially improve the control of TA. Ex vivo measurements performed on porcine liver tissue are reported, reproducing radiofrequency ablation of hepatocellular carcinoma. Our measurement campaign has a two-fold focus: (1) dual pressure-temperature measurement with a single probe; (2) distributed thermal measurement to estimate point-by-point cell mortality.

  10. Collective efficacy: How is it conceptualized, how is it measured, and does it really matter for understanding perceived neighborhood crime and disorder?

    PubMed Central

    Hipp, John R.

    2016-01-01

    Building on the insights of the self-efficacy literature, this study highlights that collective efficacy is a collective perception that comes from a process. This study emphasizes that 1) there is updating, as there are feedback effects from success or failure by the group to the perception of collective efficacy, and 2) this updating raises the importance of accounting for members' degree of uncertainty regarding neighborhood collective efficacy. Using a sample of 113 block groups in three rural North Carolina counties, this study finds evidence of updating as neighborhoods perceiving more crime or disorder reported less collective efficacy at the next time point. Furthermore, collective efficacy was only associated with lower perceived disorder at the next time point when it occurred in highly cohesive neighborhoods. Finally, neighborhoods with more perceived disorder and uncertainty regarding collective efficacy at one time point had lower levels of collective efficacy at the next time point, illustrating the importance of uncertainty along with updating. PMID:27069285

  11. Proof of Concept: Design and Initial Evaluation of a Device to Measure Gastrointestinal Transit Time.

    PubMed

    Wagner, Robert H; Savir-Baruch, Bital; Halama, James R; Venu, Mukund; Gabriel, Medhat S; Bova, Davide

    2017-09-01

    Chronic constipation and gastrointestinal motility disorders constitute a large part of a gastroenterology practice and have a significant impact on a patient's quality of life and lifestyle. In most cases, medications are prescribed to alleviate symptoms without there being an objective measurement of response. Commonly used investigations of gastrointestinal transit times are currently limited to radiopaque markers or electronic capsules. Repeated use of these techniques is limited because of the radiation exposure and the significant cost of the devices. We present the proof of concept for a new device to measure gastrointestinal transit time using commonly available and inexpensive materials with only a small amount of radiotracer. Methods: We assembled gelatin capsules containing a ⁶⁷Ga-citrate-radiolabeled grain of rice embedded in paraffin for use as a point-source transit device. It was tested for stability in vitro and subsequently was given orally to 4 healthy volunteers and 10 patients with constipation or diarrhea. Imaging was performed at regular intervals until the device was excreted. Results: The device remained intact and visible as a point source in all subjects until excretion. When used along with a diary of bowel movement times and dates, the device could determine the total transit time. The device could be visualized either alone or in combination with a barium small-bowel follow-through study or a gastric emptying study. Conclusion: The use of a point-source transit device for the determination of gastrointestinal transit time is a feasible alternative to other methods. The device is inexpensive and easy to assemble, requires only a small amount of radiotracer, and remains inert throughout the gastrointestinal tract, allowing for accurate determination of gastrointestinal transit time. Further investigation of the device is required to establish optimum imaging parameters and reference values. 
Measurements of gastrointestinal transit time may be useful in managing patients with dysmotility and in selecting the appropriate pharmaceutical treatment. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  12. Graz kHz SLR LIDAR: first results

    NASA Astrophysics Data System (ADS)

    Kirchner, Georg; Koidl, Franz; Kucharski, Daniel; Pachler, Walther; Seiss, Matthias; Leitgeb, Erich

    2009-05-01

    The Satellite Laser Ranging (SLR) Station Graz routinely measures distances to satellites with a 2 kHz laser, achieving an accuracy of 2-3 mm. Using this available equipment, we developed - and added as a byproduct - a kHz SLR LIDAR for the Graz station: photons of each transmitted laser pulse are backscattered from clouds, atmospheric layers, aircraft vapor trails etc. An additional 10 cm diameter telescope - installed on our main telescope mount - and a Single-Photon Counting Module (SPCM) detect these photons. Using an ISA-bus based FPGA card - developed in Graz for the kHz SLR operation - these detection times are stored with 100 ns resolution (15 m slots in distance). Event times of any number of laser shots can be accumulated in up to 4096 counters (corresponding to > 60 km distance). The LIDAR distances are stored together with epoch time and telescope pointing information; any reflection point is therefore determined with 3D coordinates, with 15 m resolution in distance and with the angular precision of the laser telescope pointing. First test results on clouds in full daylight conditions - accumulating up to several hundred laser shots per measurement - yielded high LIDAR data rates (> 100 points per second) and excellent detection of clouds (up to 10 km distance at the moment). Our ultimate goal is to operate the LIDAR automatically and in parallel with the standard SLR measurements, during day and night, collecting LIDAR data as a byproduct and without any additional expense.
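The time-to-distance bookkeeping follows from the round trip of each pulse: a timing bin of width dt corresponds to a range slot of c·dt/2, so 100 ns bins give ≈15 m slots and 4096 of them cover >60 km. A sketch of the binning arithmetic (performed in hardware by the FPGA counter bank):

```python
import numpy as np

C = 299_792_458.0       # speed of light, m/s
DT = 100e-9             # timer resolution: 100 ns
SLOT_M = C * DT / 2.0   # round trip: one time bin corresponds to ~15 m of range
NBINS = 4096            # NBINS * SLOT_M covers ~61 km

def range_histogram(event_times_s, nbins=NBINS):
    """Accumulate photon detection epochs (seconds after laser fire)
    into fixed-width range bins, as the counter bank does in hardware."""
    bins = (np.asarray(event_times_s) / DT).astype(int)
    return np.bincount(bins[bins < nbins], minlength=nbins)

# One photon returning after ~66.75 microseconds falls in bin 667, i.e. ~10 km.
h = range_histogram([(667 + 0.5) * DT])
```

Accumulating many shots simply adds histograms, which is why cloud returns build up quickly at a 2 kHz fire rate.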

  13. Estimation of Drug Effectiveness by Modeling Three Time-dependent Covariates: An Application to Data on Cardioprotective Medications in the Chronic Dialysis Population

    PubMed Central

    Phadnis, Milind A.; Shireman, Theresa I.; Wetmore, James B.; Rigler, Sally K.; Zhou, Xinhua; Spertus, John A.; Ellerbeck, Edward F.; Mahnken, Jonathan D.

    2014-01-01

    In a population of chronic dialysis patients with an extensive burden of cardiovascular disease, estimation of the effectiveness of cardioprotective medication in the literature is based on calculation of a hazard ratio comparing the hazard of mortality for two groups (with or without drug exposure) measured at a single point in time or through the cumulative metric of proportion of days covered (PDC) on medication. Though both approaches can be modeled in a time-dependent manner using a Cox regression model, we propose a more complete time-dependent metric for evaluating cardioprotective medication efficacy. We consider that drug effectiveness is potentially the result of interactions between three time-dependent covariate measures: current drug usage status (ON versus OFF), proportion of cumulative exposure to drug at a given point in time, and the patient’s switching behavior between taking and not taking the medication. We show that modeling all three of these time-dependent measures illustrates more clearly how varying patterns of drug exposure affect drug effectiveness, which could remain obscured when modeled by the more standard single time-dependent covariate approaches. We propose that understanding the nature and directionality of these interactions will help the biopharmaceutical industry in better estimating drug efficacy. PMID:25343005
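The three time-dependent measures can be read off a patient's exposure timeline at any analysis time. The sketch below is hypothetical: the interval representation and the switching-count definition are our assumptions for illustration, not the authors' exact specification.

```python
def covariates_at(day, on_intervals):
    """Evaluate the three time-dependent measures at analysis time `day`.
    on_intervals: sorted, non-overlapping (start, stop) day ranges on drug.
    Returns (current status ON=1/OFF=0,
             cumulative proportion of days covered,
             number of ON/OFF switches observed so far)."""
    on, covered, switches = 0, 0.0, 0
    for start, stop in on_intervals:
        if start >= day:
            break                      # interval entirely in the future
        covered += min(stop, day) - start
        switches += 1                  # switched ON at `start`
        if stop <= day:
            switches += 1              # switched OFF at `stop`
        else:
            on = 1                     # still on drug at `day`
    return on, covered / day, switches

# Example: on drug days 0-10 and 20-30, evaluated on day 25:
# currently ON, 15 of 25 days covered, three switches so far.
status, pdc, switches = covariates_at(25, [(0, 10), (20, 30)])
```

In a Cox model these three values would be re-evaluated at every event time, which is what makes them time-dependent covariates rather than baseline summaries.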

  14. Estimation of Drug Effectiveness by Modeling Three Time-dependent Covariates: An Application to Data on Cardioprotective Medications in the Chronic Dialysis Population.

    PubMed

    Phadnis, Milind A; Shireman, Theresa I; Wetmore, James B; Rigler, Sally K; Zhou, Xinhua; Spertus, John A; Ellerbeck, Edward F; Mahnken, Jonathan D

    2014-01-01

    In a population of chronic dialysis patients with an extensive burden of cardiovascular disease, estimation of the effectiveness of cardioprotective medication in the literature is based on calculation of a hazard ratio comparing the hazard of mortality for two groups (with or without drug exposure) measured at a single point in time or through the cumulative metric of proportion of days covered (PDC) on medication. Though both approaches can be modeled in a time-dependent manner using a Cox regression model, we propose a more complete time-dependent metric for evaluating cardioprotective medication efficacy. We consider that drug effectiveness is potentially the result of interactions between three time-dependent covariate measures: current drug usage status (ON versus OFF), proportion of cumulative exposure to drug at a given point in time, and the patient's switching behavior between taking and not taking the medication. We show that modeling all three of these time-dependent measures illustrates more clearly how varying patterns of drug exposure affect drug effectiveness, which could remain obscured when modeled by the more standard single time-dependent covariate approaches. We propose that understanding the nature and directionality of these interactions will help the biopharmaceutical industry in better estimating drug efficacy.

  15. 2D first break tomographic processing of data measured for celebration profiles: CEL01, CEL04, CEL05, CEL06, CEL09, CEL11

    NASA Astrophysics Data System (ADS)

    Bielik, M.; Vozar, J.; Hegedus, E.; Celebration Working Group

    2003-04-01

    This contribution reports preliminary results of the first-arrival P-wave seismic tomographic processing of data measured along the profiles CEL01, CEL04, CEL05, CEL06, CEL09 and CEL11. These profiles were measured in the framework of the seismic project CELEBRATION 2000. Data acquisition and geometric parameters of the processed profiles, the principle of the tomographic processing, the particular processing steps, and program parameters are described. Characteristic data of the observation profiles (shot points, geophone points, total profile lengths, sampling, sensors, and record lengths) are given. The fast program package developed by C. Zelt was applied for the tomographic velocity inversion. This process consists of several steps. The first step is the creation of a starting velocity field, for which arrival times are modelled by the finite-difference method. The next step is the minimization of the differences between the measured and modelled arrival times until the deviation is small. The equivalency problem was reduced by including a priori information in the starting velocity field, consisting of the depth to the pre-Tertiary basement, estimates of the overlying sedimentary velocities from well logging and other seismic velocity data, etc. After checking the reciprocal times, the picks were corrected. The final result of the processing is a reliable travel-time curve set consistent with the reciprocal times. Picking of the travel-time curves and enhancement of the signal-to-noise ratio on the seismograms were carried out using the PROMAX program system. The tomographic inversion was carried out by a so-called 3D/2D procedure taking 3D wave propagation into account: a corridor along the profile, containing the outlying shot points and geophone points as well, was defined, and 3D processing was carried out within this corridor. 
The preliminary results indicate anomalous seismic zones within the crust and the uppermost part of the upper mantle in the area comprising the Western Carpathians, the North European platform, the Pannonian basin and the Bohemian Massif.
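The inversion loop described above (forward-model the arrival times, then minimize the misfit against the measured times) can be illustrated with a toy linearized problem in which straight rays replace the finite-difference forward modelling; the geometry and numbers are invented.

```python
import numpy as np

# Toy linearized traveltime tomography: t = L @ s, where L holds the
# length of each ray path in each cell (km) and s is cell slowness (s/km).
L = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])        # invented straight-ray geometry
s_true = np.array([0.25, 0.20, 0.25])  # "unknown" model (v = 4-5 km/s)
t_obs = L @ s_true                     # synthetic measured arrival times

s = np.full(3, 0.22)                   # starting velocity (slowness) field
for _ in range(5):                     # iterate: model times, reduce misfit
    residual = t_obs - L @ s
    ds, *_ = np.linalg.lstsq(L, residual, rcond=None)
    s = s + ds
# s now reproduces s_true (this linear toy converges in one update).
```

In the real workflow the rays bend with velocity, so L itself is recomputed each iteration, and the a priori basement information constrains the starting field to suppress equivalent solutions.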

  16. Relationships between Participants' International Prostate Symptom Score and BPH Impact Index Changes and Global Ratings of Change in a Trial of Phytotherapy for Men with Lower Urinary Tract Symptoms

    PubMed Central

    Barry, Michael J.; Cantor, Alan; Roehrborn, Claus G.

    2014-01-01

    Purpose To relate changes in AUA Symptom Index (AUASI) scores with bother measures and global ratings of change among men with lower urinary tract symptoms enrolled in a trial of saw palmetto. Materials and Methods To be eligible, men were ≥45 years old, had a peak uroflow ≥4 ml/sec, and an AUASI score ≥ 8 and ≤ 24. Participants self-administered the AUASI, IPSS quality of life item (IPSS QoL), BPH Impact Index (BII) and two global change questions at baseline and 24, 48, and 72 weeks. Results Among 357 participants, global ratings of “a little better” were associated with mean decreases in AUASI scores from 2.8 to 4.1 points, across three time points. The analogous range for mean decreases in BII scores was 1.0 to 1.7 points, and for the IPSS QoL item 0.5 to 0.8 points. At 72 weeks, for the first global change question, each change measure could discriminate between participants rating themselves at least a little better versus unchanged or worse 70-72% of the time. A multivariable model increased discrimination to 77%. For the second global change question, each change measure correctly discriminated ratings of at least a little better versus unchanged or worse 69-74% of the time, and a multivariable model increased discrimination to 79%. Conclusions Changes in AUASI scores could discriminate between participants rating themselves at least a little better versus unchanged or worse. Our findings support the practice of powering studies to detect group mean differences in AUASI scores of at least 3 points. PMID:23017510

  17. A New Satellite System for Measuring BRDF from Space

    NASA Technical Reports Server (NTRS)

    Wiscombe, W.; Kaufman, Y.; Herman, J.

    1999-01-01

    Formation flying of satellites is at the beginning of an explosive growth curve. Spacecraft buses are shrinking to the point where we will soon be able to launch 10 micro-satellites or 100 nano-satellites on a single launch vehicle. Simultaneously, spectrometers are just beginning to be flown in space by both the U.S. and Europe. On-board programmable band aggregation will soon allow exactly the spectral bands desired to be returned to Earth. Further efforts are being devoted to radically shrinking spectrometers in both size and weight. And GPS positioning and attitude determination, plus new technologies for attitude control, will allow fleets of satellites to all point at the same Earth target. All these advances, in combination, make possible for the first time the proper measurement of the Bidirectional Reflectance Distribution Function (BRDF) from space. Previously, space BRDFs were mere composites, built up over time by viewing different types of scenes at different times, then creating catalogs of BRDF functions whose use relied upon correct "scene identification" -- the weak link. Formation-flying micro-satellites, carrying programmable spectrometers and precision-pointing at the same Earth target, can measure the full BRDF simultaneously, in real time. This talk will review these technological advances and discuss an actual proposed concept, based on these advances, to measure Earth-target BRDFs (clouds as well as surface) across the full solar spectrum in the 2010 timeframe. This concept is part of a larger concept called Leonardo for properly measuring the radiative forcing of Earth for climate purposes; lack of knowledge of the BRDF and of the diurnal cycle are at present the two limiting factors preventing improved estimates of this forcing.

  18. A similarity hypothesis for the two-point correlation tensor in a temporally evolving plane wake

    NASA Technical Reports Server (NTRS)

    Ewing, D. W.; George, W. K.; Moser, R. D.; Rogers, M. M.

    1995-01-01

    The analysis demonstrated that the governing equations for the two-point velocity correlation tensor in the temporally evolving wake admit similarity solutions, which include the similarity solutions for the single-point moments as a special case. The resulting equations for the similarity solutions include two constants, beta and Re(sub sigma), that are ratios of three characteristic time scales of processes in the flow: a viscous time scale, a time scale characteristic of the spread rate of the flow, and a characteristic time scale of the mean strain rate. The values of these ratios depend on the initial conditions of the flow and are most likely measures of the coherent structures in the initial conditions. The occurrence of these constants in the governing equations for the similarity solutions indicates that these solutions, in general, will only be the same for two flows if the two constants are equal (and hence the coherent structures in the flows are related). The comparisons between the predictions of the similarity hypothesis and the data presented here and elsewhere indicate that the similarity solutions for the two-point correlation tensors provide a good approximation of the measures of those motions that are not significantly affected by the boundary conditions caused by the finite extent of real flows. Thus, the two-point similarity hypothesis provides a useful tool for both numerical and physical experimentalists that can be used to examine how the finite extent of real flows affects the evolution of the different scales of motion in the flow.

  19. Measurement of fat fraction in the human thymus by localized NMR and three-point Dixon MRI techniques.

    PubMed

    Fishbein, Kenneth W; Makrogiannis, Sokratis K; Lukas, Vanessa A; Okine, Marilyn; Ramachandran, Ramona; Ferrucci, Luigi; Egan, Josephine M; Chia, Chee W; Spencer, Richard G

    2018-07-01

    To develop a protocol to non-invasively measure and map fat fraction, fat/(fat+water), as a function of age in the adult thymus for future studies monitoring the effects of interventions aimed at promoting thymic rejuvenation and preservation of immunity in older adults. Three-dimensional spoiled gradient echo 3T MRI with 3-point Dixon fat-water separation was performed at full inspiration for thymus conspicuity in 36 volunteers 19 to 56 years old. Reproducible breath-holding was facilitated by real-time pressure recording external to the console. The MRI method was validated against localized spectroscopy in vivo, with ECG triggering to compensate for stretching during the cardiac cycle. Fat fractions were corrected for T1 and T2 bias using relaxation times measured with inversion recovery-prepared PRESS with incremented echo time. In the thymus at 3 T, T1,water = 978 ± 75 ms, T1,fat = 323 ± 37 ms, T2,water = 43.4 ± 9.7 ms and T2,fat = 52.1 ± 7.6 ms were measured. Mean T1-corrected MRI fat fractions varied from 0.2 to 0.8 and were positively correlated with age, weight and body mass index (BMI). In subjects with matching MRI and MRS fat fraction measurements, the difference between these measurements exhibited a mean of -0.008 with a 95% confidence interval of (-0.138, 0.123). 3-point Dixon MRI of the thymus with T1 bias correction produces quantitative fat fraction maps that correlate with T2-corrected MRS measurements and show age trends consistent with thymic involution. Published by Elsevier Inc.
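The fat-water separation behind such fat-fraction maps can be sketched for the classic three-echo Dixon scheme, in which the fat signal accrues phase offsets of 0, π, and 2π relative to water. This is a textbook-style sketch on a synthetic voxel; phase unwrapping and the T1/T2 bias corrections described above are omitted.

```python
import numpy as np

def dixon3(s0, s1, s2):
    """Classic 3-point Dixon: s0, s1, s2 are complex signals with fat-water
    phase offsets 0, pi, 2*pi. Returns (water, fat, fat fraction)."""
    phi = np.angle(s2 * np.conj(s0)) / 2.0                 # B0 phase per echo step
    s1c = (s1 * np.exp(-1j * (np.angle(s0) + phi))).real   # = water - fat
    mag = np.abs(s0)                                       # = water + fat
    water, fat = (mag + s1c) / 2.0, (mag - s1c) / 2.0
    return water, fat, fat / (water + fat)

# Synthetic voxel: water 0.3, fat 0.7, initial phase 0.1 rad, B0 phase 0.4 rad.
W, F, p0, b0 = 0.3, 0.7, 0.1, 0.4
s0 = (W + F) * np.exp(1j * p0)
s1 = (W - F) * np.exp(1j * (p0 + b0))
s2 = (W + F) * np.exp(1j * (p0 + 2 * b0))
w, f, ff = dixon3(s0, s1, s2)   # recovers water 0.3, fat 0.7, fat fraction 0.7
```

On real images the half-angle field-map estimate wraps for large B0 offsets, which is why practical implementations add spatial phase unwrapping before the water-fat split.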

  20. Mild cognitive impairment: baseline and longitudinal structural MR imaging measures improve predictive prognosis.

    PubMed

    McEvoy, Linda K; Holland, Dominic; Hagler, Donald J; Fennema-Notestine, Christine; Brewer, James B; Dale, Anders M

    2011-06-01

    To assess whether single-time-point and longitudinal volumetric magnetic resonance (MR) imaging measures provide predictive prognostic information in patients with amnestic mild cognitive impairment (MCI). This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Written informed consent was obtained from all participants or the participants' legal guardians. Cross-validated discriminant analyses of MR imaging measures were performed to differentiate 164 Alzheimer disease (AD) cases from 203 healthy control cases. Separate analyses were performed by using data from MR images obtained at one time point or by combining single-time-point measures with 1-year change measures. Resulting discriminant functions were applied to 317 MCI cases to derive individual patient risk scores. Risk of conversion to AD was estimated as a continuous function of risk score percentile. Kaplan-Meier survival curves were computed for risk score quartiles. Odds ratios (ORs) for conversion to AD were computed between the highest and lowest quartile scores. Individualized risk estimates from baseline MR examinations indicated that the 1-year risk of conversion to AD ranged from 3% to 40% (average group risk, 17%; OR, 7.2 for highest vs lowest score quartiles). Including measures of 1-year change in global and regional volumes significantly improved risk estimates (P = .001), with the risk of conversion to AD in the subsequent year ranging from 3% to 69% (average group risk, 27%; OR, 12.0 for highest vs lowest score quartiles). Relative to the risk of conversion to AD conferred by the clinical diagnosis of MCI alone, MR imaging measures yield substantially more informative patient-specific risk estimates. Such predictive prognostic information will be critical if disease-modifying therapies become available. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101975/-/DC1. RSNA, 2011

  1. 40 CFR 1054.145 - Are there interim provisions that apply only for a limited time?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... scheduled emission-related maintenance falls within 10 hours of a test point, delay the maintenance until the engine reaches the test point. Measure emissions before and after performing the maintenance. Use... example, for the fuel line permeation standards starting in 2012, equipment manufacturers may order a...

  2. A repeated-measures study of recreational water exposure, non-point source pollution, and risk of illness

    EPA Science Inventory

    Discharge of stormwater runoff onto beaches is a major cause of beach closings and advisories in the United States. Prospective studies of recreational water quality and health have often been limited to two time points (baseline and follow-up). Little is known about the risk of ...

  3. Estimation of point source fugitive emission rates from a single sensor time series: a conditionally-sampled Gaussian plume reconstruction

    EPA Science Inventory

    This paper presents a technique for determining the trace gas emission rate from a point source. The technique was tested using data from controlled methane release experiments and from measurement downwind of a natural gas production facility in Wyoming. Concentration measuremen...
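
    The abstract describes estimating a point-source emission rate from downwind concentration data. Because the concentration predicted by a Gaussian plume model is linear in the emission rate Q, a least-squares estimate of Q is direct; the sketch below demonstrates this on synthetic data with simplified, assumed dispersion coefficients. It is a minimal sketch of the general idea, not the paper's conditionally-sampled plume reconstruction.

```python
import numpy as np

def plume_shape(x, y, z, u, H, a=0.08, b=0.06):
    """Gaussian plume concentration per unit emission rate.

    sigma_y = a*x and sigma_z = b*x are simplified, assumed dispersion
    coefficients; u is wind speed, H the release height.
    """
    sy, sz = a * x, b * x
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sz**2))
                + np.exp(-(z + H)**2 / (2 * sz**2)))  # ground reflection
    return lateral * vertical / (2 * np.pi * u * sy * sz)

# Synthetic crosswind transect of concentration measurements, 100 m downwind
rng = np.random.default_rng(0)
Q_true = 2.5                              # emission rate, g/s
x = np.full(50, 100.0)
y = rng.uniform(-30.0, 30.0, 50)
f = plume_shape(x, y, z=2.0, u=3.0, H=2.0)
c_meas = Q_true * f + rng.normal(0, 1e-6, 50)

# Concentration is linear in Q, so the least-squares estimate is one line
Q_hat = np.dot(c_meas, f) / np.dot(f, f)
```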

  4. Vapor Pressure of GB

    DTIC Science & Technology

    2009-04-01

    equation. The Podoll and Parish low temperature measured vapor pressure data (-35 and -25 °C) were included in our analysis. Penski summarized the...existing literature data for GB in his 1994 data review and analysis.6 He did not include the 0 °C Podoll and Parish measured vapor pressure data point...35.9 Pa) in his analysis because the error associated with this point was "2 to 10 times greater than the other values". He did not include the -10 °C

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hongfen, E-mail: wanghongfen11@163.com; Wang, Zhiqi; Chen, Shougang

    Molybdenum carbides with surfactants as carbon sources were prepared using the carbothermal reduction of the appropriate precursors (molybdenum oxides deposited on surfactant micelles) at 1023 K under hydrogen gas. The carburized products were characterized using scanning electron microscopy (SEM), X-ray diffraction and BET surface area measurements. From the SEM images, hollow microspherical and rod-like molybdenum carbides were observed. X-ray diffraction patterns showed that the annealing time of carburization had a large effect on the conversion of molybdenum oxides to molybdenum carbides. BET surface area measurements indicated that the choice of carbon source produced a large difference in the specific surface areas of the molybdenum carbides. - Graphical abstract: Molybdenum carbides having hollow microspherical and hollow rod-like morphologies that are different from the conventional monodispersed platelet-like morphologies. Highlights: • Molybdenum carbides were prepared using surfactants as carbon sources. • The kinds of surfactants affected the morphologies of molybdenum carbides. • The time of heat preservation at 1023 K affected the carburization process. • Molybdenum carbides with hollow structures had larger specific surface areas.

  6. Monitoring small pioneer trees in the forest-tundra ecotone: using multi-temporal airborne laser scanning data to model height growth.

    PubMed

    Hauglin, Marius; Bollandsås, Ole Martin; Gobakken, Terje; Næsset, Erik

    2017-12-08

    Monitoring of forest resources through national forest inventory programmes is carried out in many countries. The expected climate changes will affect trees and forests and might cause an expansion of trees into presently treeless areas, such as above the current alpine tree line. There is therefore a need to develop methods that enable these areas, too, to be included in monitoring programmes. Airborne laser scanning (ALS) is an established tool in operational forest inventories, and could be a viable option for monitoring tasks. In the present study, we used multi-temporal ALS data with point densities of 8-15 points per m², together with field measurements from single trees in the forest-tundra ecotone along a 1500-km-long transect in Norway. The material comprised 262 small trees with an average height of 1.78 m. The field-measured height growth was derived from height measurements at two points in time. The elapsed time between the two measurements was 4 years. Regression models were then used to model the relationship between ALS-derived variables and tree heights as well as the height growth. Strong relationships between ALS-derived variables and tree heights were found, with R² values of 0.93 and 0.97 for the two points in time. The relationship between the ALS data and the field-derived height growth was weaker, with R² values of 0.36-0.42. A cross-validation gave corresponding results, with root mean square errors of 19 and 11% for the ALS height models and 60% for the model relating ALS data to single-tree height growth.
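
    The study above fits regression models between ALS-derived variables and field-measured heights, reporting R² and cross-validated RMSE. A minimal sketch of that workflow, using invented single-predictor data in place of the real ALS metrics, might look like:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented data: one ALS-derived predictor per tree vs. field-measured
# height; the real study uses several ALS variables per tree.
n = 262
als_hmax = rng.uniform(0.5, 4.0, n)      # ALS maximum return height (m)
field_h = 0.9 * als_hmax + 0.2 + rng.normal(0, 0.1, n)

# Ordinary least squares: field_h ~ a * als_hmax + b
A = np.column_stack([als_hmax, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, field_h, rcond=None)
resid = field_h - A @ coef
r2 = 1 - np.sum(resid**2) / np.sum((field_h - field_h.mean())**2)

# Leave-one-out cross-validation, RMSE reported as a percent of the mean
preds = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    c, *_ = np.linalg.lstsq(A[keep], field_h[keep], rcond=None)
    preds[i] = A[i] @ c
rmse_pct = 100 * np.sqrt(np.mean((preds - field_h)**2)) / field_h.mean()
```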

  7. A novel method of measuring the melting point of animal fats.

    PubMed

    Lloyd, S S; Dawkins, S T; Dawkins, R L

    2014-10-01

    The melting point (TM) of fat is relevant to health, but available methods of determining TM are cumbersome. One of the standard methods of measuring TM for animal and vegetable fats is the slip point, also known as the open capillary method. This method is imprecise and not amenable to automation or mass testing. We have developed a technique for measuring TM of animal fat using the Rotor-Gene Q (Qiagen, Hilden, Germany). The assay has an intra-assay SD of 0.08°C. A single operator can extract and assay up to 250 samples of animal fat in 24 h, including the time to extract the fat from the adipose tissue. This technique will improve the quality of research into genetic and environmental contributions to fat composition of meat.

  8. A Framework and Algorithms for Multivariate Time Series Analytics (MTSA): Learning, Monitoring, and Recommendation

    ERIC Educational Resources Information Center

    Ngan, Chun-Kit

    2013-01-01

    Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…

  9. A method for analyzing temporal patterns of variability of a time series from Poincare plots.

    PubMed

    Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E

    2012-07-01

    The Poincaré plot is a popular two-dimensional, time series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variabilities. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding a richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis and can be applied more generally, including Poincaré plots with multiple clusters, and more consistently than the conventional measures and can address questions regarding potential structure underlying the variability of a data set.
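
    For readers unfamiliar with the construction, a Poincaré plot pairs each sample with a delayed copy of itself, and the standard descriptors SD1/SD2 measure dispersion across and along the identity line. The sketch below builds these for a synthetic R-R interval series at several delays, echoing the multi-delay idea; the TPV methodology itself involves more than these conventional summary statistics.

```python
import numpy as np

def poincare_descriptors(x, tau=1):
    """SD1/SD2 of the lagged scatter (x[n], x[n+tau]); tau=1 is the
    conventional return map."""
    a, b = x[:-tau], x[tau:]
    sd1 = np.std((b - a) / np.sqrt(2))   # dispersion across the identity line
    sd2 = np.std((b + a) / np.sqrt(2))   # dispersion along the identity line
    return sd1, sd2

# Synthetic R-R interval series (ms): slow oscillation plus beat-to-beat noise
rng = np.random.default_rng(1)
t = np.arange(1000)
rr = 800 + 50 * np.sin(2 * np.pi * t / 200) + rng.normal(0, 10, 1000)

# Descriptors across several delays, echoing the multiple-time-delay idea
tpv = {tau: poincare_descriptors(rr, tau) for tau in (1, 5, 20)}
```

    At larger delays the series decorrelates, so SD1 grows relative to its tau=1 value; comparing descriptors across delays is what gives the richer view of system dynamics the abstract refers to.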

  10. Development of a Data Reduction Algorithm for Optical Wide Field Patrol (OWL) II: Improving Measurement of Lengths of Detected Streaks

    NASA Astrophysics Data System (ADS)

    Park, Sun-Youp; Choi, Jin; Roh, Dong-Goo; Park, Maru; Jo, Jung Hyun; Yim, Hong-Suh; Park, Young-Sik; Bae, Young-Ho; Park, Jang-Hyun; Moon, Hong-Kyu; Choi, Young-Jun; Cho, Sungki; Choi, Eun-Jung

    2016-09-01

    As described in the previous paper (Park et al. 2013), the detector subsystem of optical wide-field patrol (OWL) provides many observational data points of a single artificial satellite or space debris in the form of small streaks, using a chopper system and a time tagger. The position and the corresponding time data are matched assuming that the length of a streak on the CCD frame is proportional to the time duration of the exposure during which the chopper blades do not obscure the CCD window. In the previous study, however, the length was measured using the diagonal of the rectangle of the image area containing the streak; the results were quite ambiguous and inaccurate, allowing possible matching error of positions and time data. Furthermore, because only one (position, time) data point is created from one streak, the efficiency of the observation decreases. To define the length of a streak correctly, it is important to locate the endpoints of a streak. In this paper, a method using a differential convolution mask pattern is tested. This method can be used to obtain the positions where the pixel values are changed sharply. These endpoints can be regarded as directly detected positional data, and the number of data points is doubled by this result.
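
    The endpoint-location idea can be illustrated in one dimension: a differential convolution mask responds most strongly where pixel values change sharply, i.e. at the two ends of a streak. The example below uses a synthetic intensity profile; the actual OWL processing operates on 2-D CCD frames, so this is only a sketch of the principle.

```python
import numpy as np

# Synthetic 1-D intensity profile along a streak: background plus a
# bright plateau between pixels 40 and 120 standing in for the streak
rng = np.random.default_rng(2)
profile = np.zeros(200)
profile[40:120] = 1.0
profile += rng.normal(0, 0.02, 200)

# Differential convolution mask: response[n] ~ profile[n+1] - profile[n-1],
# large positive at the rising edge and large negative at the falling edge
kernel = np.array([1.0, 0.0, -1.0])
response = np.convolve(profile, kernel, mode="same")

start = int(np.argmax(response))    # rising edge of the streak
end = int(np.argmin(response))      # falling edge of the streak
length = end - start                # streak length in pixels
```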

  11. The experience of traumatic events disrupts the measurement invariance of a posttraumatic stress scale.

    PubMed

    Lommen, Miriam J J; van de Schoot, Rens; Engelhard, Iris M

    2014-01-01

    Studies that include multiple assessments of a particular instrument within the same population are based on the presumption that this instrument measures the same construct over time. But what if the meaning of the construct changes over time due to one's experiences? For example, the experience of a traumatic event can influence one's view of the world, others, and self, and may disrupt the stability of a questionnaire measuring posttraumatic stress symptoms (i.e., it may affect the interpretation of items). Nevertheless, assessments before and after such a traumatic event are crucial to study longitudinal development of posttraumatic stress symptoms. In this study, we examined measurement invariance of posttraumatic stress symptoms in a sample of Dutch soldiers before and after they went on deployment to Afghanistan (N = 249). Results showed that the underlying measurement model before deployment was different from the measurement model after deployment due to noninvariant item thresholds. These results were replicated in a sample of soldiers deployed to Iraq (N = 305). Since the lack of measurement invariance was due to instability of the majority of the items, it seems reasonable to conclude that the underlying construct of PSS is unstable over time if war-zone related traumatic events occur in between measurements. From a statistical point of view, the scores over time cannot be compared when there is a lack of measurement invariance. The main message of this paper is that researchers working with posttraumatic stress questionnaires in longitudinal studies should not take measurement invariance for granted, but should use pre- and post-symptom scores as different constructs for each time point in the analysis.

  12. New Observations of Subarcsecond Photospheric Bright Points

    NASA Technical Reports Server (NTRS)

    Berger, T. E.; Schrijver, C. J.; Shine, R. A.; Tarbell, T. D.; Title, A. M.; Scharmer, G.

    1995-01-01

    We have used an interference filter centered at 4305 A within the bandhead of the CH radical (the 'G band') and real-time image selection at the Swedish Vacuum Solar Telescope on La Palma to produce very high contrast images of subarcsecond photospheric bright points at all locations on the solar disk. During the 6 day period of 15-20 Sept. 1993 we observed active region NOAA 7581 from its appearance on the East limb to a near-disk-center position on 20 Sept. A total of 1804 bright points were selected for analysis from the disk center image using feature extraction image processing techniques. The measured FWHM distribution of the bright points in the image is lognormal with a modal value of 220 km (0.30 sec) and an average value of 250 km (0.35 sec). The smallest measured bright point diameter is 120 km (0.17 sec) and the largest is 600 km (0.69 sec). Approximately 60% of the measured bright points are circular (eccentricity approx. 1.0), the average eccentricity is 1.5, and the maximum eccentricity corresponding to filigree in the image is 6.5. The peak contrast of the measured bright points is normally distributed. The contrast distribution variance is much greater than the measurement accuracy, indicating a large spread in intrinsic bright-point contrast. When referenced to an averaged 'quiet-Sun' area in the image, the modal contrast is 29% and the maximum value is 75%; when referenced to an average intergranular lane brightness in the image, the distribution has a modal value of 61% and a maximum of 119%. The bin-averaged contrast of G-band bright points is constant across the entire measured size range. The measured area of the bright points, corrected for pixelation and selection effects, covers about 1.8% of the total image area. Large pores and micropores occupy an additional 2% of the image area, implying a total area fraction of magnetic proxy features in the image of 3.8%.
We discuss the implications of this area fraction measurement in the context of previously published measurements which show that typical active region plage has a magnetic filling factor on the order of 10% or greater. The results suggest that in the active region analyzed here, less than 50% of the small-scale magnetic flux tubes are demarcated by visible proxies such as bright points or pores.

  13. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.

  14. 40 CFR 797.1330 - Daphnid chronic toxicity test.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... an extended period of time. In this test guideline, mortality and reproduction (and optionally... exposure over a specified period of time. In this guideline, the effect measured is immobilization. (4..., wet weight) to the volume (liters) of test solution in a test chamber at a point in time or passing...

  15. 40 CFR 797.1330 - Daphnid chronic toxicity test.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... an extended period of time. In this test guideline, mortality and reproduction (and optionally... exposure over a specified period of time. In this guideline, the effect measured is immobilization. (4..., wet weight) to the volume (liters) of test solution in a test chamber at a point in time or passing...

  16. 40 CFR 797.1330 - Daphnid chronic toxicity test.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... an extended period of time. In this test guideline, mortality and reproduction (and optionally... exposure over a specified period of time. In this guideline, the effect measured is immobilization. (4..., wet weight) to the volume (liters) of test solution in a test chamber at a point in time or passing...

  17. 40 CFR 797.1330 - Daphnid chronic toxicity test.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... an extended period of time. In this test guideline, mortality and reproduction (and optionally... exposure over a specified period of time. In this guideline, the effect measured is immobilization. (4..., wet weight) to the volume (liters) of test solution in a test chamber at a point in time or passing...

  18. 40 CFR 797.1330 - Daphnid chronic toxicity test.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... an extended period of time. In this test guideline, mortality and reproduction (and optionally... exposure over a specified period of time. In this guideline, the effect measured is immobilization. (4..., wet weight) to the volume (liters) of test solution in a test chamber at a point in time or passing...

  19. Deconvolution of mixing time series on a graph

    PubMed Central

    Blocker, Alexander W.; Airoldi, Edoardo M.

    2013-01-01

    In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, yt = Axt, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
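
    A minimal numerical illustration of the inverse problem yt = Axt: with a known routing matrix A, each time step can be solved by ridge-regularized least squares. The matrix, flows, and regularization weight below are toy assumptions; this shows only the regularization idea in miniature, not the authors' multilevel state-space model.

```python
import numpy as np

# Toy routing matrix: 3 observed link loads (rows) mixing 4 latent
# point-to-point flows (columns); both are invented for illustration
A = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [1., 0., 1., 1.]])

rng = np.random.default_rng(3)
T = 30
x_true = rng.exponential(1.0, (T, 4)) * (rng.random((T, 4)) < 0.3)  # bursty, sparse
y = x_true @ A.T + rng.normal(0, 0.01, (T, 3))       # aggregate measurements

# Ridge-regularized solve of the ill-posed system y_t = A x_t for each t;
# the single weight lam stands in for the paper's calibrated regularization
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ y.T).T
```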

  20. Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2013-01-01

    Empirical mode decomposition is an adaptive signal processing method that, when applied to a broadband signal such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.
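
    A second-order two-point space-time correlation of the kind computed here can be sketched on synthetic records: two velocity signals separated by a convection delay produce a correlation peak at that delay, from which a convective (phase) velocity would follow given the point separation. The signals and the delay below are invented, and no empirical mode decomposition is applied.

```python
import numpy as np

# Two synthetic velocity records: the downstream point sees the upstream
# signal after a convection delay of 12 samples (frozen-turbulence toy)
rng = np.random.default_rng(7)
n = 4096
u_a = np.convolve(rng.normal(0, 1, n), np.ones(8) / 8, mode="same")
delay = 12
u_b = np.roll(u_a, delay) + rng.normal(0, 0.05, n)

# Two-point space-time correlation as a function of time lag
lags = np.arange(-40, 41)
corr = []
for k in lags:
    if k >= 0:
        corr.append(np.mean(u_a[:n - k] * u_b[k:]))
    else:
        corr.append(np.mean(u_b[:n + k] * u_a[-k:]))
corr = np.array(corr)

# The lag of the correlation peak recovers the convection delay
peak_lag = int(lags[np.argmax(corr)])
```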

  1. Turbulent Statistics from Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2012-01-01

    Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.

  2. The real time multi point Thomson scattering diagnostic at NSTX-U

    NASA Astrophysics Data System (ADS)

    Laggner, Florian; Kolemen, Egemen; Diallo, Ahmed; Leblanc, Benoit; Rozenblat, Roman; Tchilinguirian, Greg; NSTX-U Team

    2017-10-01

    This contribution presents the upgrade of the multi point Thomson scattering (MPTS) diagnostic for real time application. As a key diagnostic at NSTX-U, the MPTS diagnostic simultaneously measures the electron density (ne) and electron temperature (Te) profiles of a plasma discharge. Therefore, this powerful diagnostic can directly access the electron pressure of the plasma. Currently, only post-discharge evaluation of the data is available; however, since the plasma pressure is an important drive for instabilities, real time measurements of these quantities would be beneficial for plasma control. In a first step, ten MPTS channels were equipped with real time electronics, which improve the data acquisition rate by five orders of magnitude. The commissioning of the system is ongoing and first benchmarks of the real time evaluation routines against the standard, post-discharge evaluation show promising results: The Te as well as ne profiles of both types of analyses agree within their uncertainties. This work was supported by the US Department of Energy under DE-SC0015878 and DE-SC0015480.

  3. The relevance of time series in molecular ecology and conservation biology.

    PubMed

    Habel, Jan C; Husemann, Martin; Finger, Aline; Danley, Patrick D; Zachos, Frank E

    2014-05-01

    The genetic structure of a species is shaped by the interaction of contemporary and historical factors. Analyses of individuals from the same population sampled at different points in time can help to disentangle the effects of current and historical forces and facilitate the understanding of the forces driving the differentiation of populations. The use of such time series allows for the exploration of changes at the population and intraspecific levels over time. Material from museum collections plays a key role in understanding and evaluating observed population structures, especially if large numbers of individuals have been sampled from the same locations at multiple time points. In these cases, changes in population structure can be assessed empirically. The development of new molecular markers relying on short DNA fragments (such as microsatellites or single nucleotide polymorphisms) allows for the analysis of long-preserved and partially degraded samples. Recently developed techniques to construct genome libraries with a reduced complexity and next generation sequencing and their associated analysis pipelines have the potential to facilitate marker development and genotyping in non-model species. In this review, we discuss the problems with sampling and available marker systems for historical specimens and demonstrate that temporal comparative studies are crucial for estimating important population genetic parameters and for measuring empirically the effects of recent habitat alteration. While many of these analyses can be performed with samples taken at a single point in time, the measurements are more robust if multiple points in time are studied. Furthermore, examining the effects of habitat alteration, population declines, and population bottlenecks is only possible if samples before and after the respective events are included. © 2013 The Authors. Biological Reviews © 2013 Cambridge Philosophical Society.

  4. Structure of Soot-Containing Laminar Jet Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Mortazavi, S.; Sunderland, P. B.; Jurng, J.; Koylu, U. O.; Faeth, G. M.

    1993-01-01

    The structure and soot properties of nonbuoyant and weakly-buoyant round jet diffusion flames were studied, considering ethylene, propane and acetylene burning in air at pressures of 0.125-2.0 atm. Measurements of flame structure included radiative heat loss fractions, flame shape and temperature distributions in the fuel-lean (overfire) region. These measurements were used to evaluate flame structure predictions based on the conserved-scalar formalism in conjunction with the laminar flamelet concept, finding good agreement between predictions and measurements. Soot property measurements included laminar smoke points, soot volume fraction distributions using laser extinction, and soot structure using thermophoretic sampling and analysis by transmission electron microscopy. Nonbuoyant flames were found to exhibit laminar smoke points like buoyant flames but their properties are very different; in particular, nonbuoyant flames have laminar smoke point flame lengths and residence times that are shorter and longer, respectively, than buoyant flames.

  5. Assessment of Surgical Learning Curves in Transoral Robotic Surgery for Squamous Cell Carcinoma of the Oropharynx

    PubMed Central

    Albergotti, William G.; Gooding, William E.; Kubik, Mark W.; Geltzeiler, Mathew; Kim, Seungwon; Duvvuri, Umamaheswar; Ferris, Robert L.

    2017-01-01

    IMPORTANCE Transoral robotic surgery (TORS) is increasingly employed as a treatment option for squamous cell carcinoma of the oropharynx (OPSCC). Measures of surgical learning curves are needed particularly as clinical trials using this technology continue to evolve. OBJECTIVE To assess learning curves for the oncologic TORS surgeon and to identify the number of cases needed to identify the learning phase. DESIGN, SETTING, AND PARTICIPANTS A retrospective review of all patients who underwent TORS for OPSCC at the University of Pittsburgh Medical Center between March 2010 and March 2016. Cases were excluded for involvement of a subsite outside of the oropharynx, for nonmalignant abnormality or nonsquamous histology, unknown primary, no tumor in the main specimen, free flap reconstruction, and for an inability to define margin status. EXPOSURES Transoral robotic surgery for OPSCC. MAIN OUTCOMES AND MEASURES Primary learning measures defined by the authors include the initial and final margin status and time to resection of main surgical specimen. A cumulative sum learning curve was developed for each surgeon for each of the study variables. The inflection point of each surgeon’s curve was considered to be the point signaling the completion of the learning phase. RESULTS There were 382 transoral robotic procedures identified. Of 382 cases, 160 met our inclusion criteria: 68 for surgeon A, 37 for surgeon B, and 55 for surgeon C. Of the 160 included patients, 125 were men and 35 were women. The mean (SD) age of participants was 59.4 (9.5) years. Mean (SD) time to resection including robot set-up was 79 (36) minutes. The inflection points for the final margin status learning curves were 27 cases (surgeon A) and 25 cases (surgeon C). There was no inflection point for surgeon B for final margin status. Inflection points for mean time to resection were: 39 cases (surgeon A), 30 cases (surgeon B), and 27 cases (surgeon C). 
CONCLUSIONS AND RELEVANCE Using metrics of positive margin rate and time to resection of the main surgical specimen, the learning curve for TORS for OPSCC is surgeon-specific. Inflection points for most learning curves peak between 20 and 30 cases. PMID:28196200
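
    The cumulative sum (CUSUM) learning-curve construction used in this study can be sketched as follows: accumulate (observed failure − acceptable failure rate) case by case, and read the curve's turning point as the end of the learning phase. The outcome series, rates, and case counts below are illustrative assumptions, not the study's data.

```python
import numpy as np

# Illustrative outcome series: 1 = positive margin, 0 = clear margin,
# with a higher failure rate during an early "learning phase"
rng = np.random.default_rng(8)
outcomes = np.concatenate([
    (rng.random(25) < 0.30).astype(float),   # learning phase, ~30% failures
    (rng.random(35) < 0.08).astype(float),   # post-learning, ~8% failures
])

# CUSUM chart: the curve drifts upward while the failure rate exceeds
# the acceptable rate and downward afterwards; its peak marks the
# estimated end of the learning phase
target = 0.15
cusum = np.cumsum(outcomes - target)
inflection = int(np.argmax(cusum)) + 1       # case number at the peak
```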

  6. Wall shear stress fixed points in cardiovascular fluid mechanics.

    PubMed

    Arzani, Amirhossein; Shadden, Shawn C

    2018-05-17

    Complex blood flow in large arteries creates rich wall shear stress (WSS) vectorial features. WSS acts as a link between blood flow dynamics and the biology of various cardiovascular diseases. WSS has been of great interest in a wide range of studies and has been the most popular measure to correlate blood flow to cardiovascular disease. Recent studies have emphasized different vectorial features of WSS. However, fixed points in the WSS vector field have not received much attention. A WSS fixed point is a point on the vessel wall where the WSS vector vanishes. In this article, WSS fixed points are classified and the aspects by which they could influence cardiovascular disease are reviewed. First, the connection between WSS fixed points and the flow topology away from the vessel wall is discussed. Second, the potential role of time-averaged WSS fixed points in biochemical mass transport is demonstrated using the recent concept of Lagrangian WSS structures. Finally, simple measures are proposed to quantify the exposure of the endothelial cells to WSS fixed points. Examples from various arterial flow applications are demonstrated. Copyright © 2018 Elsevier Ltd. All rights reserved.
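
    On discrete CFD output, a WSS fixed point can be located as the grid point where the WSS magnitude (nearly) vanishes. The sketch below uses an analytic toy saddle field in place of real wall shear stress data, so the field, grid, and fixed-point location are all assumptions for illustration:

```python
import numpy as np

# Analytic toy WSS field on a wall patch: a saddle with its fixed point
# at the origin, (tau_x, tau_y) = (x, -y); real WSS would come from CFD
x = np.linspace(-1, 1, 201)
y = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(x, y)
tau_x, tau_y = X, -Y

# A WSS fixed point is where the shear vector vanishes; on a grid, take
# the point of minimum WSS magnitude as the discrete estimate
mag = np.hypot(tau_x, tau_y)
i, j = np.unravel_index(np.argmin(mag), mag.shape)
fixed_point = (float(X[i, j]), float(Y[i, j]))
```

    Classifying the fixed point (node, saddle, focus), as the article discusses, would additionally require the eigenvalues of the local WSS Jacobian.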

  7. Concept for an off-line gain stabilisation method.

    PubMed

    Pommé, S; Sibbens, G

    2004-01-01

    Conceptual ideas are presented for an off-line gain stabilisation method for spectrometry, in particular for alpha-particle spectrometry at low count rate. The method involves list mode storage of individual energy and time stamp data pairs. The 'Stieltjes integral' of measured spectra with respect to a reference spectrum is proposed as an indicator for gain instability. 'Exponentially moving averages' of the latter show the gain shift as a function of time. With this information, the data are relocated stochastically on a point-by-point basis.
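
    The sketch below illustrates the off-line, list-mode spirit of the method with a deliberately simplified indicator: a per-event energy ratio against a reference peak position, rather than the paper's Stieltjes-integral statistic, smoothed by an exponentially moving average and used to relocate events back to the reference scale. All data are synthetic.

```python
import numpy as np

# Synthetic list-mode data: (time stamp, energy) pairs from a detector
# whose gain drifts slowly upward during the measurement
rng = np.random.default_rng(5)
n = 5000
times = np.sort(rng.uniform(0, 1000, n))
gain = 1.0 + 0.0002 * times                  # slow multiplicative drift
energies = gain * rng.normal(5000, 30, n)    # events around a reference peak

# Simplified per-event gain indicator (assumed stand-in for the paper's
# Stieltjes-integral indicator), tracked by an exponentially moving average
ref_peak = 5000.0
indicator = energies / ref_peak
alpha = 0.01
ema = np.empty(n)
ema[0] = indicator[0]
for i in range(1, n):
    ema[i] = alpha * indicator[i] + (1 - alpha) * ema[i - 1]

# Relocate events back to the reference energy scale, point by point
corrected = energies / ema
```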

  8. Pump-Probe Spectroscopy Using the Hadamard Transform.

    PubMed

    Beddard, Godfrey S; Yorke, Briony A

    2016-08-01

    A new method of performing pump-probe experiments is proposed and experimentally demonstrated by a proof of concept on the millisecond scale. The idea behind this method is to measure the total probe intensity arising from several time points as a group, instead of measuring each time separately. These multiplexed measurements are then transformed into the true signal via multiplication with a binary Hadamard S matrix. Each group of probe pulses is determined by using the pattern of a row of the Hadamard S matrix and the experiment is completed by rotating this pattern by one step for each sample excitation until the original pattern is again produced. Thus to measure n time points, n excitation events are needed and n probe patterns each taken from the n × n S matrix. The time resolution is determined by the shortest time between the probe pulses. In principle, this method could be used over all timescales, instead of the conventional pump-probe method which uses delay lines for picosecond and faster time resolution, or fast detectors and oscilloscopes on longer timescales. This new method is particularly suitable for situations where the probe intensity is weak and/or the detector is noisy. When the detector is noisy, there is in principle a signal to noise advantage over conventional pump-probe methods. © The Author(s) 2016.
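
    The S-matrix bookkeeping is easy to make concrete. Below, an S-matrix is built from a Sylvester Hadamard matrix (so each of the n rows selects (n+1)/2 of the n time points), multiplexed "measurements" are simulated as row-masked sums of a demo signal, and the true signal is recovered by solving the linear system. The signal values and noise level are invented; the paper's cyclic-shift acquisition corresponds to using cyclic S-matrix rows, but the recovery step is the same.

```python
import numpy as np

def s_matrix(n):
    """0/1 S-matrix of order n (n + 1 a power of two here): build a
    Sylvester Hadamard matrix, drop its first row and column, then map
    -1 -> 1 and +1 -> 0, so each row selects (n + 1) / 2 time points."""
    H = np.array([[1]])
    while H.shape[0] < n + 1:
        H = np.block([[H, H], [H, -H]])
    return (H[1:, 1:] == -1).astype(float)

n = 7
S = s_matrix(n)

# Demo "true" pump-probe signal at n time points (arbitrary values)
signal = np.array([0.1, 0.9, 0.4, 0.0, 0.7, 0.2, 0.5])

# Each excitation measures the summed probe intensity of the time points
# selected by one row of S, plus a little detector noise
rng = np.random.default_rng(4)
multiplexed = S @ signal + rng.normal(0, 0.001, n)

# Demultiplex by solving the linear system S x = y
recovered = np.linalg.solve(S, multiplexed)
```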

  9. Post-mortem chemical excitability of the iris should not be used for forensic death time diagnosis.

    PubMed

    Koehler, Katja; Sehner, Susanne; Riemer, Martin; Gehl, Axel; Raupach, Tobias; Anders, Sven

    2018-04-18

    Post-mortem chemical excitability of the iris is one of the non-temperature-based methods in forensic diagnosis of the time since death. Although several authors reported on their findings, using different measurement methods, currently used time limits are based on a single dissertation which has recently been doubted to be applicable for forensic purpose. We investigated changes in pupil-iris ratio after application of acetylcholine (n = 79) or tropicamide (n = 58) and in controls at upper and lower time limits that are suggested in the current literature, using a digital photography-based measurement method with excellent reliability. We observed "positive," "negative," and "paradox" reactions in both intervention and control conditions at all investigated post-mortem time points, suggesting spontaneous changes in pupil size to be causative for the finding. According to our observations, post-mortem chemical excitability of the iris should not be used in forensic death time estimation, as results may cause false conclusions regarding the correct time point of death and might therefore be strongly misleading.

  10. The Point of No Return

    PubMed Central

    Logan, Gordon D.

    2015-01-01

    Bartlett (1958) described the point of no return as a point of irrevocable commitment to action, which was preceded by a period of gradually increasing commitment. As such, the point of no return reflects a fundamental limit on the ability to control thought and action. I review the literature on the point of no return, taking three perspectives. First, I consider the point of no return from the perspective of the controlled act, as a locus in the architecture and anatomy of the underlying processes. I review experiments from the stop-signal paradigm that suggest that the point of no return is located late in the response system. Then I consider the point of no return from the perspective of the act of control that tries to change the controlled act before it becomes irrevocable. From this perspective, the point of no return is a point in time that provides enough “lead time” for the act of control to take effect. I review experiments that measure the response time to the stop signal as the lead time required for response inhibition in the stop-signal paradigm. Finally, I consider the point of no return in hierarchically controlled tasks, in which there may be many points of no return at different levels of the hierarchy. I review experiments on skilled typing that suggest different points of no return for the commands that determine what is typed and the countermands that inhibit typing, with increasing commitment to action the lower the level in the hierarchy. I end by considering the point of no return in perception and thought as well as action. PMID:25633089

  11. On the design of a radix-10 online floating-point multiplier

    NASA Astrophysics Data System (ADS)

    McIlhenny, Robert D.; Ercegovac, Milos D.

    2009-08-01

    This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up-tables) as well as the cycle time and total latency. The routing delay which was not optimized is the major component in the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.

  12. Moving Force Identification: a Time Domain Method

    NASA Astrophysics Data System (ADS)

    Law, S. S.; Chan, T. H. T.; Zeng, Q. H.

    1997-03-01

    The solution for the vertical dynamic interaction forces between a moving vehicle and the bridge deck is analytically derived and experimentally verified. The deck is modelled as a simply supported beam with viscous damping, and the vehicle/bridge interaction force is modelled as one-point or two-point loads with fixed axle spacing, moving at constant speed. The method is based on modal superposition and is developed to identify the forces in the time domain. Both cases of one-point and two-point forces moving on a simply supported beam are simulated. Results of laboratory tests on the identification of the vehicle/bridge interaction forces are presented. Computer simulations and laboratory tests show that the method is effective, and acceptable results can be obtained by combining the use of bending moment and acceleration measurements.

  13. On-line noninvasive one-point measurements of pulse wave velocity.

    PubMed

    Harada, Akimitsu; Okada, Takashi; Niki, Kiyomi; Chang, Dehua; Sugawara, Motoaki

    2002-12-01

    Pulse wave velocity (PWV) is a basic parameter in the dynamics of pressure and flow waves traveling in arteries. Conventional on-line methods of measuring PWV have mainly been based on "two-point" measurements, i.e., measurements of the time of travel of the wave over a known distance. This paper describes two methods by which on-line "one-point" measurements can be made, and compares the results obtained by the two methods. The principle of one method is to measure blood pressure and velocity at a point, and use the water-hammer equation for forward traveling waves. The principle of the other method is to derive PWV from the stiffness parameter of the artery. Both methods were realized by using an ultrasonic system which we specially developed for noninvasive measurements of wave intensity. We applied the methods to the common carotid artery in 13 normal humans. The regression line of the PWV (m/s) obtained by the former method on the PWV (m/s) obtained by the latter method was y = 1.03x - 0.899 (R(2) = 0.83). Although regional PWV in the human carotid artery has not been reported so far, the correlation between the PWVs obtained by the present two methods was so high that we are convinced of the validity of these methods.
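A minimal sketch of the two one-point estimates, with hypothetical numbers throughout. The first route uses the water-hammer relation for forward-traveling waves, dP = ρ·c·dU, so the slope of the early-systolic pressure-velocity loop divided by blood density gives c; the second uses the form commonly associated with the stiffness-parameter approach, PWV = sqrt(β·Pd / 2ρ), which should be checked against the authors' exact definitions.

```python
import numpy as np

rho = 1050.0    # blood density, kg/m^3 (assumed)
c_true = 6.0    # true pulse wave velocity used to synthesize the data, m/s

# Synthetic reflection-free early-systolic data: water-hammer makes P linear
# in U with slope rho * c.
U = np.linspace(0.0, 0.5, 50)         # blood velocity, m/s
P = 10000.0 + rho * c_true * U        # pressure, Pa

slope = np.polyfit(U, P, 1)[0]        # dP/dU over the linear phase
c_water_hammer = slope / rho
print(c_water_hammer)                 # recovers ~6.0 m/s

# Stiffness-parameter route (all vessel numbers hypothetical):
Ps, Pd = 16000.0, 10666.0             # systolic/diastolic pressure, Pa
Ds, Dd = 7.4e-3, 7.0e-3               # systolic/diastolic diameter, m
beta = np.log(Ps / Pd) / ((Ds - Dd) / Dd)
c_beta = np.sqrt(beta * Pd / (2.0 * rho))
print(c_beta)
```

With these self-consistent inputs the two routes agree closely, mirroring the high correlation (R² = 0.83) the authors report between the two one-point methods.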

  14. SU-F-T-08: Brachytherapy Film Dosimetry in a Water Phantom for a Ring and Tandem HDR Applicator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, B; Grelewicz, Z; Kang, Z

    2016-06-15

    Purpose: The feasibility of dose measurement using new generation EBT3 film was explored in a water phantom for a ring and tandem HDR applicator for measurements tracking mucosal dose during cervical brachytherapy. Methods: An experimental fixture was assembled to position the applicator in a water phantom. Prior to measurement, calibration curves for EBT3 film in water and in solid water were verified. EBT3 film was placed at different known locations around the applicator in the water tank. A CT scan of the phantom with applicator was performed using a clinical protocol. A typical cervical cancer treatment plan was then generated by the Oncentra brachytherapy planning system. A dose of 500 cGy was prescribed to point A (2 cm, 2 cm). Locations measured by film included the outer surface of the ring, measurement point A-m (2.2 cm, 2.2 cm), and profiles extending from point A-m parallel to the tandem. Three independent measurements were conducted. The doses recorded by film were carefully analyzed and compared with values calculated by the treatment planning system. Results: Assessment of the EBT3 films indicates that the dose at point A matches the values predicted by the planning system. Dose to the point A-m was 411.5 cGy, and the outer circumferential surface dose of the ring was between 500 and 1150 cGy. It was found that from point A-m, the dose drops 60% within 4.5 cm on the line parallel to the tandem. The measured doses agree with the treatment planning system. Conclusion: Use of EBT3 film is feasible for in-water measurements for brachytherapy. A carefully machined apparatus will likely improve measurement accuracy. In a typical plan, our study found that the ring surface dose can be 2.5 times larger than the point A prescription dose. EBT3 film can be used to monitor mucosal dose in brachytherapy treatments.

  15. Method and apparatus for ultrasonic characterization through the thickness direction of a moving web

    DOEpatents

    Jackson, Theodore; Hall, Maclin S.

    2001-01-01

    A method and apparatus for determining the caliper and/or the ultrasonic transit time through the thickness direction of a moving web of material using ultrasonic pulses generated by a rotatable wheel ultrasound apparatus. The apparatus includes a first liquid-filled tire and either a second liquid-filled tire forming a nip or a rotatable cylinder that supports a thin moving web of material such as a moving web of paper and forms a nip with the first liquid-filled tire. The components of ultrasonic transit time through the tires and fluid held within the tires may be resolved and separately employed to determine the separate contributions of the two tire thicknesses and the two fluid paths to the total path length that lies between two ultrasonic transducer surfaces contained within the tires in support of caliper measurements. The present invention provides the benefit of obtaining a transit time and caliper measurement at any point in time as a specimen passes through the nip of rotating tires and eliminates inaccuracies arising from nonuniform tire circumferential thickness by accurately retaining point-to-point specimen transit time and caliper variation information, rather than an average obtained through one or more tire rotations. Moreover, ultrasonic transit time through the thickness direction of a moving web may be determined independent of small variations in the wheel axle spacing, tire thickness, and liquid and tire temperatures.

  16. Singular trajectories: space-time domain topology of developing speckle fields

    NASA Astrophysics Data System (ADS)

    Vasil'ev, Vasiliy; Soskin, Marat S.

    2010-02-01

    It is shown that the space-time dynamics of optical singularities is fully described by singularity trajectories in the space-time domain, i.e., the evolution of transverse coordinates (x, y) in some fixed plane z0. The dynamics of generic developing speckle fields was realized experimentally by laser-induced scattering in a LiNbO3:Fe photorefractive crystal. The space-time trajectories of singularities can be divided topologically into two classes with essentially different scenarios and durations. Some of them (direct topological reactions) consist of the nucleation of a singularity pair at some (x, y, z0, t) point, their movement, and their annihilation. They take the form of closed loops with relatively short lifetimes. Another, much more probable class of trajectories is chain topological reactions. Each of these consists of a sequence of links, i.e., of singularity nucleation at various points (xi, yi, ti) followed by annihilation of both singularities at other space-time points with alien singularities of opposite topological indices. Their topology and properties are established. Chain topological reactions can stop at the borders of a developing speckle field or go to infinity. Examples of measured topological reactions of both types for optical vortices (polarization C points) in scalar (elliptically polarized) natural developing speckle fields are presented.

  17. Experimental and numerical investigation of feed-point parameters in a 3-D hyperthermia applicator using different FDTD models of feed networks.

    PubMed

    Nadobny, Jacek; Fähling, Horst; Hagmann, Mark J; Turner, Paul F; Wlodarczyk, Waldemar; Gellermann, Johanna M; Deuflhard, Peter; Wust, Peter

    2002-11-01

    Experimental and numerical methods were used to determine the coupling of energy in a multichannel three-dimensional hyperthermia applicator (SIGMA-Eye), consisting of 12 short dipole antenna pairs with stubs for impedance matching. The relationship between the amplitudes and phases of the forward waves from the amplifiers and the resulting amplitudes and phases at the antenna feed-points was determined in terms of interaction matrices. Three measuring methods were used: 1) a differential probe soldered directly at the antenna feed-points; 2) an E-field sensor placed near the feed-points; and 3) measurements at the outputs of the amplifier. The measured data were compared with finite-difference time-domain (FDTD) calculations made with three different models. The first model assumes that single antennas are fed independently. The second model simulates antenna pairs connected to the transmission lines. The measured data correlate best with the latter FDTD model, resulting in an improvement of more than 20% and 20 degrees (average difference in amplitudes and phases) when compared with the two simpler FDTD models.

  18. Impact of urine preservation methods and duration of storage on measured levels of environmental contaminants.

    PubMed

    Hoppin, Jane A; Ulmer, Ross; Calafat, Antonia M; Barr, Dana B; Baker, Susan V; Meltzer, Helle M; Rønningen, Kjersti S

    2006-01-01

    Collection of urine samples in human studies involves choices regarding shipping, sample preservation, and storage that may ultimately influence future analysis. As more studies collect and archive urine samples to evaluate environmental exposures in the future, we were interested in assessing the impact of urine preservative, storage temperature, and time since collection on nonpersistent contaminants in urine samples. In spiked urine samples stored in three types of urine vacutainers (no preservative, boric acid, and chlorhexidine), we measured five groups of contaminants to assess the levels of these analytes at five time points (0, 24, 48, and 72 h, and 1 week) and at two temperatures (room temperature and 4 degrees C). The target chemicals were bisphenol A (BPA), metabolites of organophosphate (OP), carbamate, and pyrethroid insecticides, chlorinated phenols, and phthalate monoesters, and were measured using five different mass spectrometry-based methods. Three samples were analyzed at each time point, with the exception of BPA. Repeated measures analysis of variance was used to evaluate effects of storage time, temperature, and preservative. Stability was summarized with percent change in mean concentration from time 0. In general, most analytes were stable under all conditions with changes in mean concentration over time, temperature, and preservative being generally less than 20%, with the exception of the OP metabolites in the presence of boric acid. The effect of storage temperature was less important than time since collection. The precision of the laboratory measurements was high allowing us to observe small differences, which may not be important when categorizing individuals into broader exposure groups.

  19. Moire technique utilization for detection and measurement of scoliosis

    NASA Astrophysics Data System (ADS)

    Zawieska, Dorota; Podlasiak, Piotr

    1993-02-01

    The moire projection method enables non-contact measurement of the shape or deformation of different surfaces and constructions by fringe pattern analysis. The fringe map acquisition of the whole surface of the object under test is one of the main advantages compared with 'point-by-point' methods. The computer analyzes the shape of the whole surface, and the user can then select different points or cross-sections of the object map. In this paper a few typical examples of the application of the moire technique to different medical problems are presented. We also present the equipment: the moire pattern analysis is done in real time using the phase-stepping method with a CCD camera.

  20. Freeze-drying process design by manometric temperature measurement: design of a smart freeze-dryer.

    PubMed

    Tang, Xiaolin Charlie; Nail, Steven L; Pikal, Michael J

    2005-04-01

    To develop a procedure based on manometric temperature measurement (MTM) and an expert system for good practices in freeze drying that will allow development of an optimized freeze-drying process during a single laboratory freeze-drying experiment. Freeze drying was performed with a FTS Dura-Stop/Dura-Top freeze dryer with the manometric temperature measurement software installed. Five percent solutions of glycine, sucrose, or mannitol with 2 ml to 4 ml fill in 5 ml vials were used, with all vials loaded on one shelf. Details of freezing, optimization of chamber pressure, target product temperature, and some aspects of secondary drying are determined by the expert system algorithms. MTM measurements were used to select the optimum shelf temperature, to determine drying end points, and to evaluate residual moisture content in real-time. MTM measurements were made at 1 hour or half-hour intervals during primary drying and secondary drying, with a data collection frequency of 4 points per second. The improved MTM equations were fit to pressure-time data generated by the MTM procedure using Microcal Origin software to obtain product temperature and dry layer resistance. Using heat and mass transfer theory, the MTM results were used to evaluate mass and heat transfer rates and to estimate the shelf temperature required to maintain the target product temperature. MTM product dry layer resistance is accurate until about two-thirds of total primary drying time is over, and the MTM product temperature is normally accurate almost to the end of primary drying provided that effective thermal shielding is used in the freeze-drying process. The primary drying times can be accurately estimated from mass transfer rates calculated very early in the run, and we find the target product temperature can be achieved and maintained with only a few adjustments of shelf temperature. 
The freeze-dryer overload conditions can be estimated by calculation of heat/mass flow at the target product temperature. It was found that the MTM results serve as an excellent indicator of the end point of primary drying. Further, we find that the rate of water desorption during secondary drying may be accurately measured by a variation of the basic MTM procedure. Thus, both the end point of secondary drying and real-time residual moisture may be obtained during secondary drying. Manometric temperature measurement and the expert system for good practices in freeze drying does allow development of an optimized freeze-drying process during a single laboratory freeze-drying experiment.

  1. Job Demands, Burnout, and Teamwork in Healthcare Professionals Working in a General Hospital that Was Analysed At Two Points in Time.

    PubMed

    Mijakoski, Dragan; Karadzhinska-Bislimovska, Jovanka; Stoleski, Sasho; Minov, Jordan; Atanasovska, Aneta; Bihorac, Elida

    2018-04-15

    The purpose of the paper was to assess job demands, burnout, and teamwork in healthcare professionals (HPs) working in a general hospital, analysed at two points in time with a time lag of three years. Time 1 respondents (N = 325) were HPs who participated during the first wave of data collection (2011). Time 2 respondents (N = 197) were HPs from the same hospital who responded at Time 2 (2014). Job demands, burnout, and teamwork were measured with the Hospital Experience Scale, the Maslach Burnout Inventory, and the Hospital Survey on Patient Safety Culture, respectively. Significantly higher scores of emotional exhaustion (21.03 vs. 15.37, t = 5.1, p < 0.001), depersonalization (4.48 vs. 2.75, t = 3.8, p < 0.001), as well as organizational (2.51 vs. 2.34, t = 2.38, p = 0.017), emotional (2.46 vs. 2.25, t = 3.68, p < 0.001), and cognitive (2.82 vs. 2.64, t = 2.68, p = 0.008) job demands were found at Time 2. Teamwork levels were similar at both points in time (Time 1 = 3.84 vs. Time 2 = 3.84, t = 0.043, p = 0.97). This longitudinal study revealed significantly higher mean values of emotional exhaustion and depersonalization in 2014, which could be explained by the significantly increased job demands between the analysed points in time.

  2. The Geodetic Monitoring of the Engineering Structure - A Practical Solution of the Problem in 3D Space

    NASA Astrophysics Data System (ADS)

    Filipiak-Kowszyk, Daria; Janowski, Artur; Kamiński, Waldemar; Makowska, Karolina; Szulwic, Jakub; Wilde, Krzysztof

    2016-12-01

    The study addresses issues concerning the automatic system designed for monitoring the movement of controlled points located on the roof covering of the Forest Opera in Sopot. It presents the calculation algorithm proposed by the authors, which takes into account the specific design and location of the test object. The high forest stand makes it difficult to use distant reference points. Hence the reference points used to study the stability of the measuring position are located on the ground elements of the six-meter-deep concrete foundations from which the steel arches supporting the roof covering (membrane) of the Forest Opera rise. The tacheometer used in the measurements is located in a glass body placed on a special platform attached to the steel arcs. Measurements of horizontal directions, vertical angles, and distances can additionally be subject to errors caused by the laser beam's passage through the glass. Dynamic changes of weather conditions, including temperature and pressure, also have a significant impact on the value of measurement errors, and thus on the accuracy of the final determinations represented by the relevant covariance matrices. The estimated coordinates of the reference points, controlled points, and tacheometer, along with the corresponding covariance matrices obtained from the calculations in the various epochs, are used to determine the significance of the acquired movements. Given the stability of the reference points, the algorithm also provides the ability to study changes in the position of the tacheometer over time, on the basis of measurements performed on these points.

  3. Electric-Field Instrument With Ac-Biased Corona Point

    NASA Technical Reports Server (NTRS)

    Markson, R.; Anderson, B.; Govaert, J.

    1993-01-01

    Measurements indicative of incipient lightning yield additional information. New instrument gives reliable readings. High-voltage ac bias applied to needle point through high-resistance capacitance network provides corona discharge at all times, enabling more-slowly-varying component of electrostatic potential of needle to come to equilibrium with surrounding air. High resistance of high-voltage coupling makes instrument insensitive to wind. Improved corona-point instrument expected to yield additional information assisting in safety-oriented forecasting of lightning.

  4. 40 CFR 1054.145 - Are there interim provisions that apply only for a limited time?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... scheduled emission-related maintenance falls within 10 hours of a test point, delay the maintenance until the engine reaches the test point. Measure emissions before and after performing the maintenance. Use... data under 40 CFR 1060.235(e) for your emission family. (j) Continued use of 40 CFR part 90 test...

  5. 40 CFR 1054.145 - Are there interim provisions that apply only for a limited time?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... scheduled emission-related maintenance falls within 10 hours of a test point, delay the maintenance until the engine reaches the test point. Measure emissions before and after performing the maintenance. Use... data under 40 CFR 1060.235(e) for your emission family. (j) Continued use of 40 CFR part 90 test...

  6. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... You may extend the sampling time to improve measurement accuracy of PM emissions, using good..., you may omit speed, torque, and power points from the duty-cycle regression statistics if the... mapped. (2) For variable-speed engines without low-speed governors, you may omit torque and power points...

  7. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... You may extend the sampling time to improve measurement accuracy of PM emissions, using good..., you may omit speed, torque, and power points from the duty-cycle regression statistics if the... mapped. (2) For variable-speed engines without low-speed governors, you may omit torque and power points...

  8. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    PubMed

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
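For readers unfamiliar with the sampler at the core of the approach, here is a minimal plain-HMC sketch on a toy one-dimensional posterior (a standard normal). This is not the authors' polymer reinterpretation or their multiple-time-scale integrator, just the basic leapfrog-plus-Metropolis loop that such schemes build on.

```python
import numpy as np

def grad_neg_logp(x):
    # gradient of the negative log-density of N(0, 1)
    return x

def leapfrog(x, p, eps, n_steps):
    # standard leapfrog integration of Hamiltonian dynamics
    p = p - 0.5 * eps * grad_neg_logp(x)
    for _ in range(n_steps - 1):
        x = x + eps * p
        p = p - eps * grad_neg_logp(x)
    x = x + eps * p
    p = p - 0.5 * eps * grad_neg_logp(x)
    return x, p

rng = np.random.default_rng(1)
x, samples = 0.0, []
for _ in range(5000):
    p0 = rng.normal()                       # resample momentum
    x_new, p_new = leapfrog(x, p0, eps=0.2, n_steps=10)
    # Metropolis accept/reject on the total energy H = -log p(x) + p^2/2
    dH = (0.5 * x_new**2 + 0.5 * p_new**2) - (0.5 * x**2 + 0.5 * p0**2)
    if rng.random() < np.exp(-dH):
        x = x_new
    samples.append(x)

samples = np.array(samples[500:])           # discard burn-in
print(samples.mean(), samples.var())        # near 0 and 1 for this target
```

In the paper's setting, x would be the high-dimensional "polymer" of system states between measurement points, and the scale separation the authors describe would let the fast harmonic modes be integrated analytically rather than by the generic leapfrog above.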

  9. Instantaneous Transfer Entropy for the Study of Cardiovascular and Cardiorespiratory Nonstationary Dynamics.

    PubMed

    Valenza, Gaetano; Faes, Luca; Citi, Luca; Orini, Michele; Barbieri, Riccardo

    2018-05-01

    Measures of transfer entropy (TE) quantify the direction and strength of coupling between two complex systems. Standard approaches assume stationarity of the observations, and therefore are unable to track time-varying changes in nonlinear information transfer with high temporal resolution. In this study, we aim to define and validate novel instantaneous measures of TE to provide an improved assessment of complex nonstationary cardiorespiratory interactions. We here propose a novel instantaneous point-process TE (ipTE) and validate its assessment as applied to cardiovascular and cardiorespiratory dynamics. In particular, heartbeat and respiratory dynamics are characterized through discrete time series, and modeled with probability density functions predicting the time of the next physiological event as a function of the past history. Likewise, nonstationary interactions between heartbeat and blood pressure dynamics are characterized as well. Furthermore, we propose a new measure of information transfer, the instantaneous point-process information transfer (ipInfTr), which is directly derived from point-process-based definitions of the Kolmogorov-Smirnov distance. Analysis on synthetic data, as well as on experimental data gathered from healthy subjects undergoing postural changes confirms that ipTE, as well as ipInfTr measures are able to dynamically track changes in physiological systems coupling. This novel approach opens new avenues in the study of hidden, transient, nonstationary physiological states involving multivariate autonomic dynamics in cardiovascular health and disease. The proposed method can also be tailored for the study of complex multisystem physiology (e.g., brain-heart or, more in general, brain-body interactions).

  10. Development of a Multi-Point Microwave Interferometry (MPMI) Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Specht, Paul Elliott; Cooper, Marcia A.; Jilek, Brook Anton

    2015-09-01

    A multi-point microwave interferometer (MPMI) concept was developed for non-invasively tracking a shock, reaction, or detonation front in energetic media. Initially, a single-point, heterodyne microwave interferometry capability was established. The design, construction, and verification of the single-point interferometer provided a knowledge base for the creation of the MPMI concept. The MPMI concept uses an electro-optic (EO) crystal to impart a time-varying phase lag onto a laser at the microwave frequency. Polarization optics converts this phase lag into an amplitude modulation, which is analyzed in a heterodyne interferometer to detect Doppler shifts in the microwave frequency. A version of the MPMI was constructed to experimentally measure the frequency of a microwave source through the EO modulation of a laser. The successful extraction of the microwave frequency proved the underlying physical concept of the MPMI design, and highlighted the challenges associated with the longer microwave wavelength. The frequency measurements made with the current equipment contained too much uncertainty for an accurate velocity measurement. Potential alterations to the current construction are presented to improve the quality of the measured signal and enable multiple accurate velocity measurements.

  11. Terahertz radar cross section measurements.

    PubMed

    Iwaszczuk, Krzysztof; Heiselberg, Henning; Jepsen, Peter Uhd

    2010-12-06

    We perform angle- and frequency-resolved radar cross section (RCS) measurements on objects at terahertz frequencies. Our RCS measurements are performed on a scale model aircraft of size 5-10 cm in polar and azimuthal configurations, and correspond closely to RCS measurements with conventional radar on full-size objects. The measurements are performed in a terahertz time-domain system with freely propagating terahertz pulses generated by tilted pulse front excitation of lithium niobate crystals and measured with sub-picosecond time resolution. The application of a time domain system provides ranging information and also allows for identification of scattering points such as weaponry attached to the aircraft. The shapes of the models and positions of reflecting parts are retrieved by the filtered back projection algorithm.

  12. Random phase detection in multidimensional NMR.

    PubMed

    Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C

    2011-10-04

    Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.
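The sign-ambiguity argument can be illustrated with a toy simulation. Assume, as a simplification, one real-valued sample per time point, m_k = cos(ω·t_k + φ_k), with the detector phase φ_k known: with a single fixed phase the frequencies +ω and -ω fit the data equally well, while a random per-point phase breaks the degeneracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, w = 64, 0.01, 2 * np.pi * 5.0    # 64 points, 10 ms spacing, 5 Hz signal
t = np.arange(n) * dt

def residual(m, phi, w_model):
    # sum-of-squares misfit of a candidate frequency against the samples
    return np.sum((m - np.cos(w_model * t + phi)) ** 2)

# Fixed phase: cos(w t) == cos(-w t), so +w and -w are indistinguishable.
phi_fixed = np.zeros(n)
m_fixed = np.cos(w * t + phi_fixed)
print(residual(m_fixed, phi_fixed, +w), residual(m_fixed, phi_fixed, -w))

# Random phase per point: only the true sign reproduces the data, because
# cos(w t + pi/2) = -sin(w t) differs from cos(-w t + pi/2) = sin(w t).
phi_rand = rng.choice([0.0, np.pi / 2], size=n)
m_rand = np.cos(w * t + phi_rand)
print(residual(m_rand, phi_rand, +w), residual(m_rand, phi_rand, -w))
```

Both residuals vanish in the fixed-phase case, whereas with randomized phases only the true sign gives a near-zero misfit, which is the mechanism that lets the authors drop the second receiver phase in each indirect dimension.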

  13. An automated approach to measuring child movement and location in the early childhood classroom.

    PubMed

    Irvin, Dwight W; Crutchfield, Stephen A; Greenwood, Charles R; Kearns, William D; Buzhardt, Jay

    2018-06-01

    Children's movement is an important issue in child development and outcome in early childhood research, intervention, and practice. Digital sensor technologies offer improvements in naturalistic movement measurement and analysis. We conducted validity and feasibility testing of a real-time, indoor mapping and location system (Ubisense, Inc.) within a preschool classroom. Real-time indoor mapping has several implications with respect to efficiently and conveniently: (a) determining the activity areas where children are spending the most and least time per day (e.g., music); and (b) mapping a focal child's atypical real-time movements (e.g., lapping behavior). We calibrated the accuracy of Ubisense point-by-point location estimates (i.e., X and Y coordinates) against laser rangefinder measurements using several stationary points and atypical movement patterns as reference standards. Our results indicate that activity areas occupied and atypical movement patterns could be plotted with an accuracy of 30.48 cm (1 ft) using a Ubisense transponder tag attached to the participating child's shirt. The accuracy parallels findings of other researchers employing Ubisense to study atypical movement patterns in individuals at risk for dementia in an assisted living facility. The feasibility of Ubisense was tested in an approximately 90-min assessment of two children, one typically developing and one with Down syndrome, during natural classroom activities, and the results proved positive. Implications for employing Ubisense in early childhood classrooms as a data-based decision-making tool to support children's development and its potential integration with other wearable sensor technologies are discussed.

  14. Comparison of computation time and image quality between full-parallax 4G-pixels CGHs calculated by the point cloud and polygon-based method

    NASA Astrophysics Data System (ADS)

    Nakatsuji, Noriaki; Matsushima, Kyoji

    2017-03-01

    Full-parallax high-definition CGHs composed of more than a billion pixels have so far been created only by the polygon-based method because of its high performance. However, GPUs now allow much faster CGH generation by the point-cloud method. In this paper, we measure the computation time of object fields for full-parallax high-definition CGHs, which are composed of 4 billion pixels and reconstruct the same scene, using the point-cloud method on a GPU and the polygon-based method on a CPU. In addition, we compare the optical and simulated reconstructions of CGHs created by these techniques to verify image quality.

  15. Terahertz time-domain spectroscopy of edible oils

    PubMed Central

    Valchev, Dimitar G.

    2017-01-01

    Chemical degradation of edible oils has been studied using conventional spectroscopic methods spanning the spectrum from ultraviolet to mid-IR. However, the possibility of morphological changes of oil molecules that can be detected at terahertz frequencies is beginning to receive some attention. Furthermore, the rapidly decreasing cost of this technology and its capability for convenient, in situ measurement of material properties, raises the possibility of monitoring oil during cooking and processing at production facilities, and more generally within the food industry. In this paper, we test the hypothesis that oil undergoes chemical and physical changes when heated above the smoke point, which can be detected in the 0.05–2 THz spectral range, measured using the conventional terahertz time-domain spectroscopy technique. The measurements demonstrate a null result in that there is no significant change in the spectra of terahertz optical parameters after heating above the smoke point for 5 min. PMID:28680681

  16. Terahertz time-domain spectroscopy of edible oils

    NASA Astrophysics Data System (ADS)

    Dinovitser, Alex; Valchev, Dimitar G.; Abbott, Derek

    2017-06-01

    Chemical degradation of edible oils has been studied using conventional spectroscopic methods spanning the spectrum from ultraviolet to mid-IR. However, the possibility of morphological changes of oil molecules that can be detected at terahertz frequencies is beginning to receive some attention. Furthermore, the rapidly decreasing cost of this technology and its capability for convenient, in situ measurement of material properties, raises the possibility of monitoring oil during cooking and processing at production facilities, and more generally within the food industry. In this paper, we test the hypothesis that oil undergoes chemical and physical changes when heated above the smoke point, which can be detected in the 0.05-2 THz spectral range, measured using the conventional terahertz time-domain spectroscopy technique. The measurements demonstrate a null result in that there is no significant change in the spectra of terahertz optical parameters after heating above the smoke point for 5 min.

  17. Terahertz time-domain spectroscopy of edible oils.

    PubMed

    Dinovitser, Alex; Valchev, Dimitar G; Abbott, Derek

    2017-06-01

    Chemical degradation of edible oils has been studied using conventional spectroscopic methods spanning the spectrum from ultraviolet to mid-IR. However, the possibility of morphological changes of oil molecules that can be detected at terahertz frequencies is beginning to receive some attention. Furthermore, the rapidly decreasing cost of this technology and its capability for convenient, in situ measurement of material properties, raises the possibility of monitoring oil during cooking and processing at production facilities, and more generally within the food industry. In this paper, we test the hypothesis that oil undergoes chemical and physical changes when heated above the smoke point, which can be detected in the 0.05-2 THz spectral range, measured using the conventional terahertz time-domain spectroscopy technique. The measurements demonstrate a null result in that there is no significant change in the spectra of terahertz optical parameters after heating above the smoke point for 5 min.

  18. Social Cognitive Theory Predictors of Exercise Behavior in Endometrial Cancer Survivors

    PubMed Central

    Basen-Engquist, Karen; Carmack, Cindy L.; Li, Yisheng; Brown, Jubilee; Jhingran, Anuja; Hughes, Daniel C.; Perkins, Heidi Y.; Scruggs, Stacie; Harrison, Carol; Baum, George; Bodurka, Diane C.; Waters, Andrew

    2014-01-01

    Objective This study evaluated whether social cognitive theory (SCT) variables, as measured by questionnaire and ecological momentary assessment (EMA), predicted exercise in endometrial cancer survivors. Methods One hundred post-treatment endometrial cancer survivors received a 6-month home-based exercise intervention. EMAs were conducted using hand-held computers for 10- to 12-day periods every 2 months. Participants rated morning self-efficacy and positive and negative outcome expectations using the computer, recorded exercise information in real time and at night, and wore accelerometers. At the midpoint of each assessment period participants completed SCT questionnaires. Using linear mixed-effects models, we tested whether morning SCT variables predicted minutes of exercise that day (Question 1) and whether exercise minutes at time point Tj could be predicted by questionnaire measures of SCT variables from time point Tj-1 (Question 2). Results Morning self-efficacy significantly predicted that day’s exercise minutes (p<.0001). Morning positive outcome expectations were also associated with exercise minutes (p=0.0003), but the relationship was attenuated when self-efficacy was included in the model (p=0.4032). Morning negative outcome expectations were not associated with exercise minutes. Of the questionnaire measures of SCT variables, only exercise self-efficacy predicted exercise at the next time point (p=0.003). Conclusions The consistency of the relationship between self-efficacy and exercise minutes over short (same day) and longer (Tj-1 to Tj) time periods provides support for a causal relationship. The strength of the relationship between morning self-efficacy and exercise minutes suggests that real-time interventions that target daily variation in self-efficacy may benefit endometrial cancer survivors’ exercise adherence. PMID:23437853

  19. TIME-INTERVAL MEASURING DEVICE

    DOEpatents

    Gross, J.E.

    1958-04-15

    An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach to time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible counting error caused by operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses by a fraction of the pulse period, so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
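    The averaging scheme in this patent can be illustrated with a small numeric sketch. This is a hypothetical model, not the patented circuit: a single gated count quantizes the interval to whole oscillator periods, while averaging counts taken at several sub-period delays recovers a finer estimate.

    ```python
    import math

    # Hypothetical sketch of the patent's averaging idea: count oscillator
    # pulses during the gate interval at several sub-period delays, then
    # average the totals and multiply by the pulse period.

    def pulse_count(interval, period, delay):
        """Number of oscillator edges falling inside [delay, delay + interval)."""
        # Edges occur at t = 0, period, 2*period, ...
        first = math.ceil(delay / period)              # first edge index >= delay
        last = math.ceil((delay + interval) / period)  # first edge index >= end
        return last - first

    def measure_interval(interval, period, stages=8):
        """Average counts over `stages` delays spanning one full period."""
        counts = [pulse_count(interval, period, k * period / stages)
                  for k in range(stages)]
        return sum(counts) / len(counts) * period

    # A single count resolves only whole periods; the average does better.
    print(measure_interval(interval=10.37, period=1.0))  # -> 10.375
    ```

    With 8 delay stages the quantization error drops from one period to roughly one-eighth of a period, which is the point of the successive-delay arrangement.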

  20. Handheld laser scanner automatic registration based on random coding

    NASA Astrophysics Data System (ADS)

    He, Lei; Yu, Chun-ping; Wang, Li

    2011-06-01

    Current research on laser scanners focuses mainly on static measurement; little use has been made of dynamic measurement, which is appropriate for a wider range of problems and situations. In particular, a traditional laser scanner must be kept stable while scanning, and coordinate transformation parameters must be measured between stations. To make scanning measurement intelligent and rapid, we developed a new registration algorithm for a handheld laser scanner based on the positions of targets, which realizes dynamic measurement of a handheld laser scanner without additional complex work. The two cameras on the laser scanner photograph artificial target points, designed with random coding, to obtain their three-dimensional coordinates. A set of matched points is then found among the control points to realize the orientation of the scanner by least-squares common-point transformation. After that, the two cameras can directly measure the laser point cloud on the surface of the object and obtain point cloud data in a unified coordinate system. There are three major contributions in this paper. First, a laser scanner based on binocular vision is designed with two cameras and one laser head, realizing real-time orientation of the laser scanner and improving efficiency. Second, coded markers are introduced to solve the data-matching problem, and a random coding method is proposed; compared with other coding methods, markers coded this way are simple to match and avoid shading the object. Finally, a recognition method for the coded markers based on distance recognition is proposed, which is more efficient. The method presented here can be used widely in measurements of objects from small to very large, such as vehicles and airplanes, strengthening intelligence and efficiency.
The results of experiments and theoretical analysis demonstrate that the proposed method realizes dynamic measurement with a handheld laser scanner and that the method is reasonable and efficient.

  1. Combining near-field scanning optical microscopy with spectral interferometry for local characterization of the optical electric field in photonic structures.

    PubMed

    Trägårdh, Johanna; Gersen, Henkjan

    2013-07-15

    We show how a combination of near-field scanning optical microscopy with crossed-beam spectral interferometry allows a local measurement of the spectral phase and amplitude of light propagating in photonic structures. The method requires measurement only at the single point of interest and at a reference point, to correct for the relative phase of the interferometer branches, to retrieve the dispersion properties of the sample. Furthermore, since the measurement is performed in the spectral domain, the spectral phase and amplitude can be retrieved from a single camera frame, here in 70 ms for a signal power of less than 100 pW, limited by the dynamic range of the 8-bit camera. The method is substantially faster than most previous time-resolved NSOM methods, which are based on time-domain interferometry, and the shorter acquisition also reduces problems with drift. We demonstrate how the method can be used to measure the refractive index and group velocity in a waveguide structure.

  2. Validation of accelerometer cut points in toddlers with and without cerebral palsy.

    PubMed

    Oftedal, Stina; Bell, Kristie L; Davies, Peter S W; Ware, Robert S; Boyd, Roslyn N

    2014-09-01

    The purpose of this study was to validate uni- and triaxial ActiGraph cut points for sedentary time in toddlers with cerebral palsy (CP) and typically developing children (TDC). Children (n = 103, 61 boys, mean age = 2 yr, SD = 6 months, range = 1 yr 6 months-3 yr) were divided into calibration (n = 65) and validation (n = 38) samples with separate analyses for TDC (n = 28) and ambulant (Gross Motor Function Classification System I-III, n = 51) and nonambulant (Gross Motor Function Classification System IV-V, n = 25) children with CP. An ActiGraph was worn during a videotaped assessment. Behavior was coded as sedentary or nonsedentary. Receiver operating characteristic-area under the curve analysis determined the classification accuracy of accelerometer data. Predictive validity was determined using the Bland-Altman analysis. Classification accuracy for uniaxial data was fair for the ambulatory CP and TDC group but poor for the nonambulatory CP group. Triaxial data showed good classification accuracy for all groups. The uniaxial ambulatory CP and TDC cut points significantly overestimated sedentary time (bias = -10.5%, 95% limits of agreement [LoA] = -30.2% to 9.1%; bias = -17.3%, 95% LoA = -44.3% to 8.3%). The triaxial ambulatory and nonambulatory CP and TDC cut points provided accurate group-level measures of sedentary time (bias = -1.5%, 95% LoA = -20% to 16.8%; bias = 2.1%, 95% LoA = -17.3% to 21.5%; bias = -5.1%, 95% LoA = -27.5% to 16.1%). Triaxial accelerometers provide useful group-level measures of sedentary time in children with CP across the spectrum of functional abilities and TDC. Uniaxial cut points are not recommended.
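    The Bland-Altman analysis used for predictive validity above can be sketched in a few lines. This is a generic illustration with made-up numbers, not the study's data; it assumes bias is computed as criterion (video-coded) minus accelerometer estimate, and the 95% limits of agreement are bias ± 1.96 × SD of the differences.

    ```python
    from statistics import mean, stdev

    # Minimal Bland-Altman sketch: bias and 95% limits of agreement (LoA)
    # between a criterion measure and an estimate. Data are hypothetical.

    def bland_altman(criterion, estimate):
        diffs = [c - e for c, e in zip(criterion, estimate)]
        bias = mean(diffs)
        sd = stdev(diffs)                  # sample SD of the differences
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Hypothetical percent-sedentary-time values for 5 children.
    video = [62.0, 55.5, 70.2, 48.9, 66.3]   # criterion (video coding)
    accel = [65.1, 58.0, 68.9, 55.2, 70.0]   # accelerometer cut-point estimate

    bias, (lo, hi) = bland_altman(video, accel)
    print(f"bias = {bias:.1f}%, 95% LoA = {lo:.1f}% to {hi:.1f}%")
    ```

    A negative bias under this sign convention means the cut points overestimate sedentary time, matching how the abstract reports its results.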

  3. Proton radiography and proton computed tomography based on time-resolved dose measurements

    NASA Astrophysics Data System (ADS)

    Testa, Mauro; Verburg, Joost M.; Rose, Mark; Min, Chul Hee; Tang, Shikui; Hassane Bentefour, El; Paganetti, Harald; Lu, Hsiao-Ming

    2013-11-01

    We present a proof of principle study of proton radiography and proton computed tomography (pCT) based on time-resolved dose measurements. We used a prototype, two-dimensional, diode-array detector capable of fast dose rate measurements, to acquire proton radiographic images expressed directly in water equivalent path length (WEPL). The technique is based on the time dependence of the dose distribution delivered by a proton beam traversing a range modulator wheel in passive scattering proton therapy systems. The dose rate produced in the medium by such a system is periodic and has a unique pattern in time at each point along the beam path and thus encodes the WEPL. By measuring the time-dose pattern at the point of interest, the WEPL to this point can be decoded. If one measures the time-dose patterns at points on a plane behind the patient for a beam with sufficient energy to penetrate the patient, the obtained 2D distribution of the WEPL forms an image. The technique requires only a 2D dosimeter array, and it uses only the clinical beam for a fraction of a second with negligible dose to the patient. We first evaluated the accuracy of the technique in determining the WEPL for static phantoms, aiming at beam range verification of the brain fields of medulloblastoma patients. Accurate beam ranges for these fields can significantly reduce the dose to the cranial skin of the patient and thus the risk of permanent alopecia. Second, we investigated the potential of the technique for real-time imaging of a moving phantom. Real-time tumor tracking by proton radiography could provide more accurate validation of tumor motion models due to the more sensitive dependence of the proton beam on tissue density compared with x-rays. Our radiographic technique is rapid (~100 ms) and simultaneous over the whole field; it can image mobile tumors without the interplay effect that is inherently challenging for methods based on pencil beams.
Third, we present the reconstructed pCT images of a cylindrical phantom containing inserts of different materials. As for all conventional pCT systems, the method illustrated in this work produces tomographic images that are potentially more accurate than x-ray CT in providing maps of proton relative stopping power (RSP) in the patient, without the need to convert x-ray Hounsfield units to proton RSP. All phantom tests produced reasonable results, given the currently limited spatial and time resolution of the prototype detector. The dose required to produce one radiographic image, with the current settings, is ~0.7 cGy. Finally, we discuss a series of techniques to improve the resolution and accuracy of radiographic and tomographic images for the future development of a full-scale detector.
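    The WEPL-decoding step described above (each depth has a characteristic periodic time-dose pattern, and the measured pattern is matched against known references) can be sketched conceptually. This is a toy, not the authors' implementation: the synthetic `pattern` function and the correlation-based lookup are assumptions made purely for illustration.

    ```python
    import math

    # Conceptual sketch of time-dose WEPL decoding: match the measured
    # periodic pattern at a detector point against a precomputed library
    # of reference patterns and pick the best-correlating WEPL.

    def pattern(wepl_mm, n=64):
        """Toy periodic dose-rate trace whose phase encodes WEPL (made up)."""
        return [max(0.0, math.sin(2 * math.pi * t / n + wepl_mm / 50.0)) ** 2
                for t in range(n)]

    def correlate(a, b):
        return sum(x * y for x, y in zip(a, b))

    def decode_wepl(measured, library):
        """library: {wepl_mm: reference pattern}; return best-matching WEPL."""
        return max(library, key=lambda w: correlate(measured, library[w]))

    library = {w: pattern(w) for w in range(0, 200, 10)}
    measured = pattern(130)        # pretend this came from the diode array
    print(decode_wepl(measured, library))  # -> 130
    ```

    In the real system the reference patterns come from the known range-modulator-wheel cycle rather than a synthetic formula, but the lookup-by-pattern-matching idea is the same.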

  4. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.

  5. Evaluation of salivary melatonin measurements for Dim Light Melatonin Onset calculations in patients with possible sleep-wake rhythm disorders.

    PubMed

    Keijzer, Henry; Smits, Marcel G; Peeters, Twan; Looman, Caspar W N; Endenburg, Silvia C; Gunnewiek, Jacqueline M T Klein

    2011-08-17

    Dim Light Melatonin Onset (DLMO) can be calculated from a 5-point partial melatonin curve in saliva collected at home. We retrospectively analyzed the patient melatonin measurements of the year 2008 to evaluate these DLMO calculations and studied the correlation between diary- or polysomnography (PSG)-derived sleep onset and DLMO. Patients completed an online questionnaire. If this questionnaire pointed to a possible Delayed Sleep Phase Disorder (DSPD), saliva collection devices were sent to the patient. Saliva was collected at 5 consecutive hourly time points. Melatonin concentration was measured with a radioimmunoassay, and DLMO was defined as the time at which the melatonin concentration in saliva reaches 4 pg/mL. Sleep onset time was retrieved from an online one-week sleep diary and/or one-night PSG. A total of 1848 diagnostic 5-point curves were obtained; DLMO could be determined in 76.2% (n=1408). DLMO significantly differed between age groups and increased with age. Pearson correlations (r) between DLMO and sleep onset measured with PSG or with a diary were 0.514 (p<0.001, n=54) and 0.653 (p=0.002, n=20), respectively. DLMO can be reliably measured in saliva that is conveniently collected at home. DLMO correlates moderately with sleep onset. Copyright © 2011 Elsevier B.V. All rights reserved.
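    Estimating DLMO from a 5-point curve amounts to finding where the concentration first crosses the 4 pg/mL threshold. A minimal sketch, assuming linear interpolation between the two bracketing samples (the threshold is from the abstract; the sample values and interpolation choice are hypothetical):

    ```python
    # Hedged sketch: estimate DLMO from a 5-point hourly salivary melatonin
    # curve by linearly interpolating the time at which the concentration
    # first reaches the 4 pg/mL threshold.

    def dlmo(times_h, conc_pg_ml, threshold=4.0):
        samples = list(zip(times_h, conc_pg_ml))
        for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
            if c0 < threshold <= c1:
                # linear interpolation between the bracketing samples
                return t0 + (threshold - c0) / (c1 - c0) * (t1 - t0)
        return None  # threshold never crossed; DLMO undetermined

    # Saliva collected at 5 consecutive hours (e.g. 19:00-23:00).
    times = [19, 20, 21, 22, 23]
    melatonin = [1.2, 2.1, 3.5, 6.3, 11.0]   # pg/mL, hypothetical
    print(dlmo(times, melatonin))            # between 21:00 and 22:00
    ```

    Returning `None` when the curve never reaches the threshold mirrors the abstract's finding that DLMO could be determined in only 76.2% of curves.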

  6. Modification and fixed-point analysis of a Kalman filter for orientation estimation based on 9D inertial measurement unit data.

    PubMed

    Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger

    2013-01-01

    A common approach to high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications that reduce the computational demands of a two-step minimum-order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and the fixed-point number representation are evaluated in detail on a variety of processing platforms, enabling on-board processing on wearable sensor platforms.
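    The bit-width exploration mentioned above can be illustrated generically. This is not the authors' filter: it is a sketch of the underlying idea, quantizing values to a signed Qm.n fixed-point format with saturation and observing how the worst-case round-off error shrinks as fractional bits are added.

    ```python
    # Generic fixed-point bit-width exploration sketch (not the paper's
    # Kalman filter): quantize to signed Qm.n and measure worst-case error.

    def to_fixed(x, frac_bits, int_bits=3):
        """Round x to Qm.n (m=int_bits, n=frac_bits) with saturation."""
        scale = 1 << frac_bits
        lo = -(1 << (int_bits + frac_bits))        # most negative raw code
        hi = (1 << (int_bits + frac_bits)) - 1     # most positive raw code
        raw = max(lo, min(hi, round(x * scale)))   # round-to-nearest + clamp
        return raw / scale

    signal = [0.7071, -1.2345, 3.9999, -2.5, 0.001]
    for n in (4, 8, 12):
        err = max(abs(x - to_fixed(x, n)) for x in signal)
        print(f"Q3.{n}: max error = {err:.6f}")
    ```

    Each extra fractional bit halves the quantization step, so the error column drops roughly by 16x between the Q3.4 and Q3.8 rows; a real exploration would run the whole filter at each width and compare orientation accuracy instead.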

  7. Real-Time Unsteady Loads Measurements Using Hot-Film Sensors

    NASA Technical Reports Server (NTRS)

    Mangalam, Arun S.; Moes, Timothy R.

    2004-01-01

    Several flight-critical aerodynamic problems such as buffet, flutter, stall, and wing rock are strongly affected or caused by abrupt changes in unsteady aerodynamic loads and moments. Advanced sensing and flow diagnostic techniques have made possible simultaneous identification and tracking, in real-time, of the critical surface, viscosity-related aerodynamic phenomena under both steady and unsteady flight conditions. The wind tunnel study reported here correlates surface hot-film measurements of leading edge stagnation point and separation point, with unsteady aerodynamic loads on a NACA 0015 airfoil. Lift predicted from the correlation model matches lift obtained from pressure sensors for an airfoil undergoing harmonic pitchup and pitchdown motions. An analytical model was developed that demonstrates expected stall trends for pitchup and pitchdown motions. This report demonstrates an ability to obtain unsteady aerodynamic loads in real-time, which could lead to advances in air vehicle safety, performance, ride-quality, control, and health management.

  8. 77 FR 71842 - Notice of Permit Applications Received Under the Antarctic Conservation Act of 1978 (Pub. L. 95-541)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-04

    ..., as well as 500 grams of Artemia salina cysts as food for krill. They plan to measure how fast DNA is... krill at a series of later time points. By measuring how much of the prey DNA is left in the krill guts after various amounts of time since feeding, they can calculate how quickly the DNA was digested...

  9. Failure to Apply the Flynn Correction in Death Penalty Litigation: Standard Practice of Today Maybe, but Certainly Malpractice of Tomorrow

    ERIC Educational Resources Information Center

    Reynolds, Cecil R.; Niland, John; Wright, John E.; Rosenn, Michal

    2010-01-01

    The Flynn Effect is a well documented phenomenon demonstrating score increases on IQ measures over time that average about 0.3 points per year. Normative adjustments to scores derived from IQ measures normed more than a year or so prior to the time of testing an individual have become controversial in several settings but especially so in matters…
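    The normative adjustment described in this abstract is simple arithmetic. A minimal sketch, using the roughly 0.3 points/year rate stated above; the specific score and years are hypothetical:

    ```python
    # Sketch of a Flynn correction: observed IQ scores inflate by roughly
    # 0.3 points per year since the test was normed, so the adjustment
    # subtracts 0.3 * years elapsed between norming and administration.

    def flynn_adjusted(observed_iq, norm_year, test_year, rate=0.3):
        return observed_iq - rate * (test_year - norm_year)

    # A score of 75 on a test normed 12 years before administration
    # is adjusted downward by 0.3 * 12 = 3.6 points.
    print(flynn_adjusted(75, norm_year=1998, test_year=2010))
    ```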

  10. Analysis of ground-measured and passive-microwave-derived snow depth variations in midwinter across the Northern Great Plains

    USGS Publications Warehouse

    Chang, A.T.C.; Kelly, R.E.J.; Josberger, E.G.; Armstrong, R.L.; Foster, J.L.; Mognard, N.M.

    2005-01-01

    Accurate estimation of snow mass is important for the characterization of the hydrological cycle at different space and time scales. For effective water resources management, accurate estimation of snow storage is needed. Conventionally, snow depth is measured at a point, and in order to monitor snow depth in a temporally and spatially comprehensive manner, optimum interpolation of the points is undertaken. Yet the spatial representation of point measurements at a basin or on a larger distance scale is uncertain. Spaceborne scanning sensors, which cover a wide swath and can provide rapid repeat global coverage, are ideally suited to augment the global snow information. Satellite-borne passive microwave sensors have been used to derive snow depth (SD) with some success. The uncertainties in point SD and areal SD of natural snowpacks need to be understood if comparisons are to be made between a point SD measurement and satellite SD. In this paper three issues are addressed relating satellite derivation of SD and ground measurements of SD in the northern Great Plains of the United States from 1988 to 1997. First, it is shown that in comparing samples of ground-measured point SD data with satellite-derived 25 × 25 km² pixels of SD from the Defense Meteorological Satellite Program Special Sensor Microwave Imager, there are significant differences in yearly SD values even though the accumulated datasets showed similarities. Second, from variogram analysis, the spatial variability of SD from each dataset was comparable. Third, for a sampling grid cell domain of 1° × 1° in the study terrain, 10 distributed snow depth measurements per cell are required to produce a sampling error of 5 cm or better. This study has important implications for validating SD derivations from satellite microwave observations. © 2005 American Meteorological Society.
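    The sampling-density result above follows the familiar standard-error argument: the error of an n-point cell mean scales as SD/√n. A back-of-envelope sketch; the within-cell SD value below is an assumption chosen for illustration, not a number from the paper:

    ```python
    import math

    # If snow-depth point measurements within a grid cell have spatial
    # standard deviation sd_cm, the standard error of an n-point mean is
    # sd_cm / sqrt(n). Solve for the n that meets a target sampling error.

    def samples_needed(sd_cm, target_se_cm):
        return math.ceil((sd_cm / target_se_cm) ** 2)

    # With an assumed within-cell SD of ~15.8 cm, about 10 points bring
    # the sampling error down to the 5 cm level cited in the abstract.
    print(samples_needed(sd_cm=15.8, target_se_cm=5.0))  # -> 10
    ```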

  11. Photogrammetry research for FAST eleven-meter reflector panel surface shape measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Rongwei; Zhu, Lichun; Li, Weimin; Hu, Jingwen; Zhai, Xuebing

    2010-10-01

    In order to design and manufacture the measuring equipment for the Five-hundred-meter Aperture Spherical Radio Telescope (FAST) active reflector, three measurement tasks were addressed: measurement of each reflector panel's surface shape, static measurement of the nodes of the whole neutral spherical network, and real-time dynamic measurement of the cable network's deformation. In the implementation of FAST, reflector panel surface-shape inspection was completed before installation of the eleven-meter reflector panels. A binocular vision system was constructed based on the binocular stereo vision method of machine vision, and the eleven-meter reflector panel surface shape was measured photogrammetrically. The cameras were calibrated with feature points: under a linear camera model, a lighting spot array was used as the calibration pattern, and the intrinsic and extrinsic parameters were acquired. Images collected by the two cameras were digitally processed and analyzed; feature points were extracted with a characteristic-point detection algorithm and matched using the epipolar constraint method. Three-dimensional coordinates of the feature points were reconstructed, and the reflector panel surface shape was established by curve and surface fitting. The surface-shape error of the reflector panel was calculated to realize automatic measurement of the panel surface shape. The results show that the unit reflector panel surface inspection accuracy was 2.30 mm, within the standard deviation tolerance of 5.00 mm. Compared with the required reflector panel machining precision, photogrammetry offers good precision and operational feasibility for eleven-meter reflector panel surface-shape measurement for FAST.

  12. Development of a computer-assisted system for model-based condylar position analysis (E-CPM).

    PubMed

    Ahlers, M O; Jakstat, H

    2009-01-01

    Condylar position analysis is a measuring method for the three-dimensional quantitative acquisition of the position of the mandible under different conditions or at different points in time. Originally, the measurement was done on a model, using special mechanical condylar position measuring instruments, and on a research scale with mechanical-electronic measuring instruments. Today, as an alternative, it is possible to take measurements with electronic measuring instruments applied directly to the patient. The computerization of imaging has also facilitated condylar position measurement by means of three-dimensional data records obtained by imaging examination methods, which has been used in connection with the simulation and quantification of surgical results. However, the comparative measurement of the condylar position at different points in time has so far not been possible to the required degree. The present authors therefore designed an electronic measuring instrument that allows acquisition of the condylar position in clinical routine and, through data storage and the use of precise alignment systems, facilitates comparison with measurements from later examinations. This measuring instrument was implemented on the basis of existing components from the Reference CPM and Cadiax Compact articulator and registration systems (Gamma Dental, Klosterneuburg, Austria) as well as the matching CMD3D evaluation software (dentaConcept, Hamburg).

  13. Measuring the development of inhibitory control: The challenge of heterotypic continuity

    PubMed Central

    Petersen, Isaac T.; Hoyniak, Caroline P.; McQuillan, Maureen E.; Bates, John E.; Staples, Angela D.

    2016-01-01

    Inhibitory control is thought to demonstrate heterotypic continuity, in other words, continuity in its purpose or function but changes in its behavioral manifestation over time. This creates major methodological challenges for studying the development of inhibitory control in childhood including construct validity, developmental appropriateness and sensitivity of measures, and longitudinal factorial invariance. We meta-analyzed 198 studies using measures of inhibitory control, a key aspect of self-regulation, to estimate age ranges of usefulness for each measure. The inhibitory control measures showed limited age ranges of usefulness owing to ceiling/floor effects. Tasks were useful, on average, for a developmental span of less than 3 years. This suggests that measuring inhibitory control over longer spans of development may require use of different measures at different time points, seeking to measure heterotypic continuity. We suggest ways to study the development of inhibitory control, with overlapping measurement in a structural equation modeling framework and tests of longitudinal factorial or measurement invariance. However, as valuable as this would be for the area, we also point out that establishing longitudinal factorial invariance is neither sufficient nor necessary for examining developmental change. Any study of developmental change should be guided by theory and construct validity, aiming toward a better empirical and theoretical approach to the selection and combination of measures. PMID:27346906

  14. Instantaneous electron beam emittance measurement system based on the optical transition radiation principle

    NASA Astrophysics Data System (ADS)

    Jiang, Xiao-Guo; Wang, Yuan; Zhang, Kai-Zhi; Yang, Guo-Jun; Shi, Jin-Shui; Deng, Jian-Jun; Li, Jin

    2014-01-01

    An instantaneous electron beam emittance measurement system based on the optical transition radiation principle and a double-imaging optical method has been set up. It is mainly used in testing the intense electron beam produced by a linear induction accelerator. The system has two notable features. The first concerns the system synchronization signal, which is triggered by the trailing edge of the main output waveform from a Blumlein switch. A synchronization precision of about 1 ns between the electron beam and the image capture time can be reached in this way, so that the electron beam emittance at the desired time point can be obtained. The other advantage of the system is the ability to obtain the beam spot and beam divergence in one measurement, so that the calculated result is the true beam emittance at that time, which characterizes the electron beam condition. It proves to be a powerful beam diagnostic method for the 2.5 kA, 18.5 MeV, 90 ns (FWHM) electron beam pulse produced by Dragon I. The temporal resolution of the instantaneous measurement is about 3 ns, and the system can measure the beam emittance at any time point during one beam pulse. A series of beam emittances has been obtained for Dragon I. The typical beam spot is 9.0 mm (FWHM) in diameter and the corresponding beam divergence is about 10.5 mrad.

  15. Communication: translational Brownian motion for particles of arbitrary shape.

    PubMed

    Cichocki, Bogdan; Ekiel-Jeżewska, Maria L; Wajnryb, Eligiusz

    2012-02-21

    A single Brownian particle of arbitrary shape is considered. The time-dependent translational mean square displacement W(t) of a reference point at this particle is evaluated from the Smoluchowski equation. It is shown that at times larger than the characteristic time scale of the rotational Brownian relaxation, the slope of W(t) becomes independent of the choice of a reference point. Moreover, it is proved that in the long-time limit, the slope of W(t) is determined uniquely by the trace of the translational-translational mobility matrix μ(tt) evaluated with respect to the hydrodynamic center of mobility. The result is applicable to dynamic light scattering measurements, which indeed are performed in the long-time limit. © 2012 American Institute of Physics

  16. Theoretical repeatability assessment without repetitive measurements in gradient high-performance liquid chromatography.

    PubMed

    Kotani, Akira; Tsutsumi, Risa; Shoji, Asaki; Hayashi, Yuzuru; Kusu, Fumiyo; Yamamoto, Kazuhiro; Hakamata, Hideki

    2016-07-08

    This paper puts forward a time- and material-saving method for evaluating the repeatability of area measurements in gradient HPLC with UV detection (HPLC-UV), based on the function of mutual information (FUMI) theory, which can theoretically provide the measurement standard deviation (SD) and detection limits from the stochastic properties of baseline noise, with no recourse to repetitive measurements of real samples. The chromatographic determination of terbinafine hydrochloride and enalapril maleate is taken as an example. The best choice of the number of noise data points, inevitable for the theoretical evaluation, is shown to be 512 points (10.24 s at a 50 points/s sampling rate of the A/D converter). Coupled with the relative SD (RSD) of sample-injection variability in the instrument used, the theoretical evaluation is proved to give area measurement RSDs identical to those estimated by the usual repetitive method (n = 6) over a wide concentration range of the analytes, within the 95% confidence intervals of the latter RSDs. The FUMI theory is not a statistical one, but the "statistical" reliability of its SD estimates (n = 1) is observed to be as high as that attained by thirty-one repetitive measurements of the same samples (n = 31). Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Effect of diffusion time on liver DWI: an experimental study of normal and fibrotic livers.

    PubMed

    Zhou, Iris Y; Gao, Darwin S; Chow, April M; Fan, Shujuan; Cheung, Matthew M; Ling, Changchun; Liu, Xiaobing; Cao, Peng; Guo, Hua; Man, Kwan; Wu, Ed X

    2014-11-01

    To investigate whether diffusion time (Δ) affects diffusion measurements in the liver and their sensitivity in detecting fibrosis. Liver fibrosis was induced in Sprague-Dawley rats (n = 12) by carbon tetrachloride (CCl(4)) injections. Diffusion-weighted MRI was performed longitudinally during 8-week CCl(4) administration at 7 Tesla (T) using single-shot stimulated-echo EPI with five b-values (0 to 1000 s/mm(2)) and three Δs. The apparent diffusion coefficient (ADC) and true diffusion coefficient (D(true)) were calculated using all five b-values and the large b-values, respectively. ADC and D(true) decreased with Δ for both normal and fibrotic liver at each time point. ADC and D(true) also generally decreased with time after CCl(4) insult. The reductions in D(true) between 2-week and 4-week CCl(4) insult were larger than the ADC reductions at all Δs. At each time point, D(true) measured with long Δ (200 ms) detected the largest changes among the three Δs examined. Histology revealed gradual collagen deposition and the presence of intracellular fat vacuoles after CCl(4) insult. Our results demonstrated Δ-dependent diffusion measurements, indicating restricted diffusion in both normal and fibrotic liver. D(true) measured with long Δ acted as a more sensitive index of the pathological alterations in liver microstructure during fibrogenesis. Copyright © 2013 Wiley Periodicals, Inc.
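
    The ADC described above is conventionally obtained from a mono-exponential fit of signal versus b-value. A minimal sketch of such a fit; the b-values match those quoted in the abstract, but the log-linear fitting routine and the synthetic noise-free signal are illustrative assumptions, not the study's actual processing pipeline.

```python
import numpy as np

def fit_adc(b_values, signals):
    """Estimate the apparent diffusion coefficient (ADC) by a log-linear
    least-squares fit of the mono-exponential model S(b) = S0 * exp(-b * ADC)."""
    b = np.asarray(b_values, dtype=float)
    s = np.asarray(signals, dtype=float)
    # ln S = ln S0 - b * ADC, i.e. a straight line in b
    slope, intercept = np.polyfit(b, np.log(s), 1)
    return -slope, np.exp(intercept)  # (ADC, S0)

# Synthetic, noise-free example with ADC = 1.1e-3 mm^2/s and S0 = 1000
b_vals = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])  # s/mm^2
true_adc = 1.1e-3
sig = 1000.0 * np.exp(-b_vals * true_adc)
adc, s0 = fit_adc(b_vals, sig)
```

    In practice, D(true) would be estimated the same way but restricted to the large b-values, where perfusion effects are suppressed.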

  18. A New Method to Compare Statistical Tree Growth Curves: The PL-GMANOVA Model and Its Application with Dendrochronological Data

    PubMed Central

    Ricker, Martin; Peña Ramírez, Víctor M.; von Rosen, Dietrich

    2014-01-01

    Growth curves are monotonically increasing functions fitted to repeated measurements of the same subjects over time. The classical growth curve model in the statistical literature is the Generalized Multivariate Analysis of Variance (GMANOVA) model. In order to model the tree trunk radius (r) over time (t) of trees on different sites, GMANOVA is combined here with the adapted PL regression model Q = A·T+E, where A is the initial relative growth to be estimated and E is an error term for each tree and time point; the transformed variables Q and T involve the exponential integral Ei[-b·r], with TPR being the turning point radius in a sigmoid curve, and an estimated calibrating time-radius point. Advantages of the approach are that growth rates can be compared among growth curves with different turning point radiuses and different starting points, hidden outliers are easily detectable, the method is statistically robust, and heteroscedasticity of the residuals among time points is allowed. The model was implemented with dendrochronological data of 235 Pinus montezumae trees on ten Mexican volcano sites to calculate comparison intervals for the estimated initial relative growth A. One site (at the Popocatépetl volcano) stood out, with A being 3.9 times the value of the site with the slowest-growing trees. Calculating variance components for the initial relative growth, 34% of the growth variation was found among sites, 31% among trees, and 35% over time. Without the Popocatépetl site, the numbers changed to 7%, 42%, and 51%. Further explanation of differences in growth would need to focus on factors that vary within sites and over time. PMID:25402427

  19. Building a Lego wall: Sequential action selection.

    PubMed

    Arnold, Amy; Wing, Alan M; Rotshtein, Pia

    2017-05-01

    The present study draws together two distinct lines of enquiry into the selection and control of sequential action: motor sequence production and action selection in everyday tasks. Participants were asked to build 2 different Lego walls. The walls were designed to have hierarchical structures with shared and dissociated color and spatial components. Participants built 1 wall at a time, under low and high cognitive load. Selection times for correctly completed trials were measured using 3-dimensional motion tracking. The paradigm enabled precise measurement of the timing of actions, while using real objects to create an end product. The experiment demonstrated that action selection was slowed at decision boundary points, relative to boundaries where no between-wall decision was required. Decision points also affected selection time prior to the actual selection window. Dual-task conditions increased selection errors. Errors mostly occurred at boundaries between chunks, especially when these required decisions. The data support hierarchical control of sequenced behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Infant feeding attitudes of women in the United Kingdom during pregnancy and after birth.

    PubMed

    Wilkins, Carol; Ryan, Kath; Green, Josephine; Thomas, Peter

    2012-11-01

    To address the recognized low rates of breastfeeding in the United Kingdom (UK), a change in fundamental attitudes toward infant feeding might be required. This paper reports an exploration of women's attitudes toward breastfeeding at different time points in the perinatal period, undertaken as part of a larger breastfeeding evaluation study. The aim was to measure women's infant feeding attitudes at 3 stages during the perinatal period to see whether, on average, they differed over time. Using the 17-item Iowa Infant Feeding Attitudes Scale (IIFAS), this cross-sectional study measured the infant feeding attitudes of 866 UK women at 3 perinatal stages (20 and 35 weeks antenatally and 6 weeks postpartum). Mean IIFAS scores were very similar, showing that discrete groups of women at different time points in pregnancy and postpartum appear to hold the same attitudes toward infant feeding. Scores predominantly lay in the mid-range at each of the time points, which may indicate women's indecision or ambivalence about infant feeding during pregnancy and the postpartum period. Action must be undertaken to target the majority of women with mid-range scores, whose ambivalence may respond positively to intervention programs. The challenge is to understand what would be appropriate and acceptable to this vulnerable group of women.

  1. Brewer spectrometer total ozone column measurements in Sodankylä

    NASA Astrophysics Data System (ADS)

    Karppinen, Tomi; Lakkala, Kaisa; Karhu, Juha M.; Heikkinen, Pauli; Kivi, Rigel; Kyrö, Esko

    2016-06-01

    Brewer total ozone column measurements started in Sodankylä in May 1988, 9 months after the signing of the Montreal Protocol. The Brewer instrument has been well maintained and frequently calibrated since then to produce a high-quality ozone time series now spanning more than 25 years. The data have been uniformly reprocessed for 1988-2014. The quality of the data has been assured by automatic data rejection rules as well as by manual checking. Daily mean values calculated from the highest-quality direct sun measurements are available 77 % of the time, with up to 75 measurements per day on clear days. Zenith sky measurements fill another 14 % of the time series, and winter months are sparsely covered by moon measurements. The time series provides information for surveying the evolution of the Arctic ozone layer and can be used as a reference point for assessing other total ozone column measurement practices.

  2. Time-of-travel data for Nebraska streams, 1968 to 1977

    USGS Publications Warehouse

    Petri, L.R.

    1984-01-01

    This report documents the results of 10 time-of-travel studies, using dye-tracer methods, conducted on five streams in Nebraska during the period 1968 to 1977. Streams involved in the studies were the North Platte, North Loup, Elkhorn, and Big Blue Rivers and Salt Creek. Rhodamine WT dye in a 20 percent solution was used as the tracer for all 10 time-of-travel studies. Water samples were collected at several points below each injection site. Concentrations of dye in the samples were measured by determining the fluorescence of each sample and comparing that value to fluorescence-concentration curves. Stream discharges were measured before and during each study. Results of each time-of-travel study are shown in two tables and on graphs. The first table shows water discharge at injection and sampling sites, distance between sites, and time and rate of travel of the dye between sites. The second table provides descriptions of study sites, amounts of dye injected into the streams, actual sampling times, and actual concentrations of dye detected. The graphs for each time-of-travel study provide indications of changing travel rates between sampling sites, information on the length of dye clouds, and times of dye passage past given points. (USGS)
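
    The conversion from measured fluorescence to dye concentration via a calibration curve is, in essence, interpolation. A minimal sketch with hypothetical calibration standards; the actual calibration data from the USGS studies are not given in the report, so all numbers here are illustrative.

```python
import numpy as np

def concentration_from_fluorescence(readings, cal_fluor, cal_conc):
    """Convert fluorometer readings to dye concentrations by piecewise-linear
    interpolation along a fluorescence-concentration calibration curve.
    cal_fluor must be sorted in ascending order."""
    return np.interp(readings, cal_fluor, cal_conc)

# Hypothetical standards: fluorescence units -> rhodamine WT in ug/L
cal_f = np.array([0.0, 10.0, 25.0, 60.0, 130.0])
cal_c = np.array([0.0, 1.0, 2.5, 6.0, 13.5])

samples = np.array([5.0, 40.0, 100.0])  # raw fluorometer readings
conc = concentration_from_fluorescence(samples, cal_f, cal_c)
```

    A curve of concentration versus arrival time at each sampling site then yields the travel times and dye-cloud lengths tabulated in the report.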

  3. Eleventh International Laser Radar Conference, Wisconsin University-Madison, 21-25 June 1982.

    DTIC Science & Technology

    1982-06-01

    …an aircraft altitude, I_if(x) is the intensity of the IF beat signal in the sky at the point x, I is the laser power, γ is the albedo of the ground surface… the aircraft flight path; 2) minimize degradation or power loss to the input/output path; 3) provide variable scan time points at rates up to .25… water particles. A lidar measurement at a specific point, therefore, is not necessarily representative of the entire globe. This will be discussed with…

  4. A General Approach to Defining Latent Growth Components

    ERIC Educational Resources Information Center

    Mayer, Axel; Steyer, Rolf; Mueller, Horst

    2012-01-01

    We present a 3-step approach to defining latent growth components. In the first step, a measurement model with at least 2 indicators for each time point is formulated to identify measurement error variances and obtain latent variables that are purged from measurement error. In the second step, we use contrast matrices to define the latent growth…

  5. Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling

    ERIC Educational Resources Information Center

    Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.

    2012-01-01

    The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining of the…

  6. Ice nucleation in nature: supercooling point (SCP) measurements and the role of heterogeneous nucleation.

    PubMed

    Wilson, P W; Heneghan, A F; Haymet, A D J

    2003-02-01

    In biological systems, nucleation of ice from a supercooled aqueous solution is a stochastic process and always heterogeneous. The average time any solution may remain supercooled is determined only by the degree of supercooling and the heterogeneous nucleation sites it encounters. Here we summarize the many and varied definitions of the so-called "supercooling point," also called the "temperature of crystallization" and the "nucleation temperature," and exhibit the natural, inherent width associated with this quantity. We describe a new method for accurate determination of the supercooling point, which takes into account the inherent statistical fluctuations of the value. We show further that many measurements on a single unchanging sample are required to make a statistically valid measure of the supercooling point. This raises an interesting difficulty in circumstances where such repeat measurements are inconvenient or impossible, for example in live-organism experiments. We also discuss the effect of solutes on this temperature of nucleation. Existing data appear to show that various solute species decrease the nucleation temperature somewhat more than the equivalent melting point depression. For non-ionic solutes the species appears not to be a significant factor, whereas for ions the species does affect the extent of the decrease in nucleation temperature.

  7. Effect of polymerization method and fabrication method on occlusal vertical dimension and occlusal contacts of complete-arch prosthesis.

    PubMed

    Lima, Ana Paula Barbosa; Vitti, Rafael Pino; Amaral, Marina; Neves, Ana Christina Claro; da Silva Concilio, Lais Regiane

    2018-04-01

    This study evaluated the dimensional stability of complete-arch prostheses processed by the conventional method in a water bath or by microwave energy and polymerized by two different curing cycles. Forty maxillary complete-arch prostheses were randomly divided into four groups (n = 10): MW1, acrylic resin cured by one microwave cycle; MW2, acrylic resin cured by two microwave cycles; WB1, conventional acrylic resin polymerized using one curing cycle in a water bath; WB2, conventional acrylic resin polymerized using two curing cycles in a water bath. To evaluate dimensional stability, occlusal vertical dimension (OVD) and the area of contact points were measured at two times: before and after polymerization. A digital caliper was used for OVD measurement. Occlusal contact registration strips were used between maxillary and mandibular dentures to measure the contact points. The images were measured using the software IpWin32, and the differences before and after polymerization were calculated. The data were statistically analyzed using one-way ANOVA and the Tukey test (α = .05). The results demonstrated statistically significant differences in OVD between the measurement times for all groups. MW1 presented the highest OVD values, while WB2 had the lowest OVD values (P < .05). No statistical differences were found for the area of contact points among the groups (P = .7150). The conventional acrylic resin polymerized using two curing cycles in a water bath led to less difference in OVD of the complete-arch prosthesis.

  8. Energy resolution of pulsed neutron beam provided by the ANNRI beamline at the J-PARC/MLF

    NASA Astrophysics Data System (ADS)

    Kino, K.; Furusaka, M.; Hiraga, F.; Kamiyama, T.; Kiyanagi, Y.; Furutaka, K.; Goko, S.; Hara, K. Y.; Harada, H.; Harada, M.; Hirose, K.; Kai, T.; Kimura, A.; Kin, T.; Kitatani, F.; Koizumi, M.; Maekawa, F.; Meigo, S.; Nakamura, S.; Ooi, M.; Ohta, M.; Oshima, M.; Toh, Y.; Igashira, M.; Katabuchi, T.; Mizumoto, M.; Hori, J.

    2014-02-01

    We studied the energy resolution of the pulsed neutron beam of the Accurate Neutron-Nucleus Reaction Measurement Instrument (ANNRI) at the Japan Proton Accelerator Research Complex/Materials and Life Science Experimental Facility (J-PARC/MLF). A simulation in the energy region from 0.7 meV to 1 MeV was performed and measurements were made at thermal (0.76-62 meV) and epithermal energies (4.8-410 eV). The neutron energy resolution of ANNRI determined by the time-of-flight technique depends on the time structure of the neutron pulse. We obtained the neutron energy resolution as a function of the neutron energy by the simulation in the two operation modes of the neutron source: double- and single-bunch modes. In double-bunch mode, the resolution deteriorates above about 10 eV because the time structure of the neutron pulse splits into two peaks. The time structures at 13 energy points from measurements in the thermal energy region agree with those of the simulation. In the epithermal energy region, the time structures at 17 energy points were obtained from measurements and agree with those of the simulation. The FWHM values of the time structures by the simulation and measurements were found to be almost consistent. In the single-bunch mode, the energy resolution is better than about 1% between 1 meV and 10 keV at a neutron source operation of 17.5 kW. These results confirm the energy resolution of the pulsed neutron beam produced by the ANNRI beamline.
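
    The time-of-flight technique mentioned above converts a measured flight time over a known path into neutron energy via E = ½m(L/t)². A small sketch of the kinematics; the 27.9 m flight path used in the example is an assumed illustrative value, not a figure taken from the abstract.

```python
import math

M_N = 1.67492749804e-27  # neutron rest mass, kg
EV = 1.602176634e-19     # joules per electron volt

def neutron_energy_ev(flight_path_m, tof_s):
    """Neutron kinetic energy (eV) from time of flight: E = (1/2) m (L/t)^2."""
    v = flight_path_m / tof_s
    return 0.5 * M_N * v * v / EV

def tof_for_energy(flight_path_m, energy_ev):
    """Inverse relation: time of flight (s) for a given kinetic energy (eV)."""
    v = math.sqrt(2.0 * energy_ev * EV / M_N)
    return flight_path_m / v

# Round trip for a thermal neutron (25.3 meV) over an assumed 27.9 m path
t = tof_for_energy(27.9, 0.0253)
e = neutron_energy_ev(27.9, t)
```

    Because E scales as 1/t², a pulse width Δt in the source time structure translates into an energy resolution ΔE/E ≈ 2Δt/t, which is why the split double-bunch pulse degrades the resolution above about 10 eV.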

  9. New Observations of Subarcsecond Photospheric Bright Points

    NASA Technical Reports Server (NTRS)

    Berger, T. E.; Schrijver, C. J.; Shine, R. A.; Tarbell, T. D.; Title, A. M.; Scharmer, G.

    1995-01-01

    We have used an interference filter centered at 4305 A within the bandhead of the CH radical (the 'G band') and real-time image selection at the Swedish Vacuum Solar Telescope on La Palma to produce very high contrast images of subarcsecond photospheric bright points at all locations on the solar disk. During the 6 day period of 1993 September 15-20 we observed active region NOAA 7581 from its appearance on the East limb to a near-disk-center position on September 20. A total of 1804 bright points were selected for analysis from the disk center image using feature extraction image processing techniques. The measured Full Width at Half Maximum (FWHM) distribution of the bright points in the image is lognormal with a modal value of 220 km (0".30) and an average value of 250 km (0".35). The smallest measured bright point diameter is 120 km (0".17) and the largest is 600 km (0".69). Approximately 60% of the measured bright points are circular (eccentricity approx. 1.0), the average eccentricity is 1.5, and the maximum eccentricity, corresponding to filigree in the image, is 6.5. The peak contrast of the measured bright points is normally distributed. The contrast distribution variance is much greater than the measurement accuracy, indicating a large spread in intrinsic bright-point contrast. When referenced to an averaged 'quiet-Sun' area in the image, the modal contrast is 29% and the maximum value is 75%; when referenced to an average intergranular lane brightness in the image, the distribution has a modal value of 61% and a maximum of 119%. The bin-averaged contrast of G-band bright points is constant across the entire measured size range. The measured area of the bright points, corrected for pixelation and selection effects, covers about 1.8% of the total image area. Large pores and micropores occupy an additional 2% of the image area, implying a total area fraction of magnetic proxy features in the image of 3.8%.
We discuss the implications of this area fraction measurement in the context of previously published measurements which show that typical active region plage has a magnetic filling factor on the order of 10% or greater. The results suggest that in the active region analyzed here, less than 50% of the small-scale magnetic flux tubes are demarcated by visible proxies such as bright points or pores.

  10. Closure measures for coarse-graining of the tent map.

    PubMed

    Pfante, Oliver; Olbrich, Eckehard; Bertschinger, Nils; Ay, Nihat; Jost, Jürgen

    2014-03-01

    We quantify the relationship between the dynamics of a time-discrete dynamical system, the tent map T and its iterations T(m), and the induced dynamics at a symbolic level in information-theoretical terms. The symbol dynamics, given by a binary string s of length m, is obtained by choosing a partition point [Formula: see text] and lumping together the points [Formula: see text] such that T(i)(x) concurs with the (i-1)th digit of s, i.e., we apply a so-called threshold crossing technique. Interpreting the original dynamics and the symbolic one as different levels, this allows us to quantitatively evaluate and compare various closure measures that have been proposed for identifying emergent macro-levels of a dynamical system. In particular, we can see how these measures depend on the choice of the partition point α. As the main benefit of this new information-theoretical approach, we obtain all Markov partitions with full support of the time-discrete dynamical system induced by the tent map. Furthermore, we derive an example of a Markovian symbol dynamics whose underlying partition is not Markovian at all, and even a whole hierarchy of Markovian symbol dynamics.
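
    The threshold-crossing symbolization described above can be sketched directly: iterate the tent map and record, at each step, on which side of the partition point α the orbit lies. This is one common indexing convention, written as an illustration; it is not the authors' code, and the paper's exact digit indexing may differ.

```python
def tent(x):
    """Tent map T on [0, 1]: x -> 2x for x < 1/2, else 2(1 - x)."""
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def symbol_string(x, alpha, m):
    """Threshold-crossing symbolization: the i-th binary digit records
    whether the i-th iterate of x lies at or above the partition point alpha."""
    bits = []
    for _ in range(m):
        bits.append('1' if x >= alpha else '0')
        x = tent(x)
    return ''.join(bits)

# Orbit of x = 0.3 under T: 0.3, 0.6, 0.8, 0.4, 0.8 -> string '01101' for alpha = 0.5
s = symbol_string(0.3, 0.5, 5)
```

    For α = 1/2 the resulting partition is the classic generating (Markov) partition of the tent map; other choices of α generally are not, which is what the closure measures in the abstract are designed to detect.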

  11. Examining Differences in Patterns of Sensory and Motor Recovery After Stroke With Robotics.

    PubMed

    Semrau, Jennifer A; Herter, Troy M; Scott, Stephen H; Dukelow, Sean P

    2015-12-01

    Developing a better understanding of the trajectory and timing of stroke recovery is critical for developing patient-centered rehabilitation approaches. Here, we quantified proprioceptive and motor deficits using robotic technology during the first 6 months post stroke to characterize timing and patterns in recovery. We also make comparisons of robotic assessments to traditional clinical measures. One hundred sixteen subjects with unilateral stroke were studied at 4 time points: 1, 6, 12, and 26 weeks post stroke. Subjects performed robotic assessments of proprioceptive (position sense and kinesthesia) and motor function (unilateral reaching task and bimanual object hit task), as well as several clinical measures (Functional Independence Measure, Purdue Pegboard, and Chedoke-McMaster Stroke Assessment). One week post stroke, many subjects displayed proprioceptive (48% position sense and 68% kinesthesia) and motor impairments (80% unilateral reaching and 85% bilateral movement). Interindividual recovery on robotic measures was highly variable. However, we characterized recovery as early (normal by 6 weeks post stroke), late (normal by 26 weeks post stroke), or incomplete (impaired at 26 weeks post stroke). Proprioceptive and motor recovery often followed different timelines. Across all time points, robotic measures were correlated with clinical measures. These results highlight the need for more sensitive, targeted identification of sensory and motor deficits to optimize rehabilitation after stroke. Furthermore, the trajectory of recovery for some individuals with mild to moderate stroke may be much longer than previously considered. © 2015 American Heart Association, Inc.

  12. Income or living standard and health in Germany: different ways of measurement of relative poverty with regard to self-rated health.

    PubMed

    Pfoertner, Timo-Kolja; Andress, Hans-Juergen; Janssen, Christian

    2011-08-01

    The current study introduces the living standard concept as an alternative approach to measuring poverty and compares its explanatory power with that of an income-based poverty measure with regard to the subjective health status of the German population. Analyses are based on the German Socio-Economic Panel (2001, 2003, and 2005) and use binary logistic regressions of poor subjective health status with regard to each poverty condition, its duration, and its causal influence from a previous time point. To assess the discriminative power of the two poverty indicators, the indicators were first considered separately in regression models and subsequently both were included simultaneously. The analyses reveal a stronger poverty-health relationship for the living standard indicator. An inadequate living standard in 2005, longer spells of an inadequate living standard between 2001, 2003, and 2005, and an inadequate living standard at a previous time point are all significantly more strongly associated with poor subjective health than income poverty is. Our results challenge conventional measurements of the relationship between poverty and health, which probably has been underestimated by income measures so far.

  13. Hysteresis of Soil Point Water Retention Functions Determined by Neutron Radiography

    NASA Astrophysics Data System (ADS)

    Perfect, E.; Kang, M.; Bilheux, H.; Willis, K. J.; Horita, J.; Warren, J.; Cheng, C.

    2010-12-01

    Soil point water retention functions are needed for modeling flow and transport in partially-saturated porous media. Such functions are usually determined by inverse modeling of average water retention data measured experimentally on columns of finite length. However, the resulting functions are subject to the appropriateness of the chosen model, as well as the initial and boundary condition assumptions employed. Soil point water retention functions are rarely measured directly and when they are the focus is invariably on the main drying branch. Previous direct measurement methods include time domain reflectometry and gamma beam attenuation. Here we report direct measurements of the main wetting and drying branches of the point water retention function using neutron radiography. The measurements were performed on a coarse sand (Flint #13) packed into 2.6 cm diameter x 4 cm long aluminum cylinders at the NIST BT-2 (50 μm resolution) and ORNL-HFIR CG1D (70 μm resolution) imaging beamlines. The sand columns were saturated with water and then drained and rewetted under quasi-equilibrium conditions using a hanging water column setup. 2048 x 2048 pixel images of the transmitted flux of neutrons through the column were acquired at each imposed suction (~10-15 suction values per experiment). Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert’s law in conjunction with beam hardening and geometric corrections. The pixel rows were averaged and combined with information on the known distribution of suctions within the column to give 2048 point drying and wetting functions for each experiment. The point functions exhibited pronounced hysteresis and varied with column height, possibly due to differences in porosity caused by the packing procedure employed. 
Predicted point functions, extracted from the hanging water column volumetric data using the TrueCell inverse modeling procedure, showed very good agreement with the range of point functions measured within the column using neutron radiography. Extension of these experiments to 3-dimensions using neutron tomography is planned.
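
    The per-pixel conversion from transmitted neutron intensity to volumetric water content via the Beer-Lambert law can be sketched as follows. The attenuation coefficient, path length, and intensities are hypothetical illustration values, and the beam-hardening and geometric corrections applied in the study are omitted.

```python
import numpy as np

def water_content(i_wet, i_dry, mu_w, path_cm):
    """Per-pixel volumetric water content from neutron transmission images,
    via the Beer-Lambert law: I_wet = I_dry * exp(-mu_w * theta * d),
    where mu_w is the attenuation coefficient of water (cm^-1) and d the
    beam path length through the sample (cm)."""
    return -np.log(i_wet / i_dry) / (mu_w * path_cm)

# Hypothetical values: mu_w = 3.5 cm^-1, 2.6 cm beam path through the column
mu_w, path_cm = 3.5, 2.6
true_theta = np.array([0.10, 0.25])            # volumetric water contents
i_dry = np.array([900.0, 880.0])               # dry-column transmitted counts
i_wet = i_dry * np.exp(-mu_w * true_theta * path_cm)

theta = water_content(i_wet, i_dry, mu_w, path_cm)
```

    Averaging such per-pixel values along each pixel row, and pairing them with the known suction at that height in the hanging water column, gives the point retention functions described in the abstract.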

  14. Trajectories of Affective States in Adolescent Hockey Players: Turning Point and Motivational Antecedents

    ERIC Educational Resources Information Center

    Gaudreau, Patrick; Amiot, Catherine E.; Vallerand, Robert J.

    2009-01-01

    This study examined longitudinal trajectories of positive and negative affective states with a sample of 265 adolescent elite hockey players followed across 3 measurement points during the 1st 11 weeks of a season. Latent class growth modeling, incorporating a time-varying covariate and a series of predictors assessed at the onset of the season,…

  15. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series

    PubMed Central

    2011-01-01

    Background Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Results Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. Conclusions The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html. PMID:21851598

  16. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series.

    PubMed

    Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp

    2011-08-18

    Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
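
    The original DTW algorithm that DTW-S extends computes an optimal monotonic alignment between two series by dynamic programming. A minimal, distance-only sketch is given below; DTW-S itself adds interpolated time points, open end points, and significance estimation, none of which are shown here.

```python
import math

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series,
    using the absolute difference as the local cost."""
    n, m = len(a), len(b)
    # d[i][j]: minimal cumulative cost aligning a[:i] with b[:j]
    d = [[math.inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# A series and a time-shifted copy of it align almost perfectly under DTW,
# which is the property DTW-S exploits to estimate time shifts between genes
x = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
y = [0.0, 0.0, 1.0, 2.0, 3.0, 2.0, 1.0]
d_xy = dtw_distance(x, y)
```

    The residual cost comes only from the unmatched tail, which is why DTW-S relaxes the requirement that the series end points be aligned.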

  17. ICESAT Laser Altimeter Pointing, Ranging and Timing Calibration from Integrated Residual Analysis

    NASA Technical Reports Server (NTRS)

    Luthcke, Scott B.; Rowlands, D. D.; Carabajal, C. C.; Harding, D. H.; Bufton, J. L.; Williams, T. A.

    2003-01-01

    On January 12, 2003 the Ice, Cloud and land Elevation Satellite (ICESat) was successfully placed into orbit. The ICESat mission carries the Geoscience Laser Altimeter System (GLAS), which has a primary measurement of short-pulse laser ranging to the Earth's surface at 1064 nm wavelength at a rate of 40 pulses per second. The instrument has collected precise elevation measurements of the ice sheets, sea ice roughness and thickness, ocean and land surface elevations and surface reflectivity. The accurate geolocation of GLAS's surface returns, the spots from which the laser energy reflects on the Earth's surface, is a critical issue in the scientific application of these data. Pointing, ranging, timing and orbit errors must be compensated for to accurately geolocate the laser altimeter surface returns. Towards this end, the laser range observations can be fully exploited in an integrated residual analysis to accurately calibrate these geolocation/instrument parameters. ICESat laser altimeter data have been simultaneously processed as direct altimetry from ocean sweeps along with dynamic crossovers in order to calibrate pointing, ranging and timing. The calibration methodology and current calibration results are discussed along with future efforts.

  18. Assessment of opacimeter calibration according to International Standard Organization 10155.

    PubMed

    Gomes, J F

    2001-01-01

    This paper compares the calibration method for opacimeters issued by the International Standard Organization (ISO) 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, corresponding to three operational measurements for each dust emission range within the stack. The procedure is assessed by comparison with previous calibration methods for opacimeters using only two operational measurements from a set of measurements made at stacks from pulp mills. The results show that even if the international standard for opacimeter calibration requires that the calibration curve be obtained using 3 x 3 points, a calibration curve derived using 3 points could be, at times, acceptable in statistical terms, provided that the amplitude of individual measurements is low.
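
    The comparison at issue, a full 3 x 3-point calibration curve versus one point per emission range, can be sketched with an ordinary least-squares fit. The paired readings below are invented for illustration, and the linear opacimeter-vs-gravimetric model is an assumption:

```python
import numpy as np

# Hypothetical paired data: opacimeter signal (x) vs manual gravimetric
# dust concentration in mg/m^3 (y), three readings in each of three
# emission ranges, with small measurement scatter.
x9 = np.array([0.10, 0.12, 0.11, 0.45, 0.48, 0.46, 0.80, 0.83, 0.81])
y9 = np.array([ 20,   25,   21,   90,   97,   91,  160,  167,  161])

def calibration_line(x, y):
    """Least-squares calibration curve y = a*x + b, plus R^2."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    r2 = 1 - resid.var() / y.var()
    return a, b, r2

a9, b9, r2_9 = calibration_line(x9, y9)            # full 3 x 3-point curve
a3, b3, r2_3 = calibration_line(x9[::3], y9[::3])  # one point per range
```

    When the instrument response is close to linear and the scatter is low, the two fitted slopes agree closely, which is the statistical acceptability the abstract describes.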

  19. Time loss injuries compromise team success in Elite Rugby Union: a 7-year prospective study.

    PubMed

    Williams, Sean; Trewartha, Grant; Kemp, Simon P T; Brooks, John H M; Fuller, Colin W; Taylor, Aileen E; Cross, Matthew J; Stokes, Keith A

    2016-06-01

    A negative association between injuries and team success has been demonstrated in professional football, but the nature of this association in elite Rugby Union teams is currently unclear. To assess the association between injury burden measures and team success outcomes within professional Rugby Union teams. A seven-season prospective cohort design was used to record all time-loss injuries incurred by English Premiership players. Associations between team success measures (league points tally and Eurorugby Club Ranking (ECR)) and injury measures (injury burden and injury days per team-match) were modelled, both within (changes from season to season) and between (differences averaged over all seasons) teams. Thresholds for the smallest worthwhile change in league points tally and ECR were 3 points and 2.6%, respectively. Data from a total of 1462 players within 15 Premiership teams were included in the analysis. We found clear negative associations between injury measures and team success (70-100% likelihood), with the exception of between-team differences for injury days per team-match and ECR, which was unclear. A reduction in injury burden of 42 days (90% CI 30 to 70) per 1000 player hours (22% of mean injury burden) was associated with the smallest worthwhile change in league points tally. Clear negative associations were found between injury measures and team success, and moderate reductions in injury burden may have worthwhile effects on competition outcomes for professional Rugby Union teams. These findings may be useful when communicating the value of injury prevention initiatives within this elite sport setting. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  20. Development of a short version of the modified Yale Preoperative Anxiety Scale.

    PubMed

    Jenkins, Brooke N; Fortier, Michelle A; Kaplan, Sherrie H; Mayes, Linda C; Kain, Zeev N

    2014-09-01

    The modified Yale Preoperative Anxiety Scale (mYPAS) is the current "criterion standard" for assessing child anxiety during induction of anesthesia and has been used in >100 studies. This observational instrument covers 5 items and is typically administered at 4 perioperative time points. Application of this complex instrument in busy operating room (OR) settings, however, presents a challenge. In this investigation, we examined whether the instrument could be modified and made easier to use in OR settings. This study used qualitative methods, principal component analyses, Cronbach αs, and effect sizes to create the mYPAS-Short Form (mYPAS-SF) and reduce time points of assessment. Data were obtained from multiple patients (N = 3798; mean age = 5.63 years) who were recruited in previous investigations using the mYPAS over the past 15 years. After qualitative analysis, the "use of parent" item was eliminated due to content overlap with other items. The reduced item set accounted for 82% or more of the variance in child anxiety and produced a Cronbach α of at least 0.92. To reduce the number of time points of assessment, a minimum Cohen d effect size criterion of 0.48 change in mYPAS score across time points was used. This led to eliminating the walk to the OR and entrance to the OR time points. Reducing the mYPAS to 4 items, creating the mYPAS-SF that can be administered at 2 time points, retained the accuracy of the measure while allowing the instrument to be more easily used in clinical research settings.
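
    Cronbach's α, the internal-consistency statistic reported above, has a simple closed form based on item and total-score variances. A minimal sketch with an invented score matrix (not mYPAS data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_var / total_var)

# four respondents scored on three items that rank them consistently
scores = np.array([[1, 2, 1], [2, 3, 2], [4, 4, 5], [5, 5, 4]])
alpha = cronbach_alpha(scores)  # high internal consistency
```

    Dropping an item that overlaps heavily with the others, as done for the "use of parent" item, typically leaves α nearly unchanged, which is the property the abstract exploits.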

  1. Inequality measures for wealth distribution: Population vs individuals perspective

    NASA Astrophysics Data System (ADS)

    Pascoal, R.; Rocha, H.

    2018-02-01

    Economic inequality is, nowadays, frequently perceived as following a growing trend with impact on political and religious agendas. However, there is a wide range of inequality measures, each pointing to a possibly different degree of inequality. Furthermore, regardless of the measure used, it only acknowledges the momentary population inequality, failing to capture individuals' evolution over time. In this paper, several inequality measures were analyzed in order to compare the typical single-time-instant degree of wealth inequality (population perspective) to the one obtained from the individuals' wealth mean over several time instants (individuals perspective). The proposed generalization of a simple additive model, for the limited-time average of individuals' wealth, allows us to verify that the typically used inequality measures for a given snapshot instant of the population significantly overestimate the individuals' wealth inequality over time. Moreover, this overestimation is more extreme for the ratios than for the indices analyzed.
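
    The population-vs-individuals contrast can be illustrated with one common inequality index, the Gini coefficient, computed on a single snapshot and on each individual's time-averaged wealth. The panel below is synthetic (i.i.d. log-normal wealth), not the authors' additive model:

```python
import numpy as np

def gini(w):
    """Gini index of a non-negative wealth vector (sorted-rank identity)."""
    w = np.sort(np.asarray(w, dtype=float))
    n = len(w)
    return (2 * np.arange(1, n + 1) - n - 1).dot(w) / (n * w.sum())

rng = np.random.default_rng(0)
# synthetic panel: 1000 individuals observed at 50 time instants,
# each with fluctuating log-normal wealth around a common scale
wealth = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 50))

g_snapshot = gini(wealth[:, 0])           # population perspective (one instant)
g_individual = gini(wealth.mean(axis=1))  # individuals' time-averaged wealth
# time-averaging smooths transient fluctuations, so g_individual < g_snapshot
```

    The gap between the two values is the overestimation effect the paper describes: a snapshot conflates persistent differences with transient luck.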

  2. Timing Affects Measurement of Portal Pressure Gradient After Placement of Transjugular Intrahepatic Portosystemic Shunts in Patients With Portal Hypertension.

    PubMed

    Silva-Junior, Gilberto; Turon, Fanny; Baiges, Anna; Cerda, Eira; García-Criado, Ángeles; Blasi, Annabel; Torres, Ferran; Hernandez-Gea, Virginia; Bosch, Jaume; Garcia-Pagan, Juan Carlos

    2017-05-01

    A reduction in portal pressure gradient (PPG) to <12 mm Hg after placement of a transjugular intrahepatic portosystemic shunt (TIPS) correlates with the absence of further bleeding or ascites at follow-up examinations of patients with cirrhosis. The PPG is usually measured immediately after placement of the TIPS, when different circumstances can affect PPG values, which could affect determination of risk for decompensation. We investigated variations in PPG measurements collected at different time points after TIPS, aiming to identify a time point after which PPG values were best maintained. We performed a retrospective study of 155 consecutive patients with severe complications of portal hypertension who received placement of TIPS from January 2008 through October 2015; patients were followed until March 2016. We compared PPG values measured at different time points and under different conditions: immediately after placement of TIPS (immediate PPG); at least 24 hours after TIPS placement, in hemodynamically stable patients without sedation (early PPG); and again 1 month after TIPS placement (late PPG). The immediate PPG differed significantly from the early PPG, regardless of whether the TIPS was placed using general anesthesia (8.5 ± 3.5 mm Hg vs 10 ± 3.5 mm Hg; P = .015) or deep sedation (12 ± 4 mm Hg vs 10.5 ± 4 mm Hg; P <.001). In considering the 12 mm Hg threshold, concordance between immediate PPG and early PPG values was poor. However, there was no significant difference between mean early PPG and late PPG values (8.5 ± 2.5 mm Hg vs 8 ± 3 mm Hg), or between the proportions of patients with early PPG vs late PPG values below the 12 mm Hg threshold. Maintenance of a PPG value <12 mm Hg during the follow-up period was associated with a lower risk of recurrent or de novo variceal bleeding or ascites (hazard ratio, 0.11; 95% confidence interval, 0.04-0.27; P < .001).
In a retrospective study of patients with PPG values measured at different time points after TIPS placement, we found measurements of PPG in awake, hemodynamically stable patients at least 24 hours after TIPS to be the best maintained values. Our findings support the concept that PPG value <12 mm Hg after TIPS placement is associated with reduced risk of bleeding and ascites. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.

  3. Automatic computation of 2D cardiac measurements from B-mode echocardiography

    NASA Astrophysics Data System (ADS)

    Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin

    2012-03-01

    We propose a robust and fully automatic algorithm which computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies which learn from training images and expert annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the measurement landmark points by utilizing the structure of the left ventricle, including the mitral and aortic valves. It then employs a pseudo-anatomic M-mode image, generated by accumulating line images from the 2D parasternal long-axis view over time, to refine the measurement landmark points. Experimental results on a large volume of data show that the algorithm runs fast and achieves robustness comparable to an expert.

  4. The Feeding Practices and Structure Questionnaire (FPSQ-28): A parsimonious version validated for longitudinal use from 2 to 5 years.

    PubMed

    Jansen, Elena; Williams, Kate E; Mallan, Kimberley M; Nicholson, Jan M; Daniels, Lynne A

    2016-05-01

    Prospective studies and intervention evaluations that examine change over time assume that measurement tools measure the same construct at each occasion. In the area of parent-child feeding practices, longitudinal measurement properties of the questionnaires used are rarely verified. To ascertain that measured change in feeding practices reflects true change rather than change in the assessment, structure, or conceptualisation of the constructs over time, this study examined longitudinal measurement invariance of the Feeding Practices and Structure Questionnaire (FPSQ) subscales (9 constructs; 40 items) across 3 time points. Mothers participating in the NOURISH trial reported their feeding practices when children were aged 2, 3.7, and 5 years (N = 404). Confirmatory Factor Analysis (CFA) within a structural equation modelling framework was used. Comparisons of initial cross-sectional models followed by longitudinal modelling of subscales resulted in the removal of 12 items, including two redundant or poorly performing subscales. The resulting 28-item FPSQ-28 comprised 7 multi-item subscales: Reward for Behaviour, Reward for Eating, Persuasive Feeding, Overt Restriction, Covert Restriction, Structured Meal Setting and Structured Meal Timing. All subscales showed good fit over 3 time points and each displayed at least partial scalar (thresholds equal) longitudinal measurement invariance. We recommend the use of a separate single item indicator to assess the family meal setting. This is the first study to examine longitudinal measurement invariance in a feeding practices questionnaire. Invariance was established, indicating that the subscales of the shortened FPSQ-28 can be used with mothers to validly assess change in 7 feeding constructs in samples of children aged 2-5 years. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Reliability of fitness tests using methods and time periods common in sport and occupational management.

    PubMed

    Burnstein, Bryan D; Steele, Russell J; Shrier, Ian

    2011-01-01

    Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Cohort study. Eighteen different Cirque du Soleil shows. Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. We calculated the intraclass correlation coefficient (ICC) and limits of agreement between baseline and each time point and the ICC over all 4 time points combined. Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limit-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators.
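
    The ICC comes in several variants; a one-way random-effects form (ICC(1)) is sketched below as one plausible reading of the analysis — the authors may have used a different variant — with an invented score matrix:

```python
import numpy as np

def icc_oneway(scores):
    """ICC(1): one-way random-effects intraclass correlation for an
    (n_subjects, k_sessions) matrix of test scores, via ANOVA mean squares."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    # between-subject and within-subject mean squares
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)
    msw = ((scores - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

    Scores that are perfectly repeatable within each performer give ICC = 1; session-to-session noise pushes the value toward 0, which is how the ICC > 0.6 acceptability threshold above separates reliable from unreliable tests.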

  6. Kepler Commissioning Data for Measurement of the Pixel Response Function and Focal Plane Geometry

    NASA Technical Reports Server (NTRS)

    Bryson, Stephen T.

    2017-01-01

    This document describes the Kepler PRF/FPG data release. This data was taken on April 27-29, 2009, during Kepler's commissioning phase in order to measure the pixel response function (PRF) (Bryson et al., 2010a) and focal plane geometry (FPG) (Tenenbaum and Jenkins, 2010). 33,424 stellar targets were observed for 243 long cadences, each with a duration of 14.7 minutes (half the duration of a normal Kepler long cadence). During these 243 cadences the Kepler photometer was moved, pointing in a dither pattern to facilitate PRF measurement. Motion occurred during the even cadences (second, fourth, etc.), with the telescope in stable fine point at each pointing in the dither pattern during the odd cadences (first, third, etc.). The first and last cadences were at the center of the dither pattern. Motion cadences are included in this release, but they do not contain any data. For details on how this data was used to derive the Kepler PRF and FPG models, see Bryson et al. (2010a) and Tenenbaum and Jenkins (2010). Descriptions of the PRF and FPG models are found in Thompson et al. (2016), §2.3.5.17 and §2.3.5.16 respectively. The data in this release can be used to recompute the Kepler PRF and FPG. Such a reconstruction, however, would not reflect measured seasonal changes in the PRF described in Van Cleve et al. (2016b), §5.2. The dither pattern is shown in Figure 1. The crosses show the commanded pointings and the circles show the measured pointings. Measured pointings are different from the commanded pointings due to the early state of calibration of the fine guidance sensors during commissioning (Van Cleve et al., 2016a). The measured offsets from the center of the pattern are given in RA/Dec offsets and pixel offsets in Table 1. The order of the offsets was randomized during data collection to avoid time-dependent systematics.

  7. Two Point Space-Time Correlation of Density Fluctuations Measured in High Velocity Free Jets

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta

    2006-01-01

    Two-point space-time correlations of air density fluctuations in unheated, fully-expanded free jets at Mach numbers M(sub j) = 0.95, 1.4, and 1.8 were measured using a Rayleigh scattering based diagnostic technique. The molecular scattered light from two small probe volumes of 1.03 mm length was measured for a completely non-intrusive means of determining the turbulent density fluctuations. The time series of density fluctuations were analyzed to estimate the integral length scale L in a moving frame of reference and the convective Mach number M(sub c) at different narrow Strouhal frequency (St) bands. It was observed that M(sub c) and the normalized moving frame length scale L*St/D, where D is the jet diameter, increased with Strouhal frequency before leveling off at the highest resolved frequency. Significant differences were observed between data obtained from the lip shear layer and the centerline of the jet. The wave number frequency transform of the correlation data demonstrated a progressive increase in the radiative part of turbulence fluctuations with increasing jet Mach number.
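
    The convection velocity implicit in such two-point measurements is commonly extracted from the lag of the cross-correlation peak between the two probe signals. A synthetic sketch under an idealized frozen-convection assumption (all parameter values invented for illustration, not from the experiment):

```python
import numpy as np

# Synthetic two-probe experiment: a signal convects past probe 1 and
# reaches probe 2, a distance dx downstream, after a delay dx / u_c.
rng = np.random.default_rng(1)
fs = 10_000.0                           # sampling rate, Hz
dx = 0.050                              # probe separation, m
u_c_true = 100.0                        # true convection velocity, m/s
delay = int(round(dx / u_c_true * fs))  # convective delay in samples

s = rng.normal(size=4000)
s = np.convolve(s, np.ones(5) / 5, mode="same")  # correlate nearby samples
probe1 = s[delay:]                      # upstream probe
probe2 = s[:-delay]                     # downstream probe = delayed copy

# cross-correlate and locate the peak lag
corr = np.correlate(probe2, probe1, mode="full")
lag = int(np.argmax(corr)) - (len(probe1) - 1)
u_c_est = dx / (lag / fs)               # estimated convection velocity, m/s
```

    Repeating this in narrow Strouhal-frequency bands (band-pass filtering both signals first) yields the frequency-dependent convective Mach number reported above.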

  8. Conceptual recurrence plots: revealing patterns in human discourse.

    PubMed

    Angus, Daniel; Smith, Andrew; Wiles, Janet

    2012-06-01

    Human discourse contains a rich mixture of conceptual information. Visualization of the global and local patterns within this data stream is a complex and challenging problem. Recurrence plots are an information visualization technique that can reveal trends and features in complex time series data. The recurrence plot technique works by measuring the similarity of points in a time series to all other points in the same time series and plotting the results in two dimensions. Previous studies have applied recurrence plotting techniques to textual data; however, these approaches plot recurrence using term-based similarity rather than conceptual similarity of the text. We introduce conceptual recurrence plots, which use a model of language to measure similarity between pairs of text utterances, and the similarity of all utterances is measured and displayed. In this paper, we explore how the descriptive power of the recurrence plotting technique can be used to discover patterns of interaction across a series of conversation transcripts. The results suggest that the conceptual recurrence plotting technique is a useful tool for exploring the structure of human discourse.
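
    The plotting technique described above — measuring the similarity of every point in a series to every other point — reduces to a thresholded pairwise-distance matrix. A minimal sketch for a scalar series; the conceptual variant would replace the scalar distance with a model-based similarity between utterances:

```python
import numpy as np

def recurrence_matrix(series, eps):
    """Binary recurrence matrix R[i, j] = 1 where points i and j of the
    series lie within eps of each other (the dots of a recurrence plot)."""
    x = np.asarray(series, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # all pairwise distances
    return (dist <= eps).astype(int)

# a periodic series recurs: off-diagonal lines appear at the period (25)
t = np.arange(100)
R = recurrence_matrix(np.sin(2 * np.pi * t / 25), eps=0.05)
```

    The main diagonal is always filled (every point matches itself), and diagonal lines parallel to it reveal repeated trajectories — in conversation transcripts, stretches where the same concepts return.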

  9. Exploring the Impact of Work Experience on Part-Time Students' Academic Success in Malaysian Polytechnics

    ERIC Educational Resources Information Center

    Ibrahim, Norhayati; Freeman, Steven A.; Shelley, Mack C.

    2012-01-01

    The study explored the influence of work experience on adult part-time students' academic success as defined by their cumulative grade point average. The sample consisted of 614 part-time students from four polytechnic institutions in Malaysia. The study identified six factors to measure the perceived influence of work experiences--positive…

  10. Longitudinal Indicators of the Social Context of Families: Beyond the Snapshot

    ERIC Educational Resources Information Center

    Moore, Kristin Anderson; Vandivere, Sharon

    2007-01-01

    Longitudinal indicators are measures of an individual or family behavior, interaction, attitude, or value that are assessed consistently or comparably across multiple points in time and cumulated over time. Examples include the percentage of time a family lived in poverty or the proportion of childhood a person lived in a single-parent family.…

  11. Attitude control system of the Delfi-n3Xt satellite

    NASA Astrophysics Data System (ADS)

    Reijneveld, J.; Choukroun, D.

    2013-12-01

    This work is concerned with the development of the attitude control algorithms that will be implemented on board the Delfi-n3Xt nanosatellite, which is to be launched in 2013. One of the mission objectives is to demonstrate Sun pointing and three-axis stabilization. The attitude control modes and the associated algorithms are described. The control authority is shared between three body-mounted magnetorquers (MTQ) and three orthogonal reaction wheels. The attitude information is retrieved from Sun vector measurements, Earth magnetic field measurements, and gyro measurements. The control design is a trade-off between simplicity and performance. Stabilization and Sun pointing are achieved via the successive application of the classical B-dot control law and a quaternion feedback control. For the purpose of Sun pointing, a simple quaternion estimation scheme is implemented based on geometric arguments, where the need for a costly optimal filtering algorithm is alleviated and a single line of sight (LoS) measurement is required - here the Sun vector. Beyond the three-axis Sun pointing mode, spinning Sun pointing modes are also described and used as demonstration modes. The three-axis Sun pointing mode requires reaction wheels and magnetic control, while the spinning control modes are implemented with magnetic control only. In addition, a simple scheme for angular rate estimation using Sun vector and Earth magnetic field measurements is tested in the case of gyro failures. The performance of the various control modes is illustrated via extensive simulations over time spans of several orbits. The simulated models of the dynamical space environment, the attitude hardware, and the onboard controller logic use realistic assumptions. All control modes satisfy the minimal Sun pointing requirements for power generation.
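
    The classical B-dot law mentioned above commands a magnetic dipole opposing the measured rate of change of the Earth field in the body frame, which damps a tumbling rotation. A minimal sketch, with an illustrative gain and made-up magnetometer readings (not Delfi-n3Xt flight values):

```python
import numpy as np

def bdot_control(b_now, b_prev, dt, k=5e4):
    """Classical B-dot detumbling law: commanded dipole m = -k * dB/dt.
    b_now, b_prev are body-frame magnetometer readings in tesla;
    the gain k (A m^2 s / T) is an illustrative value."""
    b_dot = (np.asarray(b_now) - np.asarray(b_prev)) / dt  # finite-difference dB/dt
    return -k * b_dot

# a tumbling satellite sees the Earth field rotate in the body frame;
# the resulting torque tau = m x B opposes that rotation and damps it
b_prev = np.array([20e-6, 5e-6, -43e-6])  # T, illustrative readings
b_now = np.array([21e-6, 3e-6, -43e-6])
m = bdot_control(b_now, b_prev, dt=0.1)
tau = np.cross(m, b_now)                  # magnetorquer torque, N m
```

    Because the torque m x B is always perpendicular to m, the law can only remove rotational energy, which is why it is used for the initial stabilization phase before the finer quaternion feedback takes over.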

  12. Limiting Magnitude, τ, t_eff, and Image Quality in DES Year 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    H. Neilsen, Jr.; Bernstein, Gary; Gruendl, Robert

    The Dark Energy Survey (DES) is an astronomical imaging survey being completed with the DECam imager on the Blanco telescope at CTIO. After each night of observing, the DES data management (DM) group performs an initial processing of that night's data, and uses the results to determine which exposures are of acceptable quality, and which need to be repeated. The primary measure by which we declare an image of acceptable quality is τ, a scaling of the exposure time. This is the scale factor that needs to be applied to the open-shutter time to reach the same photometric signal-to-noise ratio for faint point sources under a set of canonical good conditions. These conditions are defined to be seeing resulting in a PSF full width at half maximum (FWHM) of 0.9" and a pre-defined sky brightness which approximates the zenith sky brightness under fully dark conditions. Point-source limiting magnitude and signal-to-noise ratio should therefore vary with τ in the same way they vary with exposure time. Measurements of point sources and τ in the first year of DES data confirm that they do. In the context of DES, the symbol t_eff and the expression "effective exposure time" usually refer to the scaling factor τ, rather than the actual effective exposure time; the "effective exposure time" in this case refers to the effective duration of one second, rather than the effective duration of an exposure.
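
    For a sky-noise-limited faint point source, S/N scales roughly as sqrt(t) / (FWHM · sqrt(b)), which suggests the following sketch of the scaling factor. This is a simplified illustration under that stated assumption, not the operational DES definition, which also folds in atmospheric transparency:

```python
def tau_scale(fwhm_arcsec, sky_b, sky_b_dark, fwhm_ref=0.9):
    """Sketch of an exposure-time scaling: requiring the canonical
    conditions (FWHM = 0.9", dark-sky brightness) to reach the same
    sky-limited point-source S/N gives
        tau = (fwhm_ref / FWHM)^2 * (b_dark / b).
    Simplified illustration only; transparency terms are omitted."""
    return (fwhm_ref / fwhm_arcsec) ** 2 * (sky_b_dark / sky_b)

tau_good = tau_scale(0.9, 1.0, 1.0)  # canonical conditions -> tau = 1
tau_poor = tau_scale(1.2, 2.0, 1.0)  # 1.2" seeing, sky twice as bright
```

    Worse seeing or a brighter sky drives τ below 1, meaning the exposure delivered less depth than its shutter time suggests and may need to be repeated.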

  13. Classical evolution of fractal measures on the lattice

    NASA Astrophysics Data System (ADS)

    Antoniou, N. G.; Diakonos, F. K.; Saridakis, E. N.; Tsolias, G. A.

    2007-04-01

    We consider the classical evolution of a lattice of nonlinear coupled oscillators for a special case of initial conditions resembling the equilibrium state of a macroscopic thermal system at the critical point. The displacements of the oscillators define initially a fractal measure on the lattice associated with the scaling properties of the order parameter fluctuations in the corresponding critical system. Assuming a sudden symmetry breaking (quench), leading to a change in the equilibrium position of each oscillator, we investigate in some detail the deformation of the initial fractal geometry as time evolves. In particular, we show that traces of the critical fractal measure can be sustained for large times, and we extract the properties of the chain that determine the associated time scales. Our analysis applies generally to critical systems for which, after a slowly developing phase where equilibrium conditions are justified, a rapid evolution, induced by a sudden symmetry breaking, emerges on time scales much shorter than the corresponding relaxation or observation time. In particular, it can be applied to the fireball evolution in a heavy-ion collision experiment, where the QCD critical point emerges, or to the study of evolving fractals of astrophysical and cosmological scales, and may lead to determination of the initial critical properties of the Universe through observations in the symmetry-broken phase.

  14. Cerebral oxygen saturation and cardiac output during anaesthesia in sitting position for neurosurgical procedures: a prospective observational study.

    PubMed

    Schramm, P; Tzanova, I; Hagen, F; Berres, M; Closhen, D; Pestel, G; Engelhard, K

    2016-10-01

    Neurosurgical operations in the dorsal cranium often require the patient to be positioned in a sitting position. This can be associated with decreased cardiac output and cerebral hypoperfusion, and possibly, inadequate cerebral oxygenation. In the present study, cerebral oxygen saturation was measured during neurosurgery in the sitting position and correlated with cardiac output. Perioperative cerebral oxygen saturation was measured continuously with two different monitors, INVOS® and FORE-SIGHT®. Cardiac output was measured at eight predefined time points using transoesophageal echocardiography. Forty patients were enrolled, but only 35 (20 female) were eventually operated on in the sitting position. At the first time point, the regional cerebral oxygen saturation measured with INVOS® was 70 (sd 9)%; thereafter, it increased by 0.0187% min⁻¹ (P<0.01). The cerebral tissue oxygen saturation measured with FORE-SIGHT® started at 68 (sd 13)% and increased by 0.0142% min⁻¹ (P<0.01). The mean arterial blood pressure did not change. Cardiac output was between 6.3 (sd 1.3) and 7.2 (sd 1.8) litre min⁻¹ at the predefined time points. Cardiac output, but not mean arterial blood pressure, showed a positive and significant correlation with cerebral oxygen saturation. During neurosurgery in the sitting position, the cerebral oxygen saturation slowly increases and, therefore, this position seems to be safe with regard to cerebral oxygen saturation. Cerebral oxygen saturation is stable because of constant CO and MAP, while the influence of CO on cerebral oxygen saturation seems to be more relevant. NCT01275898. © The Author 2016. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Job Demands, Burnout, and Teamwork in Healthcare Professionals Working in a General Hospital that Was Analysed At Two Points in Time

    PubMed Central

    Mijakoski, Dragan; Karadzhinska-Bislimovska, Jovanka; Stoleski, Sasho; Minov, Jordan; Atanasovska, Aneta; Bihorac, Elida

    2018-01-01

    AIM: The purpose of the paper was to assess job demands, burnout, and teamwork in healthcare professionals (HPs) working in a general hospital that was analysed at two points in time with a time lag of three years. METHODS: Time 1 respondents (N = 325) were HPs who participated during the first wave of data collection (2011). Time 2 respondents (N = 197) were HPs from the same hospital who responded at Time 2 (2014). Job demands, burnout, and teamwork were measured with the Hospital Experience Scale, Maslach Burnout Inventory, and Hospital Survey on Patient Safety Culture, respectively. RESULTS: Significantly higher scores of emotional exhaustion (21.03 vs. 15.37, t = 5.1, p < 0.001), depersonalization (4.48 vs. 2.75, t = 3.8, p < 0.001), as well as organizational (2.51 vs. 2.34, t = 2.38, p = 0.017), emotional (2.46 vs. 2.25, t = 3.68, p < 0.001), and cognitive (2.82 vs. 2.64, t = 2.68, p = 0.008) job demands were found at Time 2. Teamwork levels were similar at both points in time (Time 1 = 3.84 vs. Time 2 = 3.84, t = 0.043, p = 0.97). CONCLUSION: The present longitudinal study revealed significantly higher mean values of emotional exhaustion and depersonalization in 2014, which could be explained by the significant increase in job demands between the analysed points in time. PMID:29731948

  16. Time-Distance Helioseismology with the MDI Instrument: Initial Results

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.; Kosovichev, A. G.; Scherrer, P. H.; Bogart, R. S.; Bush, R. I.; DeForest, C.; Hoeksema, J. T.; Schou, J.; Saba, J. L. R.; Tarbell, T. D.

    1997-01-01

    In time-distance helioseismology, the travel time of acoustic waves is measured between various points on the solar surface. To some approximation, the waves can be considered to follow ray paths that depend only on a mean solar model, with the curvature of the ray paths being caused by the increasing sound speed with depth below the surface. The travel time is affected by various inhomogeneities along the ray path, including flows, temperature inhomogeneities, and magnetic fields. By measuring a large number of times between different locations and using an inversion method, it is possible to construct 3-dimensional maps of the subsurface inhomogeneities. The SOI/MDI experiment on SOHO has several unique capabilities for time-distance helioseismology. The great stability of the images observed without benefit of an intervening atmosphere is quite striking. It has made it possible for us to detect the travel time for separations of points as small as 2.4 Mm in the high-resolution mode of MDI (0.6 arc sec/pixel). This has enabled the detection of the supergranulation flow. Coupled with the inversion technique, we can now study the 3-dimensional evolution of the flows near the solar surface.

  17. No scanning depth imaging system based on TOF

    NASA Astrophysics Data System (ADS)

    Sun, Rongchun; Piao, Yan; Wang, Yu; Liu, Shuo

    2016-03-01

    To quickly obtain a 3D model of real-world objects, multi-point ranging is very important. However, the traditional measuring method usually adopts the principle of point-by-point or line-by-line measurement, which is slow and inefficient. In this paper, a non-scanning depth imaging system based on TOF (time of flight) is proposed. The system is composed of a light source circuit, a special infrared image sensor module, an image-data processor and controller, a data cache circuit, a communication circuit, and so on. According to the working principle of TOF measurement, an image sequence was collected by the high-speed CMOS sensor, the distance information was obtained by identifying the phase difference, and the amplitude image was also calculated. Experiments were conducted, and the results show that the system achieves scan-free depth imaging with good performance.
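
    The range-recovery step described above can be sketched in a few lines: an amplitude-modulated TOF camera measures the phase delay of the returned light and converts it to distance. The 20 MHz modulation frequency and the four-bucket sampling scheme below are generic TOF conventions assumed for illustration, not details taken from the paper.

```python
import math

C = 299_792_458.0  # speed of light, m/s


def four_bucket_phase(a0, a1, a2, a3):
    """Phase from four correlation samples taken 90 degrees apart."""
    return math.atan2(a1 - a3, a0 - a2) % (2.0 * math.pi)


def tof_distance(phase_rad, f_mod=20e6):
    """The round trip delays the modulation by phi = 4*pi*f_mod*d/c,
    so the distance is d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod)
```

    Note that the phase wraps every 2*pi, so the maximum unambiguous range is c/(2*f_mod), about 7.5 m at 20 MHz.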

  18. EAST kinetic equilibrium reconstruction combining with Polarimeter-Interferometer internal measurement constraints

    NASA Astrophysics Data System (ADS)

    Lian, H.; Liu, H. Q.; Li, K.; Zou, Z. Y.; Qian, J. P.; Wu, M. Q.; Li, G. Q.; Zeng, L.; Zang, Q.; Lv, B.; Jie, Y. X.; EAST Team

    2017-12-01

    Plasma equilibrium reconstruction plays an important role in tokamak plasma research. With high temporal and spatial resolution, the POlarimeter-INTerferometer (POINT) system on EAST has provided effective measurements for 102 s H-mode operation. Based on internal Faraday rotation measurements provided by the POINT system, equilibrium reconstruction with a more accurate core current profile constraint has been demonstrated successfully on EAST. Combining other experimental diagnostics and external magnetic field measurements, the kinetic equilibrium has also been reconstructed on EAST. By taking the pressure and edge current information from kinetic EFIT into the equilibrium reconstruction with the Faraday rotation constraint, the new equilibrium reconstruction not only provides a more accurate internal current profile but also contains edge current and pressure information. A result for one time slice using the new kinetic equilibrium reconstruction with POINT data constraints is demonstrated in this paper; it shows a reversed-shear q profile and also includes the pressure profile. The improved equilibrium reconstruction will be greatly helpful for future theoretical analysis.

  19. Developing a weighted measure of speech sound accuracy.

    PubMed

    Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J

    2011-02-01

    To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.
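
    The idea of differentially weighting speech sound errors by phonetic accuracy can be illustrated with a toy scoring function. The error categories and weights below are hypothetical stand-ins for illustration only, not the published WSSA weighting scheme.

```python
# Hypothetical weights: productions closer to the target earn more credit.
WEIGHTS = {
    "correct": 1.0,        # target produced accurately
    "distortion": 0.75,    # near-target production
    "substitution": 0.25,  # a different phoneme produced
    "omission": 0.0,       # target sound deleted
}


def weighted_accuracy(codings):
    """Return a 0-100 score from a list of per-target transcription codings."""
    if not codings:
        raise ValueError("no targets coded")
    return 100.0 * sum(WEIGHTS[c] for c in codings) / len(codings)
```

    Under this scheme a child who distorts a sound scores higher than one who omits it entirely, which is the core idea behind weighting rather than binary right/wrong scoring.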

  20. [Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].

    PubMed

    Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao

    2015-05-01

    Gasoline, kerosene, and diesel are refined from crude oil over different distillation ranges: 35~205 °C for gasoline, 140~250 °C for kerosene, and 180~370 °C for diesel. The carbon chain lengths of the different mineral oils also differ: C7 to C11 for gasoline, C12 to C15 for kerosene, and C15 to C18 for diesel. Recognition and quantitative measurement of the three kinds of mineral oil are based on the different fluorescence spectra formed by their different carbon number distributions. Mineral oil pollution occurs frequently, so monitoring mineral oil content in the ocean is very important. A new method for determining the component contents of a mineral oil mixture with overlapping spectra is proposed: the characteristic peak power of the three-dimensional fluorescence spectrum is integrated using a quasi-Monte Carlo method, combined with an optimization algorithm that selects the optimal number of characteristic peaks and the integration region, and the resulting nonlinear equations are solved with the BFGS method (a rank-two update quasi-Newton method named after Broyden, Fletcher, Goldfarb, and Shanno). The accumulated peak power over the selected region is sensitive to small changes in the fluorescence spectral line, so the measurement is sensitive to small changes in component content. At the same time, compared with single-point measurement, sensitivity is improved because integrating over many points reduces the influence of random error. Three-dimensional fluorescence spectra and fluorescence contour spectra of the single mineral oils and of the mixture were measured, taking kerosene, diesel, and gasoline as research objects, with each mineral oil treated as a whole rather than as individual chemical components.
    Six characteristic peaks were selected for characteristic peak power integration to determine the component contents of the gasoline-kerosene-diesel mixture by the optimization algorithm. Compared with single-point measurement (peak method and mean method), measurement sensitivity is improved by about 50 times. This high-precision measurement of the mixture's component contents provides a practical algorithm for direct determination of component content in mixtures with overlapping spectra, without chemical separation.
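
    As a rough illustration of the integration step, here is a minimal quasi-Monte Carlo integrator over a rectangular region built on a 2-D Halton point set. The integrand and region bounds in the test are invented; the real method integrates measured three-dimensional fluorescence spectra over optimally chosen peak regions.

```python
def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r


def qmc_integrate_2d(func, x0, x1, y0, y1, n=4096):
    """Approximate the integral of func over [x0, x1] x [y0, y1]
    by averaging func over a low-discrepancy Halton point set."""
    area = (x1 - x0) * (y1 - y0)
    total = 0.0
    for i in range(1, n + 1):
        x = x0 + (x1 - x0) * halton(i, 2)  # base 2 for the x axis
        y = y0 + (y1 - y0) * halton(i, 3)  # base 3 for the y axis
        total += func(x, y)
    return area * total / n
```

    Low-discrepancy sequences cover the integration region more evenly than pseudo-random points, which is what gives quasi-Monte Carlo its faster convergence for smooth integrands such as fluorescence peaks.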

  1. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

    This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self calibration methods are applied to gain the exterior orientation of the cameras, the parameters of internal orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence, thus the 3-D trajectory is determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. 
The advantage of this tracking process is twofold: it can track natural points, without using markers; and it can track local surfaces on the human body. In the last case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacement of all the trajectories inside its region. The tracked key points lead to a final result comparable to the conventional motion capture systems: 3-D trajectories of key points which can be afterwards analyzed and used for animation or medical purposes.
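
    The forward-ray-intersection step mentioned above can be sketched as a small least-squares problem: each camera contributes a ray from its centre through the matched image point, and the reconstructed 3-D point minimises the summed squared distance to all rays. This is a generic pure-stdlib sketch of that geometry, not the authors' implementation.

```python
def triangulate(centres, directions):
    """Least-squares intersection of rays (c_i + t * d_i); the d_i must be
    unit vectors. Solves sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) c_i."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for c, d in zip(centres, directions):
        for r in range(3):
            for s in range(3):
                # Projector (I - d d^T) removes the component along the ray.
                m = (1.0 if r == s else 0.0) - d[r] * d[s]
                A[r][s] += m
                b[r] += m * c[s]
    # Solve the 3x3 system A x = b by Gaussian elimination with pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda i: abs(A[i][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for s in range(col, 3):
                A[r][s] -= f * A[col][s]
            b[r] -= f * b[col]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (b[r] - sum(A[r][s] * x[s] for s in range(r + 1, 3))) / A[r][r]
    return x
```

    With two or more non-parallel rays the normal matrix is nonsingular, and a point lying exactly on all rays is recovered exactly; with noisy matches the solution is the usual least-squares compromise.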

  2. QUANTIFYING UNCERTAINTY IN NET PRIMARY PRODUCTION MEASUREMENTS

    EPA Science Inventory

    Net primary production (NPP, e.g., g m⁻² yr⁻¹), a key ecosystem attribute, is estimated from a combination of other variables, e.g., standing crop biomass at several points in time, each of which is subject to errors in their measurement. These errors propagate as the variables a...
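
    The propagation of measurement errors into an NPP estimate can be sketched with first-order (root-sum-of-squares) error propagation. The decomposition NPP = (B2 - B1) + litterfall and all variable names below are an illustrative assumption, not the EPA study's actual estimator.

```python
import math


def npp_with_error(b1, sd_b1, b2, sd_b2, litter, sd_litter):
    """Return (NPP, standard error) for NPP = (b2 - b1) + litter.

    For a sum or difference of independent variables the variances add,
    so the combined SE is the root-sum-of-squares of the input SEs."""
    npp = (b2 - b1) + litter
    se = math.sqrt(sd_b1 ** 2 + sd_b2 ** 2 + sd_litter ** 2)
    return npp, se
```

    For example, biomass estimates of 100 ± 3 and 150 ± 4 g m⁻² plus 20 g m⁻² of litterfall measured without error give an NPP of 70 g m⁻² yr⁻¹ with a combined SE of 5.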

  3. A physical anthropomorphic phantom of a one year old child with real-time dosimetry

    NASA Astrophysics Data System (ADS)

    Bower, Mark William

    A physical heterogeneous phantom has been created with epoxy resin based tissue substitutes. The phantom is based on the Cristy and Eckerman mathematical phantom, which in turn is a modification of the Medical Internal Radiation Dose (MIRD) model of a one-year-old child as presented by the Society of Nuclear Medicine. The Cristy and Eckerman mathematical phantom, and the physical phantom, are composed of three different tissue types: bone, lung tissue, and soft tissue. The bone tissue substitute is a homogeneous mixture of bone tissues: active marrow, inactive marrow, trabecular bone, and cortical bone. Soft tissue organs are represented by a homogeneous soft tissue substitute at a particular location. Point doses were measured within the phantom with a Metal Oxide Semiconductor Field Effect Transistor (MOSFET)-based Patient Dose Verification System modified from the original radiotherapy application. The system features multiple dosimeters that are used to monitor entrance or exit skin doses and intracavity doses in the phantom in real time. Two different MOSFET devices were evaluated: the typical therapy MOSFET and a developmental MOSFET device that has an oxide layer twice as thick as the therapy MOSFET, making it more sensitive. The average sensitivity (free-in-air, including backscatter) of the 'high-sensitivity' MOSFET dosimeters ranged from 1.15×10⁵ mV per C kg⁻¹ (29.7 mV/R) to 1.38×10⁵ mV per C kg⁻¹ (35.7 mV/R), depending on the energy of the x-ray field. The integrated physical phantom was utilized to obtain point measurements of the absorbed dose from diagnostic x-ray examinations. Organ doses were calculated based on these point dose measurements. The phantom dosimetry system functioned well, providing real-time measurement of the dose to particular organs. The system was less reliable at low doses, where the main contribution to the dose was from scattered radiation. 
The system also was of limited utility for determining the absorbed dose in larger systems such as the skeleton. The point dose method of estimating the organ dose to large, disperse organs such as these is of questionable accuracy, since only a limited number of points are measured in a field with potentially large exposure variations. The MOSFET system was simple to use and considerably faster than traditional thermoluminescent dosimetry. The one-year-old simulated phantom with the real-time MOSFET dosimeters provides a method to easily evaluate the risk to a previously understudied population from diagnostic radiographic procedures.

  4. Time of travel of solutes in selected reaches of the Sandusky River Basin, Ohio, 1972 and 1973

    USGS Publications Warehouse

    Westfall, Arthur O.

    1976-01-01

    A time of travel study of a 106-mile (171-kilometer) reach of the Sandusky River and a 39-mile (63-kilometer) reach of Tymochtee Creek was made to determine the time required for water released from Killdeer Reservoir on Tymochtee Creek to reach selected downstream points. In general, two dye sample runs were made through each subreach to define the time-discharge relation for approximating travel times at selected discharges within the measured range, and time-discharge graphs are presented for 38 subreaches. Graphs of dye dispersion and variation in relation to time are given for three selected sampling sites. For estimating travel times and velocities between points in the study reach, tables for selected flow durations are given. Duration curves of daily discharge for four index stations are presented to indicate the low-flow characteristics and for use in shaping downward extensions of the time-discharge curves.

  5. The role of P2X3 receptors in bilateral masseter muscle allodynia in rats

    PubMed Central

    Tariba Knežević, Petra; Vukman, Robert; Antonić, Robert; Kovač, Zoran; Uhač, Ivone; Simonić-Kocijan, Sunčana

    2016-01-01

    Aim To determine the relationship between bilateral allodynia induced by masseter muscle inflammation and P2X3 receptor expression changes in trigeminal ganglia (TRG) and the influence of intramasseteric P2X3 antagonist administration on bilateral masseter allodynia. Methods To induce bilateral allodynia, rats received a unilateral injection of complete Freund’s adjuvant (CFA) into the masseter muscle. Bilateral head withdrawal threshold (HWT) was measured 4 days later. Behavioral measurements were followed by bilateral masseter muscle and TRG dissection. Masseter tissue was evaluated histopathologically and TRG tissue was analyzed for P2X3 receptor mRNA expression by using quantitative real-time polymerase chain reaction (PCR) analysis. To assess the P2X3 receptor involvement in nocifensive behavior, two doses (6 and 60 μg/50 μL) of selective P2X3 antagonist A-317491 were administrated into the inflamed masseter muscle 4 days after the CFA injection. Bilateral HWT was measured at 15-, 30-, 60-, and 120-minute time points after A-317491 administration. Results HWT was bilaterally reduced after the CFA injection (P < 0.001). Intramasseteric inflammation was confirmed ipsilaterally to the CFA injection. Quantitative real-time PCR analysis demonstrated enhanced P2X3 expression in TRG ipsilaterally to CFA administration (P < 0.01). In comparison with controls, the dose of 6 μg of A-317491 significantly increased bilateral HWT at 15-, 30-, and 60-minute time points after the A-317491 administration (P < 0.001), whereas the dose of 60 μg of A-317491 was efficient at all time points ipsilaterally (P = 0.004) and at 15-, 30-, and 60-minute time points contralaterally (P < 0.001). Conclusion Unilateral masseter inflammation can induce bilateral allodynia in rats. 
The study provided evidence that P2X3 receptors can functionally influence masseter muscle allodynia and suggested that P2X3 receptors expressed in TRG neurons are involved in masseter inflammatory pain conditions. PMID:28051277

  6. Longitudinal validity and responsiveness of the Food Allergy Quality of Life Questionnaire - Parent Form in children 0-12 years following positive and negative food challenges.

    PubMed

    DunnGalvin, A; Cullinane, C; Daly, D A; Flokstra-de Blok, B M J; Dubois, A E J; Hourihane, J O'B

    2010-03-01

    There are no published studies of longitudinal health-related quality of life (HRQL) assessments of food-allergic children using a disease-specific measure. This study assessed the longitudinal measurement properties of the Food Allergy Quality of Life Questionnaire - Parent Form (FAQLQ-PF) in a sample of children undergoing food challenge. Parents of children 0-12 years completed the FAQLQ-PF and the Food Allergy Independent Measure (FAIM) pre-challenge and at 2 and 6 months post food challenge. In order to evaluate longitudinal validity, differences between Group A (positive challenge) and Group B (negative challenge) were expected over time. We computed correlation coefficients between change scores in the FAQLQ-PF and change scores in the FAIM. To determine the minimally important difference (MID), we used distributional criterion and effect size approaches. A logistic regression model profiled those children falling below this point. Eighty-two children underwent a challenge (42 positive; 40 negative). Domains and total score improved significantly at post-challenge time points for both groups (all P<0.05). Sensitivity was demonstrated by significant differences between positive and negative groups at 6 months [F(2, 59)=6.221, P<0.003] and by differing improvement on relevant subscales (P<0.05). MID was 0.45 on a seven-point response scale. Poorer quality of life at baseline increased the odds of no improvement in HRQL scores at the 6-month time point by more than 2.0. General maternal health (OR 1.252), number of foods avoided (OR 1.369) and children >9 years (OR 1.173) were also predictors. The model correctly identified 84% of cases below MID. The FAQLQ-PF is sensitive to change, and has excellent longitudinal reliability and validity in a food-allergic patient population. The standard error of measurement value of 0.5 points as a threshold for meaningful change in HRQL questionnaires was confirmed. 
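
    The distribution-based MID criteria mentioned above can be written down directly: a common rule takes half the baseline standard deviation, and the standard error of measurement is SD * sqrt(1 - reliability). The scores and the reliability value in the test are invented for illustration.

```python
import math
import statistics


def half_sd_mid(baseline_scores):
    """Distribution-based MID estimate: half the baseline SD."""
    return 0.5 * statistics.stdev(baseline_scores)


def sem(baseline_scores, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return statistics.stdev(baseline_scores) * math.sqrt(1.0 - reliability)
```

    Note that with a test-retest reliability of 0.75 the two criteria coincide, which is one reason the half-SD rule and the SEM often give similar thresholds in practice.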
The FAQLQ-PF may be used to identify problems in children, to assess the effectiveness of clinical trials or interventions, and to guide the development of regulatory policies.

  7. Physical Activity, Television Viewing Time, and 12-Year Changes in Waist Circumference

    PubMed Central

    SHIBATA, AI; OKA, KOICHIRO; SUGIYAMA, TAKEMI; SALMON, JO; DUNSTAN, DAVID W.; OWEN, NEVILLE

    2016-01-01

    ABSTRACT Purpose Both moderate-to-vigorous physical activity (MVPA) and sedentary behavior can be associated with adult adiposity. Much of the relevant evidence is from cross-sectional studies or from prospective studies with relevant exposure measures at a single time point before weight gain or incident obesity. This study examined whether changes in MVPA and television (TV) viewing time are associated with subsequent changes in waist circumference, using data from three separate observation points in a large population-based prospective study of Australian adults. Methods Data were obtained from the Australian Diabetes, Obesity, and Lifestyle study collected in 1999–2000 (baseline), 2004–2005 (wave 2), and 2011–2012 (wave 3). The study sample consisted of adults age 25 to 74 yr at baseline who also attended site measurement at three time points (n = 3261). Multilevel linear regression analysis examined associations of initial 5-yr changes in MVPA and TV viewing time (from baseline to wave 2) with 12-yr change in waist circumference (from baseline to wave 3), adjusting for well-known confounders. Results As categorical predictors, increases in MVPA significantly attenuated increases in waist circumference (P for trend < 0.001). TV viewing time change was not significantly associated with changes in waist circumference (P for trend = 0.06). Combined categories of MVPA and TV viewing time changes were predictive of waist circumference increases; compared with those who increased MVPA and reduced TV viewing time, those who reduced MVPA and increased TV viewing time had a 2-cm greater increase in waist circumference (P = 0.001). Conclusion Decreasing MVPA emerged as a significant predictor of increases in waist circumference. Increasing TV viewing time was also influential, but its impact was much weaker than MVPA. PMID:26501231

  8. [Proposal of a costing method for the provision of sterilization in a public hospital].

    PubMed

    Bauler, S; Combe, C; Piallat, M; Laurencin, C; Hida, H

    2011-07-01

    To refine the billing to institutions whose sterilization operations are outsourced, a sterilization cost approach was developed. The aim of the study is to determine the value of a sterilization unit (one point "S") evolving according to investments, quantities processed, and types of instrumentation or packaging. The time of preparation was selected from all sub-processes of sterilization to determine the value of one point S. The preparation times for sterilized large containers, small containers, and pouches were recorded. The reference time corresponds to one pouch (equal to one point S). Simultaneously, the annual operating cost of sterilization was defined and divided into several areas of expenditure: employees, equipment and building depreciation, supplies, and maintenance. A total of 136 container timing measurements were taken. The time to prepare a pouch was estimated at one minute (one S). A small container represents 4 S and a large container represents 10 S. By dividing the operating cost of sterilization by the total number of sterilization points over a given period, the cost of one S can be determined. This method differs from the traditional costing method in sterilization services, which considers each item of expenditure. This point S will be the base for billing of subcontracts to other institutions. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
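
    The point "S" scheme above reduces to simple arithmetic: one pouch = 1 S, a small container = 4 S, a large container = 10 S, and the value of one S is the operating cost divided by the total points produced. The cost and volume figures in the test are invented for illustration.

```python
# Point values per packaging type, as described in the abstract.
POINTS = {"pouch": 1, "small_container": 4, "large_container": 10}


def point_s_value(operating_cost, volumes):
    """Cost of one point S: operating cost / total points over the period."""
    total_points = sum(POINTS[kind] * n for kind, n in volumes.items())
    return operating_cost / total_points


def bill(item, count, s_value):
    """Amount billed for `count` items of one packaging type."""
    return POINTS[item] * count * s_value
```

    Because the S value is recomputed from actual volumes and costs each period, the billing automatically tracks changes in investment and throughput, which is the stated advantage over a fixed per-item tariff.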

  9. A study of the relative responsiveness of five sensibility tests for assessment of recovery after median nerve injury and repair.

    PubMed

    Jerosch-Herold, Christina

    2003-06-01

    A longitudinal dynamic cohort study was conducted on patients with median nerve injuries to evaluate the relative responsiveness of five sensibility tests: touch threshold using the WEST (monofilaments), static two-point discrimination, locognosia, a pick-up test and an object recognition test. Repeated assessments were performed starting at 6 months after surgery. In order to compare the relative responsiveness of each test, the effect size and the standardized response mean were calculated for sensibility changes occurring between 6 and 18 months after repair. Large effect sizes (>0.8) and standardized response means (>0.8) were obtained for the WEST, locognosia, pick-up and object recognition tests. Two-point discrimination was hardly measurable at any time point and exhibited strong floor effects. Further analysis of all time points was undertaken to assess the strength of the monotonic relationship between test scores and time elapsed since surgery. Comparison of monotonicity between the five tests indicated that the WEST performed best, whereas two-point discrimination performed worst. These results suggest that the monofilament test (WEST), locognosia test, Moberg pick-up test and tactile gnosis test capture sensibility changes over time well and should be considered for inclusion in the outcome assessment of patients with median nerve injury.
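
    The two responsiveness statistics used in this study have simple definitions: the effect size divides the mean change by the SD of the baseline scores, while the standardized response mean divides it by the SD of the change scores. A minimal sketch with invented scores:

```python
import statistics


def effect_size(baseline, followup):
    """Mean change divided by the SD of the baseline scores."""
    changes = [f - b for b, f in zip(baseline, followup)]
    return statistics.mean(changes) / statistics.stdev(baseline)


def standardized_response_mean(baseline, followup):
    """Mean change divided by the SD of the change scores."""
    changes = [f - b for b, f in zip(baseline, followup)]
    return statistics.mean(changes) / statistics.stdev(changes)
```

    Values above 0.8 are conventionally read as large for both statistics, which is the threshold the abstract applies.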

  10. Speed of recovery after arthroscopic rotator cuff repair.

    PubMed

    Kurowicki, Jennifer; Berglund, Derek D; Momoh, Enesi; Disla, Shanell; Horn, Brandon; Giveans, M Russell; Levy, Jonathan C

    2017-07-01

    The purpose of this study was to delineate the time taken to achieve maximum improvement (plateau of recovery) and the degree of recovery observed at various time points (speed of recovery) for pain and function after arthroscopic rotator cuff repair. An institutional shoulder surgery registry query identified 627 patients who underwent arthroscopic rotator cuff repair between 2006 and 2015. Measured range of motion, patient satisfaction, and patient-reported outcome measures were analyzed for preoperative, 3-month, 6-month, 1-year, and 2-year intervals. Subgroup analysis was performed on the basis of tear size by retraction grade and number of anchors used. As an entire group, the plateau of maximum recovery for pain, function, and motion occurred at 1 year. Satisfaction with surgery was >96% at all time points. At 3 months, 74% of improvement in pain and 45% to 58% of functional improvement were realized. However, only 22% of elevation improvement was achieved (P < .001). At 6 months, 89% of improvement in pain, 81% to 88% of functional improvement, and 78% of elevation improvement were achieved (P < .001). Larger tears had a slower speed of recovery for Single Assessment Numeric Evaluation scores, forward elevation, and external rotation. Smaller tears had higher motion and functional scores across all time points. Tear size did not influence pain levels. The plateau of maximum recovery after rotator cuff repair occurred at 1 year with high satisfaction rates at all time points. At 3 months, approximately 75% of pain relief and 50% of functional recovery can be expected. Larger tears have a slower speed of recovery. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  11. Impaired Patient-Reported Outcomes Predict Poor School Functioning and Daytime Sleepiness: The PROMIS Pediatric Asthma Study.

    PubMed

    Jones, Conor M; DeWalt, Darren A; Huang, I-Chan

    Poor asthma control in children is related to impaired patient-reported outcomes (PROs; eg, fatigue, depressive symptoms, anxiety), but less well studied is the effect of PROs on children's school performance and sleep outcomes. In this study we investigated whether the consistency status of PROs over time affected school functioning and daytime sleepiness in children with asthma. Of the 238 children with asthma enrolled in the Patient-Reported Outcomes Measurement Information System (PROMIS) Pediatric Asthma Study, 169 children who provided survey data for all 4 time points were used in the analysis. The child's PROs, school functioning, and daytime sleepiness were measured 4 times within a 15-month period. PRO domains included asthma impact, pain interference, fatigue, depressive symptoms, anxiety, and mobility. Each child was classified as having poor/fair versus good PROs per meaningful cut points. The consistency status of each domain was classified as consistently poor/fair if poor/fair status was present for at least 3 time points; otherwise, the status was classified as consistently good. Seemingly unrelated regression was performed to test if consistently poor/fair PROs predicted impaired school functioning and daytime sleepiness at the fourth time point. Consistently poor/fair in all PRO domains was significantly associated with impaired school functioning and excessive daytime sleepiness (Ps < .01) after controlling for the influence of the child's age, sex, and race/ethnicity. Children with asthma with consistently poor/fair PROs are at risk of poor school functioning and daytime sleepiness. Developing child-friendly PRO assessment systems to track PROs can inform potential problems in the school setting. Copyright © 2017 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
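
    The classification rule described above (poor/fair status at three or more of the four time points counts as consistently poor/fair) is simple to express. The function below is an illustrative sketch, not study code.

```python
def consistency_status(statuses, threshold=3):
    """Classify a PRO domain from per-time-point labels.

    statuses: a label per time point, each "poor/fair" or "good".
    Returns "consistently poor/fair" when poor/fair status is present
    at `threshold` or more time points, else "consistently good"."""
    n_poor = sum(1 for s in statuses if s == "poor/fair")
    return ("consistently poor/fair" if n_poor >= threshold
            else "consistently good")
```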

  12. New techniques to measure cliff change from historical oblique aerial photographs and structure-from-motion photogrammetry

    USGS Publications Warehouse

    Warrick, Jonathan; Ritchie, Andy; Adelman, Gabrielle; Adelman, Ken; Limber, Patrick W.

    2017-01-01

    Oblique aerial photograph surveys are commonly used to document coastal landscapes. Here it is shown that adequate overlap may exist in these photographic records to develop topographic models with Structure-from-Motion (SfM) photogrammetric techniques. Using photographs of Fort Funston, California, from the California Coastal Records Project, imagery was combined with ground control points in a four-dimensional analysis that produced topographic point clouds of the study area’s cliffs for 5 years spanning 2002 to 2010. Uncertainty was assessed by comparing point clouds with airborne LIDAR data, and these uncertainties were related to the number and spatial distribution of ground control points used in the SfM analyses. With six or more ground control points, the root mean squared errors between the SfM and LIDAR data were less than 0.30 m (minimum = 0.18 m), and the mean systematic error was less than 0.10 m. The SfM results had several benefits over traditional airborne LIDAR in that they included point coverage on vertical-to-overhanging sections of the cliff and resulted in 10–100 times greater point densities. Time series of the SfM results revealed topographic changes, including landslides, rock falls, and the erosion of landslide talus along the Fort Funston beach. Thus, it was concluded that SfM photogrammetric techniques with historical oblique photographs allow for the extraction of useful quantitative information for mapping coastal topography and measuring coastal change. The new techniques presented here are likely applicable to many photograph collections and problems in the earth sciences.
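
    The two accuracy metrics quoted above, root mean squared error and mean (systematic) error between SfM-derived and LIDAR elevations at common check points, can be sketched directly. The sample values in the test are synthetic, not the study's data.

```python
import math


def rmse(sfm_z, lidar_z):
    """Root mean squared error between paired elevation samples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sfm_z, lidar_z))
                     / len(sfm_z))


def mean_error(sfm_z, lidar_z):
    """Mean signed error; a nonzero value indicates a systematic bias."""
    return sum(a - b for a, b in zip(sfm_z, lidar_z)) / len(sfm_z)
```

    Reporting both matters: a point cloud can have a low mean error (no overall bias) while still showing substantial scatter in the RMSE, or vice versa.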

  13. Addition of sildenafil to long-term intravenous epoprostenol therapy in patients with pulmonary arterial hypertension: a randomized trial.

    PubMed

    Simonneau, Gérald; Rubin, Lewis J; Galiè, Nazzareno; Barst, Robyn J; Fleming, Thomas R; Frost, Adaani E; Engel, Peter J; Kramer, Mordechai R; Burgess, Gary; Collings, Lorraine; Cossons, Nandini; Sitbon, Olivier; Badesch, David B

    2008-10-21

    Oral sildenafil and intravenous epoprostenol have independently been shown to be effective in patients with pulmonary arterial hypertension. To investigate the effect of adding oral sildenafil to long-term intravenous epoprostenol in patients with pulmonary arterial hypertension. A 16-week, double-blind, placebo-controlled, parallel-group study. Multinational study at 41 centers in 11 countries from 3 July 2003 to 27 January 2006. 267 patients with pulmonary arterial hypertension (idiopathic, associated with anorexigen use or connective tissue disease, or corrected congenital heart disease) who were receiving long-term intravenous epoprostenol therapy. Patients were randomly assigned to receive placebo or sildenafil, 20 mg three times daily, titrated to 40 mg and 80 mg three times daily, as tolerated, at 4-week intervals. Of 265 patients who received treatment, 256 (97%) patients (123 in the placebo group and 133 in the sildenafil group) completed the study. Change from baseline in exercise capacity measured by 6-minute walk distance (primary end point) and hemodynamic measurements, time to clinical worsening, and Borg dyspnea score (secondary end points). A placebo-adjusted increase of 28.8 meters (95% CI, 13.9 to 43.8 meters) in the 6-minute walk distance occurred in patients in the sildenafil group; these improvements were most prominent among patients with baseline distances of 325 meters or more. Relative to epoprostenol monotherapy, addition of sildenafil resulted in a greater change in mean pulmonary arterial pressure by -3.8 mm Hg (CI, -5.6 to -2.1 mm Hg); cardiac output by 0.9 L/min (CI, 0.5 to 1.2 L/min); and longer time to clinical worsening, with a smaller proportion of patients experiencing a worsening event in the sildenafil group (0.062) than in the placebo group (0.195) by week 16 (P = 0.002). Health-related quality of life also improved in patients who received combined therapy compared with those who received epoprostenol monotherapy. 
There was no effect on the Borg dyspnea score. Of the side effects generally associated with sildenafil treatment, the most commonly reported in the placebo and sildenafil groups, respectively, were headache (34% and 57%; difference, 23 percentage points [CI, 12 to 35 percentage points]), dyspepsia (2% and 16%; difference, 13 percentage points [CI, 7 to 20 percentage points]), pain in extremity (18% and 25%; difference, 8 percentage points [CI, -2 to 18 percentage points]), and nausea (18% and 25%; difference, 8 percentage points [CI, -2 to 18 percentage points]). The study excluded patients with pulmonary arterial hypertension associated with other causes. There was an imbalance in missing data between groups, with 8 placebo recipients having no postbaseline walk assessment compared with 1 sildenafil recipient. These patients were excluded from the analysis. In some patients with pulmonary arterial hypertension, the addition of sildenafil to long-term intravenous epoprostenol therapy improves exercise capacity, hemodynamic measurements, time to clinical worsening, and quality of life, but not Borg dyspnea score. Increased rates of headache and dyspepsia occurred with the addition of sildenafil.

  14. The Academic RVU: Ten Years Developing a Metric for and Financially Incenting Academic Productivity at Oregon Health & Science University.

    PubMed

    Ma, O John; Hedges, Jerris R; Newgard, Craig D

    2017-08-01

    Established metrics reward academic faculty for clinical productivity. Few data have analyzed a bonus model to measure and reward academic productivity. This study's objective was to describe development and use of a departmental academic bonus system for incenting faculty scholarly and educational productivity. This cross-sectional study analyzed a departmental bonus system among emergency medicine academic faculty at Oregon Health & Science University, including growth from 2005 to 2015. All faculty members with a primary appointment were eligible for participation. Each activity was awarded points based on a predetermined education or scholarly point scale. Faculty members accumulated points based on their activity (numerator), and the cumulative points of all faculty were the denominator. Variables were individual faculty member (deidentified), academic year, bonus system points, bonus amounts awarded, and measures of academic productivity. Data were analyzed using descriptive statistics, including measures of variance. The total annual financial bonus pool ranged from $211,622 to $274,706. The median annual per faculty academic bonus remained fairly constant over time ($3,980 in 2005-2006 vs. $4,293 in 2014-2015), with most change at the upper quartile of academic bonus (max bonus $16,920 in 2005-2006 vs. $39,207 in 2014-2015). Bonuses rose linearly among faculty in the bottom three quartiles of academic productivity, but increased exponentially in the 75th to 100th percentile. Faculty academic productivity can be measured and financially rewarded according to an objective academic bonus system. The "academic point" used to measure productivity functions as an "academic relative value unit."
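The pool-share arithmetic this record describes (each faculty member's points as the numerator, the departmental total as the denominator, applied to a fixed bonus pool) can be sketched as follows; the names, point totals, and pool size below are hypothetical, not values from the study:

```python
def allocate_bonus(points_by_faculty, pool):
    """Split a fixed bonus pool in proportion to each faculty member's
    academic points: share_i = pool * points_i / total_points."""
    total = sum(points_by_faculty.values())
    return {name: pool * pts / total
            for name, pts in points_by_faculty.items()}

# Hypothetical department: three faculty sharing a $250,000 pool.
shares = allocate_bonus({"A": 100, "B": 50, "C": 50}, 250_000)
print(shares)  # A earns twice the share of B or C
```

Because only relative point totals matter, the "academic point" behaves like a relative value unit: doubling everyone's points leaves every bonus unchanged.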

  15. FAMILIARITY TRANSFER AS AN EXPLANATION OF THE DÉJÀ VU EFFECT.

    PubMed

    Małecki, M

    2015-06-01

    Déjà vu is often explained in terms of an unconscious transfer of familiarity between a familiar object or objects and accompanying new objects. However, empirical research has tested priming effectiveness more than such a transfer. This paper reviews the main explanations of déjà vu, proposes a cognitive model of the phenomenon, and tests its six major assumptions. The model states that a sense of familiarity can be felt toward an objectively new stimulus (point 1) and that it can be transferred from a known stimulus to a novel one (point 2) in a situation where the person is unaware of such a transfer (point 3). The criteria for déjà vu are that the known and the novel stimuli may have graphical or semantic similarity, but differences exclude priming explanations (point 4); the familiarity measure should be of a non-rational nature (sense of familiarity rather than recognition; point 5); and that the feeling of familiarity toward a novel stimulus produces a conflict, which could be measured by means of increased reaction times (point 6). 119 participants were tested in three experiments. The participants were to assess the novel stimuli in terms of their sense of familiarity. The novel stimuli were primed or were not primed by the known stimulus (Exp. 1) or primed by the known vs. a novel stimulus (Exps. 2 and 3). The priming was subliminal in all the experiments. Reaction times were measured in Exps. 2 and 3. The participants assessed the novel stimuli as more familiar when they were preceded by a known stimulus than when they were not (Exp. 1) or when they were preceded by a novel stimulus (Exps. 2 and 3). Reaction times were longer for assessments preceded by a known stimulus than for assessments preceded by a novel stimulus, which contradicts the priming explanations. The results seem to support all six points of the proposed model of the mechanisms underlying the déjà vu experience.

  16. Gene set differential analysis of time course expression profiles via sparse estimation in functional logistic model with application to time-dependent biomarker detection.

    PubMed

    Kayano, Mitsunori; Matsui, Hidetoshi; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru

    2016-04-01

    High-throughput time course expression profiles have been available in the last decade due to developments in measurement techniques and devices. Functional data analysis, which treats smoothed curves instead of originally observed discrete data, is effective for time course expression profiles in terms of dimension reduction, robustness, and applicability to data measured at small numbers of irregularly spaced time points. However, the statistical method of differential analysis for time course expression profiles has not been well established. We propose a functional logistic model based on elastic net regularization (F-Logistic) in order to identify the genes with dynamic alterations in a case/control study. We employ a mixed model as a smoothing method to obtain functional data; then F-Logistic is applied to time course profiles measured at small numbers of irregularly spaced time points. We evaluate the performance of F-Logistic in comparison with another functional data approach, i.e., the functional ANOVA test (F-ANOVA), by applying the methods to real and synthetic time course data sets. The real data sets consist of the time course gene expression profiles for long-term effects of recombinant interferon β on disease progression in multiple sclerosis. F-Logistic distinguishes dynamic alterations, which cannot be found by competitive approaches such as F-ANOVA, in a case/control study based on time course expression profiles. F-Logistic is effective for time-dependent biomarker detection, diagnosis, and therapy.
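The functional-data step this record relies on — turning an irregularly sampled time course into a fixed-length feature vector that a penalized logistic model can consume — can be illustrated with a simple basis projection. This is not the authors' F-Logistic implementation: a polynomial basis stands in for their mixed-model smoothing, and the data are synthetic:

```python
import numpy as np

def functional_features(times, values, degree=3):
    """Project an irregularly sampled time course onto a common
    polynomial basis by least squares; the coefficients become the
    fixed-length features fed to a downstream (elastic-net) logistic
    classifier."""
    basis = np.vander(times, degree + 1)   # columns: t^3, t^2, t, 1
    coef, *_ = np.linalg.lstsq(basis, values, rcond=None)
    return coef

# Irregular time points sampling the exact cubic 2t^3 - t + 5:
# the projection recovers its coefficients.
t = np.array([0.0, 0.3, 0.7, 1.1, 1.9, 2.5])
y = 2 * t**3 - t + 5
print(np.round(functional_features(t, y), 6))
```

Because every subject is projected onto the same basis, subjects measured at different and unevenly spaced time points still yield comparable feature vectors.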

  17. Light beam range finder

    DOEpatents

    McEwan, Thomas E.

    1998-01-01

    A "laser tape measure" for measuring distance which includes a transmitter such as a laser diode which transmits a sequence of electromagnetic pulses in response to a transmit timing signal. A receiver samples reflections from objects within the field of the sequence of visible electromagnetic pulses with controlled timing, in response to a receive timing signal. The receiver generates a sample signal in response to the samples which indicates distance to the object causing the reflections. The timing circuit supplies the transmit timing signal to the transmitter and supplies the receive timing signal to the receiver. The receive timing signal causes the receiver to sample the reflection such that the time between transmission of pulses in the sequence and sampling by the receiver sweeps over a range of delays. The transmit timing signal causes the transmitter to transmit the sequence of electromagnetic pulses at a pulse repetition rate, and the receive timing signal sweeps over the range of delays in a sweep cycle such that reflections are sampled at the pulse repetition rate and with different delays in the range of delays, such that the sample signal represents received reflections in equivalent time. The receiver according to one aspect of the invention includes an avalanche photodiode and a sampling gate coupled to the photodiode which is responsive to the receive timing signal. The transmitter includes a laser diode which supplies a sequence of visible electromagnetic pulses. A bright spot projected onto the target clearly indicates the point that is being measured, and the user can read the range to that point with precision of better than 0.1%.
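The equivalent-time sampling idea in this record — take one sample per transmitted pulse while sweeping the receive delay, so the fast echo is reproduced in slow "equivalent time" — can be sketched numerically. This is a toy simulation, not the patented circuit; the Gaussian echo shape, pulse count, and delay-sweep span are assumptions:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def echo(t, t_round):
    """Received waveform for one pulse: a Gaussian echo centred on the
    round-trip time (a toy stand-in for the photodiode output)."""
    return np.exp(-((t - t_round) / 0.2e-9) ** 2)

def equivalent_time_range(distance, n_pulses=2000, max_delay=100e-9):
    """One sample per transmitted pulse, with the receive delay swept
    across the pulse train; the collected samples reproduce the fast
    echo in equivalent time, and the peak delay gives the range."""
    t_round = 2 * distance / C
    delays = np.linspace(0.0, max_delay, n_pulses)  # swept receive delays
    samples = echo(delays, t_round)                 # one sample per pulse
    return C * delays[np.argmax(samples)] / 2       # peak delay -> range

print(equivalent_time_range(5.0))
```

The sweep converts a sub-nanosecond measurement problem into one of locating a peak in a slowly acquired sample record, which is why the technique tolerates inexpensive acquisition electronics.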

  18. Light beam range finder

    DOEpatents

    McEwan, T.E.

    1998-06-16

    A "laser tape measure" for measuring distance is disclosed which includes a transmitter such as a laser diode which transmits a sequence of electromagnetic pulses in response to a transmit timing signal. A receiver samples reflections from objects within the field of the sequence of visible electromagnetic pulses with controlled timing, in response to a receive timing signal. The receiver generates a sample signal in response to the samples which indicates distance to the object causing the reflections. The timing circuit supplies the transmit timing signal to the transmitter and supplies the receive timing signal to the receiver. The receive timing signal causes the receiver to sample the reflection such that the time between transmission of pulses in the sequence and sampling by the receiver sweeps over a range of delays. The transmit timing signal causes the transmitter to transmit the sequence of electromagnetic pulses at a pulse repetition rate, and the receive timing signal sweeps over the range of delays in a sweep cycle such that reflections are sampled at the pulse repetition rate and with different delays in the range of delays, such that the sample signal represents received reflections in equivalent time. The receiver according to one aspect of the invention includes an avalanche photodiode and a sampling gate coupled to the photodiode which is responsive to the receive timing signal. The transmitter includes a laser diode which supplies a sequence of visible electromagnetic pulses. A bright spot projected onto the target clearly indicates the point that is being measured, and the user can read the range to that point with precision of better than 0.1%. 7 figs.

  19. Contact angle of unset elastomeric impression materials.

    PubMed

    Menees, Timothy S; Radhakrishnan, Rashmi; Ramp, Lance C; Burgess, John O; Lawson, Nathaniel C

    2015-10-01

    Some elastomeric impression materials are hydrophobic, and it is often necessary to take definitive impressions of teeth coated with some saliva. New hydrophilic materials have been developed. The purpose of this in vitro study was to compare contact angles of water and saliva on 7 unset elastomeric impression materials at 5 time points from the start of mixing. Two traditional polyvinyl siloxane (PVS) (Aquasil, Take 1), 2 modified PVS (Imprint 4, Panasil), a polyether (Impregum), and 2 hybrid (Identium, EXA'lence) materials were compared. Each material was flattened to 2 mm and a 5 μL drop of distilled water or saliva was dropped on the surface at 25 seconds (t0) after the start of mix. Contact angle measurements were made with a digital microscope at initial contact (t0), t1=2 seconds, t2=5 seconds, t3=50% working time, and t4=95% working time. Data were analyzed with a generalized linear mixed model analysis, and individual 1-way ANOVA and Tukey HSD post hoc tests (α=.05). For water, materials grouped into 3 categories at all time points: the modified PVS and one hybrid material (Identium) produced the lowest contact angles, the polyether material was intermediate, and the traditional PVS materials and the other hybrid (EXA'lence) produced the highest contact angles. For saliva, Identium, Impregum, and Imprint 4 were in the group with the lowest contact angle at most time points. Modified PVS materials and one of the hybrid materials are more hydrophilic than traditional PVS materials when measured with water. Saliva behaves differently than water in contact angle measurement on unset impression material and produces a lower contact angle on polyether based materials.
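The study measured contact angles from digital microscope images. One simplified way to turn a sessile-drop profile into an angle is the spherical-cap approximation from drop height and contact-line width; this is an illustration of the geometry, not the authors' measurement method, and the drop dimensions are invented:

```python
import math

def contact_angle_deg(height_mm, base_width_mm):
    """Spherical-cap approximation for a sessile drop: the contact
    angle follows from the drop height h and base width w as
    theta = 2 * atan(2h / w)."""
    return math.degrees(2 * math.atan(2 * height_mm / base_width_mm))

# A drop whose height equals half its base width sits at 90 degrees;
# a flatter drop on a more hydrophilic surface gives a smaller angle.
print(round(contact_angle_deg(1.0, 2.0), 1))
print(round(contact_angle_deg(0.2, 2.0), 1))
```

The monotone relation between flatness and angle is what makes the contact angle a proxy for wettability: the hydrophilic modified PVS materials in this study spread more and therefore read lower.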

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cornwell, Paris A; Bunn, Jeffrey R; Schmidlin, Joshua E

    The December 2010 version of the guide, ORNL/TM-2008/159, by Jeff Bunn, Josh Schmidlin, Camden Hubbard, and Paris Cornwell, has been further revised due to a major change in the GeoMagic Studio software for constructing a surface model. The Studio software update also includes a plug-in module to operate the FARO Scan Arm. Other revisions for clarity were also made. The purpose of this revision document is to guide the reader through the process of laser alignment used by NRSF2 at HFIR and VULCAN at SNS. This system was created to increase the spatial accuracy of the measurement points in a sample, reduce the use of neutron time for alignment, improve experiment planning, and reduce operator error. The need for spatial resolution has been driven by the reduction in gauge volumes to the sub-millimeter level, steep strain gradients in some samples, and requests to mount multiple samples within a few days for relating data from each sample to a common sample coordinate system. The first step in this process involves mounting the sample on an indexer table in a laboratory set up for offline sample mounting and alignment in the same manner it would be mounted at either instrument. In the shared laboratory, a FARO ScanArm is used to measure the coordinates of points on the sample surface ('point cloud'), specific features, and fiducial points. A Sample Coordinate System (SCS) needs to be established first. This is an advantage of the technique because the SCS can be defined in such a way as to facilitate simple definition of measurement points within the sample. Next, samples are typically mounted to a frame of 80/20 and fiducial points are attached to the sample or frame, then measured in the established sample coordinate system. The laser scan probe on the ScanArm can then be used to scan in an 'as-is' model of the sample as well as mounting hardware.
GeoMagic Studio 12 is the software package used to construct the model from the point cloud the scan arm creates. Once the model, fiducial, and measurement files are created, a special program called SScanSS combines the information and, by simulating the sample on the diffractometer, helps plan the experiment before using neutron time. Finally, the sample is mounted on the relevant stress measurement instrument and the fiducial points are measured again. In the HFIR beam room, a laser tracker is used in conjunction with a program called CAM2 to measure the fiducial points in the NRSF2 instrument's sample positioner coordinate system. SScanSS is then used again to perform a coordinate system transformation of the measurement file locations to the sample positioner coordinate system. A procedure file is then written with the coordinates in the sample positioner coordinate system for the desired measurement locations. This file is often called a script or command file and can be further modified using Excel. It is very important to note that this process is not linear; rather, it is often iterative. Many of the steps in this guide are interdependent on one another. It is very important to discuss the process as it pertains to the specific sample being measured. What works with one sample may not necessarily work for another. This guide attempts to provide a typical work flow that has been successful in most cases.
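The coordinate-system transformation step — mapping measurement locations from the sample coordinate system into the positioner coordinate system using fiducial points measured in both frames — can be illustrated with a standard SVD (Kabsch) rigid-body fit. The fiducial layout and transform below are invented for the example; this is not SScanSS's actual code:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping fiducial
    coordinates src (sample system) onto dst (positioner system),
    via the SVD (Kabsch) method on the centred point sets."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Hypothetical fiducials: the positioner frame is the sample frame
# rotated 90 degrees about z and shifted by (10, 0, 5).
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
fid = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R, t = fit_rigid_transform(fid, fid @ Rz.T + [10., 0., 5.])
print(np.round(R @ [1., 2., 3.] + t, 6))  # a measurement point, transformed
```

With four or more non-collinear fiducials the fit is overdetermined, so fiducial measurement noise averages out rather than propagating directly into the transformed measurement locations.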

  1. Shared decision making and behavioral impairment: a national study among children with special health care needs

    PubMed Central

    2012-01-01

    Background: The Institute of Medicine has prioritized shared decision making (SDM), yet little is known about the impact of SDM over time on behavioral outcomes for children. This study examined the longitudinal association of SDM with behavioral impairment among children with special health care needs (CSHCN). Method: CSHCN aged 5-17 years in the 2002-2006 Medical Expenditure Panel Survey were followed for 2 years. The validated Columbia Impairment Scale measured impairment. SDM was measured with 7 items addressing the 4 components of SDM. The main exposures were (1) the mean level of SDM across the 2 study years and (2) the change in SDM over the 2 years. Using linear regression, we measured the association of SDM and behavioral impairment. Results: Among 2,454 subjects representing 10.2 million CSHCN, SDM increased among 37% of the population, decreased among 36%, and remained unchanged among 27%. For CSHCN impaired at baseline, the change in SDM was significant, with each 1-point increase in SDM over time associated with a 2-point decrease in impairment (95% CI: 0.5, 3.4), whereas the mean level of SDM was not associated with impairment. In contrast, among those below the impairment threshold, the mean level of SDM was significant, with each 1-point increase in the mean level of SDM associated with a 1.1-point decrease in impairment (0.4, 1.7), but the change was not associated with impairment. Conclusion: Although the change in SDM may be more important for children with behavioral impairment and the mean level over time for those below the impairment threshold, results suggest that both the change in SDM and the mean level may impact behavioral health for CSHCN. PMID:22998626
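Decomposing a two-wave exposure into its mean level and its change, as in this study's main exposures, can be sketched with an ordinary least-squares fit on synthetic data. The −2-point effect is planted in the simulated outcome to mirror the reported association, not re-derived from MEPS data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
sdm_y1 = rng.uniform(1, 5, n)            # SDM score, study year 1
sdm_y2 = rng.uniform(1, 5, n)            # SDM score, study year 2
mean_sdm = (sdm_y1 + sdm_y2) / 2         # exposure 1: mean level
change_sdm = sdm_y2 - sdm_y1             # exposure 2: change over time

# Synthetic outcome: each 1-point rise in SDM lowers impairment by 2,
# the mean level has no effect, plus unit-variance noise.
impairment = 20 - 2.0 * change_sdm + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), mean_sdm, change_sdm])
beta, *_ = np.linalg.lstsq(X, impairment, rcond=None)
print(np.round(beta, 2))  # intercept, mean-level effect, change effect
```

Because mean and change are (near-)orthogonal here, the regression cleanly separates the two exposures: the change coefficient recovers −2 while the mean-level coefficient stays near zero.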

  2. Revised Correlation between Odin/OSIRIS PMC Properties and Coincident TIMED/SABER Mesospheric Temperatures

    NASA Technical Reports Server (NTRS)

    Feofilov, A. G.; Petelina, S V.; Kutepov, A. A.; Pesnell, W. D.; Goldberg, R. A.; Llewellyn, E. J.; Russell, J. M.

    2006-01-01

    The Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on board the Odin satellite detects Polar Mesospheric Clouds (PMCs) through the enhancement in the limb-scattered solar radiance. The Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument on board the TIMED satellite is a limb scanning infrared radiometer that measures temperature and vertical profiles and energetic parameters for minor constituents in the mesosphere and lower thermosphere. The combination of OSIRIS and SABER data has been previously used to statistically derive thermal conditions for PMC existence [Petelina et al., 2005]. In this work, we employ the simultaneous common volume measurements of PMCs by OSIRIS and temperature profiles measured by SABER for the Northern Hemisphere summers of 2002-2005, corrected in the polar region by accounting for the vibrational-vibrational energy exchange among the CO2 isotopes [Kutepov et al., 2006]. For each of 20 coincidences identified within plus or minus 1 degree of latitude, plus or minus 2 degrees of longitude, and less than 1 hour in time, the frost point temperatures were calculated using the corresponding SABER temperature profile and water vapor densities of 1, 3, and 10 ppmv. We found that the PMC presence and brightness correlated only with the temperature threshold that corresponds to the frost point. The absolute value of the temperature below the frost point, however, didn't play a significant role in the intensity of the PMC signal for the majority of selected coincidences. The presence of several bright clouds at temperatures above the frost point is obviously related to the limitation of the limb geometry when some near- or far-field PMCs located at higher (and warmer) altitudes appear to be at lower altitudes.
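Frost-point temperatures of the kind computed from the SABER profiles can be estimated by inverting a saturation-vapor-pressure formula over ice. The sketch below uses the Murphy & Koop (2005) expression and an assumed mesopause total pressure of 0.5 Pa; the actual study used the measured SABER profiles:

```python
import math

def p_sat_ice(T):
    """Saturation vapor pressure over ice in Pa (Murphy & Koop 2005),
    valid above ~110 K."""
    return math.exp(9.550426 - 5723.265 / T + 3.53068 * math.log(T)
                    - 0.00728332 * T)

def frost_point(vmr, p_total):
    """Temperature at which the water partial pressure vmr * p_total
    saturates over ice, found by bisection (p_sat is monotone in T)."""
    p_h2o = vmr * p_total
    lo, hi = 100.0, 250.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_sat_ice(mid) < p_h2o:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 3 ppmv of water vapor at an assumed mesopause pressure of ~0.5 Pa.
print(round(frost_point(3e-6, 0.5), 1), "K")
```

Ice (and hence a PMC) can persist only where the local temperature falls below this frost point, which is why the study tests cloud presence against this threshold rather than against absolute temperature.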

  3. Revised Correlation between Odin/OSIRIS PMC Properties and Coincident TIMED/SABER Mesospheric Temperatures

    NASA Technical Reports Server (NTRS)

    Feofilov, A. G.; Petelina, S. V.; Kutepov, A. A.; Pesnell, W. D.; Goldberg, R. A.; Llewellyn, E. J.; Russell, J. M.

    2006-01-01

    The Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on board the Odin satellite detects Polar Mesospheric Clouds (PMCs) through the enhancement in the limb-scattered solar radiance. The Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument on board the TIMED satellite is a limb scanning infrared radiometer that measures temperature and vertical profiles and energetic parameters for minor constituents in the mesosphere and lower thermosphere. The combination of OSIRIS and SABER data has been previously used to statistically derive thermal conditions for PMC existence [Petelina et al., 2005]. In this work, we employ the simultaneous common volume measurements of PMCs by OSIRIS and temperature profiles measured by SABER for the Northern Hemisphere summers of 2002-2005, corrected in the polar region by accounting for the vibrational-vibrational energy exchange among the CO2 isotopes [Kutepov et al., 2006]. For each of 20 coincidences identified within plus or minus 1 degree of latitude, plus or minus 2 degrees of longitude, and less than 1 hour in time, the frost point temperatures were calculated using the corresponding SABER temperature profile and water vapor densities of 1, 3, and 10 ppmv. We found that the PMC presence and brightness correlated only with the temperature threshold that corresponds to the frost point. The absolute value of the temperature below the frost point, however, didn't play a significant role in the intensity of the PMC signal for the majority of selected coincidences. The presence of several bright clouds at temperatures above the frost point is obviously related to the limitation of the limb geometry when some near- or far-field PMCs located at higher (and warmer) altitudes appear to be at lower altitudes.

  4. 40 CFR 797.1300 - Daphnid acute toxicity test.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... continuous exposure over a specified period of time. In this guideline, the effect measured is immobilization... at a point in time, or passing through the test chamber during a specific interval. (7) Static system..., daphnids which have been cultured and acclimated in accordance with the test design are randomly placed...

  5. 40 CFR 797.1300 - Daphnid acute toxicity test.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... continuous exposure over a specified period of time. In this guideline, the effect measured is immobilization... at a point in time, or passing through the test chamber during a specific interval. (7) Static system..., daphnids which have been cultured and acclimated in accordance with the test design are randomly placed...

  6. 40 CFR 797.1300 - Daphnid acute toxicity test.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... continuous exposure over a specified period of time. In this guideline, the effect measured is immobilization... at a point in time, or passing through the test chamber during a specific interval. (7) Static system..., daphnids which have been cultured and acclimated in accordance with the test design are randomly placed...

  7. 40 CFR 797.1300 - Daphnid acute toxicity test.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... continuous exposure over a specified period of time. In this guideline, the effect measured is immobilization... at a point in time, or passing through the test chamber during a specific interval. (7) Static system..., daphnids which have been cultured and acclimated in accordance with the test design are randomly placed...

  8. 40 CFR 797.1300 - Daphnid acute toxicity test.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... continuous exposure over a specified period of time. In this guideline, the effect measured is immobilization... at a point in time, or passing through the test chamber during a specific interval. (7) Static system..., daphnids which have been cultured and acclimated in accordance with the test design are randomly placed...

  9. Comparison of FRF measurements and mode shapes determined using optically image based, laser, and accelerometer measurements

    NASA Astrophysics Data System (ADS)

    Warren, Christopher; Niezrecki, Christopher; Avitabile, Peter; Pingle, Pawan

    2011-08-01

    Today, accelerometers and laser Doppler vibrometers are widely accepted as valid measurement tools for structural dynamic measurements. However, limitations of these transducers prevent the accurate measurement of some phenomena. For example, accelerometers typically measure motion at a limited number of discrete points and can mass-load a structure. Scanning laser vibrometers have a very wide frequency range and can measure many points without mass-loading, but are sensitive to large displacements and can have lengthy acquisition times due to sequential measurements. Image-based stereo-photogrammetry techniques provide additional measurement capabilities that complement the current array of measurement systems by providing an alternative that favors the high-displacement and low-frequency vibrations typically difficult to measure with accelerometers and laser vibrometers. Within this paper, digital image correlation, three-dimensional (3D) point-tracking, 3D laser vibrometry, and accelerometer measurements are all used to measure the dynamics of a structure to compare each of the techniques. Each approach has its benefits and drawbacks, so comparative measurements are made using these approaches to show some of the strengths and weaknesses of each technique. Additionally, the displacements determined using 3D point-tracking are used to calculate frequency response functions, from which mode shapes are extracted. The image-based frequency response functions (FRFs) are compared to those obtained by collocated accelerometers. Extracted mode shapes are then compared to those of a previously validated finite element model (FEM) of the test structure and are shown to have excellent agreement with the FEM and the conventional measurement approaches when compared using the Modal Assurance Criterion (MAC) and Pseudo-Orthogonality Check (POC).
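The Modal Assurance Criterion used for the comparison is a normalized inner product of two mode-shape vectors: 1.0 for identical shapes (up to scale), near 0 for independent ones. A minimal sketch with an invented shape vector:

```python
import numpy as np

def mac(phi1, phi2):
    """Modal Assurance Criterion between two mode-shape vectors:
    |phi1^H phi2|^2 / ((phi1^H phi1)(phi2^H phi2))."""
    num = abs(np.vdot(phi1, phi2)) ** 2
    return num / (np.vdot(phi1, phi1).real * np.vdot(phi2, phi2).real)

# An invented first bending shape; any rescaling (even sign flips from
# different calibration) still gives MAC = 1 against itself.
shape = np.array([0.1, 0.5, 1.0, 0.5, 0.1])
print(round(mac(shape, -3.0 * shape), 3))               # scale-invariant
print(round(mac(shape, np.array([1., 0., -1., 0., 1.])), 3))
```

Scale invariance is what makes MAC suitable for comparing shapes extracted from image-based FRFs, accelerometer FRFs, and an FEM, since each source carries its own arbitrary mode-shape scaling.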

  10. Prevalence and co-occurrence of addictive behaviors among former alternative high school youth: A longitudinal follow-up study.

    PubMed

    Sussman, Steve; Pokhrel, Pallav; Sun, Ping; Rohrbach, Louise A; Spruijt-Metz, Donna

    2015-09-01

    Recent work has studied addictions using a matrix measure, which taps multiple addictions through single responses for each type. This is the first longitudinal study using a matrix measure. We investigated the use of this approach among former alternative high school youth (average age = 19.8 years at baseline; longitudinal n = 538) at risk for addictions. Lifetime and last 30-day prevalence of one or more of 11 addictions reviewed in other work was the primary focus (i.e., cigarettes, alcohol, hard drugs, shopping, gambling, Internet, love, sex, eating, work, and exercise). These were examined at two time points one year apart. Latent class and latent transition analyses (LCA and LTA) were conducted in Mplus. Prevalence rates were stable across the two time points. As in the cross-sectional baseline analysis, the 2-class model (addiction class, non-addiction class) fit the data better at follow-up than models with more classes. Item-response or conditional probabilities for each addiction type did not differ between time points. As a result, the LTA model constrained the conditional probabilities to be equal across the two time points. In the addiction class, larger conditional probabilities (i.e., 0.40-0.49) were found for love, sex, exercise, and work addictions; medium conditional probabilities (i.e., 0.17-0.27) were found for cigarette, alcohol, other drugs, eating, Internet, and shopping addictions; and a small conditional probability (0.06) was found for gambling. Persons in an addiction class tend to remain in this addiction class over a one-year period.
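Given a fitted latent class model, a respondent's class membership follows from Bayes' rule over the item-response (conditional) probabilities. The 3-item, 2-class numbers below are illustrative, not the study's 11-item Mplus estimates:

```python
import numpy as np

def class_posterior(responses, item_probs, class_priors):
    """Posterior class membership for one respondent: Bernoulli
    likelihood of the yes/no items under each class's item-response
    probabilities, weighted by the class prior and normalized."""
    responses = np.asarray(responses)
    like = np.prod(item_probs ** responses
                   * (1 - item_probs) ** (1 - responses), axis=1)
    post = like * class_priors
    return post / post.sum()

# Hypothetical 2-class model over three items: the 'addiction' class
# endorses items often (0.45 each), the other class rarely (0.05).
item_probs = np.array([[0.45, 0.45, 0.45],
                       [0.05, 0.05, 0.05]])
post = class_posterior([1, 1, 0], item_probs, np.array([0.3, 0.7]))
print(np.round(post, 3))
```

Even with a modest prior, endorsing two of three items pushes the posterior strongly toward the addiction class, which is the mechanics behind LCA class assignment.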

  11. Measuring hemoglobin amount and oxygen saturation of skin with advancing age

    NASA Astrophysics Data System (ADS)

    Watanabe, Shumpei; Yamamoto, Satoshi; Yamauchi, Midori; Tsumura, Norimichi; Ogawa-Ochiai, Keiko; Akiba, Tetsuo

    2012-03-01

    We measured the oxygen saturation of skin at various ages using our previously proposed method, which can rapidly simulate skin spectral reflectance with high accuracy. Oxygen saturation is commonly measured by a pulse oximeter to evaluate oxygen delivery for monitoring the functions of the heart and lungs at a specific time. On the other hand, oxygen saturation of skin is expected to assess peripheral conditions. Our previously proposed method, the optical path-length matrix method (OPLM), is based on the Monte Carlo model for multi-layered media (MCML), but can simulate skin spectral reflectance 27,000 times faster than MCML. In this study, we implemented an iterative simulation of OPLM with a nonlinear optimization technique such that this method can also be used for estimating hemoglobin concentration and oxygen saturation from the measured skin spectral reflectance. In the experiments, the skin reflectance spectra of 72 outpatients aged between 20 and 86 years were measured by a spectrophotometer. Three points were measured for each subject: the forearm, the thenar eminence, and the intermediate phalanx. The result showed that the oxygen saturation of skin remained constant at each point as the age varied.

  12. Improvements of the two-dimensional FDTD method for the simulation of normal- and superconducting planar waveguides using time series analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofschen, S.; Wolff, I.

    1996-08-01

    Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using the Fourier transform. The introduced method of time series analysis to extract propagation and attenuation constants reduces the required computation time drastically. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. Therefore, it is possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as they are used in MMIC technology. The simulation results are compared with measurements and show good agreement.
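Extracting propagation and attenuation constants from time-domain probe signals can be sketched by fitting single-frequency phasors to short time series at two planes along the guide, then taking the logarithmic ratio. This is a generic least-squares approach illustrating the idea, not necessarily the authors' time-series method, and the mode parameters are synthetic:

```python
import numpy as np

def phasor(signal, t, f):
    """Least-squares fit of cos/sin at frequency f to a (possibly
    short) time series -- a time-series alternative to a long-run FFT.
    Convention: signal(t) = Re{P * exp(j*2*pi*f*t)}."""
    A = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t)])
    (a, b), *_ = np.linalg.lstsq(A, signal, rcond=None)
    return a - 1j * b

def propagation_constant(v1, v2, dz):
    """Complex gamma = alpha + j*beta from field phasors at two probe
    planes a distance dz apart along the waveguide."""
    return np.log(v1 / v2) / dz

# Synthetic single-mode wave: alpha = 2 Np/m, beta = 30 rad/m, f = 1 GHz.
f, dz = 1e9, 0.01
t = np.linspace(0.0, 4e-9, 200)
alpha, beta = 2.0, 30.0
s1 = np.cos(2 * np.pi * f * t)                                    # z = 0
s2 = np.exp(-alpha * dz) * np.cos(2 * np.pi * f * t - beta * dz)  # z = dz
g = propagation_constant(phasor(s1, t, f), phasor(s2, t, f), dz)
print(round(g.real, 3), round(g.imag, 3))
```

Because the fit needs only a few periods of signal per probe plane, the FDTD run can be truncated much earlier than a frequency-resolution-limited Fourier analysis would allow.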

  13. International Aviation (Selected Articles)

    DTIC Science & Technology

    1991-09-11

    THE ANALYSIS OF DYNAMIC FORCES IN AVIATION STRUCTURES Following along with the development of test manufacturing projects for many types of aircraft...type water troughs. All the main equipment embodies automated measurement controls. It is capable of obtaining test data and curves in real time...results from thousands of calculations, and decisions were made to select the imaginary origin point to act as the turbulence flow origination point

  14. How to use the Sun-Earth Lagrange points for fundamental physics and navigation

    NASA Astrophysics Data System (ADS)

    Tartaglia, A.; Lorenzini, E. C.; Lucchesi, D.; Pucacco, G.; Ruggiero, M. L.; Valko, P.

    2018-01-01

    We illustrate the proposal, nicknamed LAGRANGE, to use spacecraft, located at the Sun-Earth Lagrange points, as a physical reference frame. Performing time of flight measurements of electromagnetic signals traveling on closed paths between the points, we show that it would be possible: (a) to refine gravitational time delay knowledge due both to the Sun and the Earth; (b) to detect the gravito-magnetic frame dragging of the Sun, so deducing information about the interior of the star; (c) to check the possible existence of a galactic gravitomagnetic field, which would imply a revision of the properties of a dark matter halo; (d) to set up a relativistic positioning and navigation system at the scale of the inner solar system. The paper presents estimated values for the relevant quantities and discusses the feasibility of the project analyzing the behavior of the space devices close to the Lagrange points.
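The gravitational time delay in item (a) follows the standard Shapiro formula. The sketch below evaluates the one-way solar delay for an assumed L4-to-L5 leg (both points near 1 au from the Sun, separated by √3 au); the geometry is illustrative, not a value taken from the proposal:

```python
import math

G_M_SUN = 1.32712440018e20   # Sun's GM, m^3/s^2
C = 299_792_458.0            # speed of light, m/s
AU = 1.495978707e11          # astronomical unit, m

def shapiro_delay(r1, r2, R):
    """One-way gravitational (Shapiro) time delay for a signal between
    two points at heliocentric distances r1 and r2, separated by R:
    dt = (2 G M / c^3) * ln((r1 + r2 + R) / (r1 + r2 - R))."""
    return 2 * G_M_SUN / C**3 * math.log((r1 + r2 + R) / (r1 + r2 - R))

# Assumed L4 -> L5 leg: both points ~1 au out, sqrt(3) au apart.
dt = shapiro_delay(AU, AU, math.sqrt(3) * AU)
print(f"{dt * 1e6:.1f} microseconds")
```

Delays of this size (tens of microseconds over inter-Lagrange-point baselines) are enormous compared with modern clock precision, which is why round-trip timing between the points can refine the gravitational time-delay measurement.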

  15. Development of a Comprehensive Assessment of Food Parenting Practices: The Home Self-Administered Tool for Environmental Assessment of Activity and Diet Family Food Practices Survey.

    PubMed

    Vaughn, Amber E; Dearth-Wesley, Tracy; Tabak, Rachel G; Bryant, Maria; Ward, Dianne S

    2017-02-01

    Parents' food parenting practices influence children's dietary intake and risk for obesity and chronic disease. Understanding the influence and interactions between parents' practices and children's behavior is limited by a lack of development and psychometric testing and/or limited scope of current measures. The Home Self-Administered Tool for Environmental Assessment of Activity and Diet (HomeSTEAD) was created to address this gap. This article describes development and psychometric testing of the HomeSTEAD family food practices survey. Between August 2010 and May 2011, a convenience sample of 129 parents of children aged 3 to 12 years were recruited from central North Carolina and completed the self-administered HomeSTEAD survey on three occasions during a 12- to 18-day window. Demographic characteristics and child diet were assessed at Time 1. Child height and weight were measured during the in-home observations (following Time 1 survey). Exploratory factor analysis with Time 1 data was used to identify potential scales. Scales with more than three items were examined for scale reduction. Following this, mean scores were calculated at each time point. Construct validity was assessed by examining Spearman rank correlations between mean scores (Time 1) and children's diet (fruits and vegetables, sugar-sweetened beverages, snacks, sweets) and body mass index (BMI) z scores. Repeated measures analysis of variance was used to examine differences in mean scores between time points, and single-measure intraclass correlations were calculated to examine test-retest reliability between time points. Exploratory factor analysis identified 24 factors and retained 124 items; however, scale reduction narrowed items to 86. The final instrument captures five coercive control practices (16 items), seven autonomy support practices (24 items), and 12 structure practices (46 items). 
All scales demonstrated good internal reliability (α>.62), 18 factors demonstrated construct validity (significant association with child diet, P<0.05), and 22 demonstrated good reliability (intraclass correlation coefficient>0.61). The HomeSTEAD family food practices survey provides a brief, yet comprehensive and psychometrically sound assessment of food parenting practices. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  16. Micromixer-based time-resolved NMR: applications to ubiquitin protein conformation.

    PubMed

    Kakuta, Masaya; Jayawickrama, Dimuthu A; Wolters, Andrew M; Manz, Andreas; Sweedler, Jonathan V

    2003-02-15

    Time-resolved NMR spectroscopy is used to study changes in protein conformation based on the elapsed time after a change in the solvent composition of a protein solution. The use of a micromixer and a continuous-flow method is described where the contents of two capillary flows are mixed rapidly, and then the NMR spectra of the combined flow are recorded at precise time points. The distance after mixing the two fluids and flow rates define the solvent-protein interaction time; this method allows the measurement of NMR spectra at precise mixing time points independent of spectral acquisition time. Integration of a micromixer and a microcoil NMR probe enables low-microliter volumes to be used without losing significant sensitivity in the NMR measurement. Ubiquitin, the model compound, changes its conformation from native to A-state at low pH and in 40% or higher methanol/water solvents. Proton NMR resonances of the His-68 and the Tyr-59 of ubiquitin are used to probe the conformational changes. Mixing ubiquitin and methanol solutions under low pH at microliter per minute flow rates yields both native and A-states. As the flow rate decreases, yielding longer reaction times, the population of the A-state increases. The micromixer-NMR system can probe reaction kinetics on a time scale of seconds.
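    The relation the abstract relies on, that mixing distance and flow rate set the interaction time, can be sketched directly. The capillary dimensions and flow rates below are illustrative assumptions, not the authors' hardware values.

```python
# Hedged sketch: solvent-protein interaction time = capillary distance
# between micromixer and NMR coil divided by the linear flow velocity.
# Geometry and flow values are hypothetical, for illustration only.
import math

def interaction_time(distance_m, flow_ul_per_min, capillary_id_m):
    """Reaction time (s) between mixing and NMR detection."""
    area = math.pi * (capillary_id_m / 2.0) ** 2   # capillary cross-section, m^2
    q = flow_ul_per_min * 1e-9 / 60.0              # volumetric flow, m^3/s
    velocity = q / area                            # linear velocity, m/s
    return distance_m / velocity

# 10 cm of 100-um-ID capillary: halving the flow rate doubles the
# reaction time at fixed geometry, matching the abstract's observation
# that slower flow yields longer reaction times.
t_fast = interaction_time(0.10, 4.0, 100e-6)
t_slow = interaction_time(0.10, 2.0, 100e-6)
print(f"{t_fast:.1f} s vs {t_slow:.1f} s")
```

    With these assumed dimensions the times land on the seconds scale the abstract reports.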

  17. A solution for measuring accurate reaction time to visual stimuli realized with a programmable microcontroller.

    PubMed

    Ohyanagi, Toshio; Sengoku, Yasuhito

    2010-02-01

    This article presents a new solution for measuring accurate reaction time (SMART) to visual stimuli. The SMART is a USB device realized with a Cypress Programmable System-on-Chip (PSoC) mixed-signal array programmable microcontroller. A brief overview of the hardware and firmware of the PSoC is provided, together with the results of three experiments. In Experiment 1, we investigated the timing accuracy of the SMART in measuring reaction time (RT) under different conditions of operating systems (OSs; Windows XP or Vista) and monitor displays (a CRT or an LCD). The results indicated that the timing error in measuring RT by the SMART was less than 2 msec, on average, under all combinations of OS and display and that the SMART was tolerant to jitter and noise. In Experiment 2, we tested the SMART with 8 participants. The results indicated that there was no significant difference among RTs obtained with the SMART under the different conditions of OS and display. In Experiment 3, we used Microsoft (MS) PowerPoint to present visual stimuli on the display. We found no significant difference in RTs obtained using MS DirectX technology versus using the PowerPoint file with the SMART. We are certain that the SMART is a simple and practical solution for measuring RTs accurately. Although there are some restrictions in using the SMART with RT paradigms, the SMART is capable of providing both researchers and health professionals working in clinical settings with new ways of using RT paradigms in their work.

  18. The comparability of the universalism value over time and across countries in the European Social Survey: exact vs. approximate measurement invariance

    PubMed Central

    Zercher, Florian; Schmidt, Peter; Cieciuch, Jan; Davidov, Eldad

    2015-01-01

    Over the last decades, large international datasets such as the European Social Survey (ESS), the European Value Study (EVS) and the World Value Survey (WVS) have been collected to compare value means over multiple time points and across many countries. Yet analyzing comparative survey data requires the fulfillment of specific assumptions, i.e., that these values are comparable over time and across countries. Given the large number of groups that can be compared in repeated cross-national datasets, establishing measurement invariance has been, however, considered unrealistic. Indeed, studies which did assess it often failed to establish higher levels of invariance such as scalar invariance. In this paper we first introduce the newly developed approximate approach based on Bayesian structural equation modeling (BSEM) to assess cross-group invariance over countries and time points and contrast the findings with the results from the traditional exact measurement invariance test. BSEM examines whether measurement parameters are approximately (rather than exactly) invariant. We apply BSEM to a subset of items measuring the universalism value from the Portrait Values Questionnaire (PVQ) in the ESS. The invariance of this value is tested simultaneously across 15 ESS countries over six ESS rounds with 173,071 respondents and 90 groups in total. Whereas the use of the traditional approach only legitimates the comparison of latent means of 37 groups, the Bayesian procedure allows the latent mean comparison of 73 groups. Thus, our empirical application demonstrates for the first time the BSEM test procedure on a particularly large set of groups. PMID:26089811

  19. Latanoprostene Bunod 0.024% versus Timolol Maleate 0.5% in Subjects with Open-Angle Glaucoma or Ocular Hypertension: The APOLLO Study.

    PubMed

    Weinreb, Robert N; Scassellati Sforzolini, Baldo; Vittitow, Jason; Liebmann, Jeffrey

    2016-05-01

    To compare the diurnal intraocular pressure (IOP)-lowering effect of latanoprostene bunod (LBN) ophthalmic solution 0.024% every evening (qpm) with timolol maleate 0.5% twice daily (BID) in subjects with open-angle glaucoma (OAG) or ocular hypertension (OHT). Phase 3, randomized, controlled, multicenter, double-masked, parallel-group clinical study. Subjects aged ≥18 years with a diagnosis of OAG or OHT in 1 or both eyes. Subjects were randomized (2:1) to a 3-month regimen of LBN 0.024% qpm or timolol 0.5% 1 drop BID. Intraocular pressure was measured at 8 am, 12 pm, and 4 pm of each postrandomization visit (week 2, week 6, and month 3). Adverse events were recorded throughout the study. The primary efficacy end point was IOP in the study eye measured at each of the 9 assessment time points. Secondary efficacy end points included the proportion of subjects with IOP ≤18 mmHg consistently at all 9 time points and the proportion of subjects with IOP reduction ≥25% consistently at all 9 time points. Of 420 subjects randomized, 387 completed the study (LBN 0.024%, n = 264; timolol 0.5%, n = 123). At all 9 time points, the mean IOP in the study eye was significantly lower in the LBN 0.024% group than in the timolol 0.5% group (P ≤ 0.002). At all 9 time points, the percentage of subjects with mean IOP ≤18 mmHg and the percentage with IOP reduction ≥25% were significantly higher in the LBN 0.024% group versus the timolol 0.5% group (mean IOP ≤18 mmHg: 22.9% vs. 11.3%, P = 0.005; IOP reduction ≥25%: 34.9% vs. 19.5%, P = 0.001). Adverse events were similar in both treatment groups. In this phase 3 study, LBN 0.024% qpm demonstrated significantly greater IOP lowering than timolol 0.5% BID throughout the day over 3 months of treatment. Latanoprostene bunod 0.024% was effective and safe in these adults with OAG or OHT. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  20. Systematic identification of an integrative network module during senescence from time-series gene expression.

    PubMed

    Park, Chihyun; Yun, So Jeong; Ryu, Sung Jin; Lee, Soyoung; Lee, Young-Sam; Yoon, Youngmi; Park, Sang Chul

    2017-03-15

    Cellular senescence irreversibly arrests the growth of human diploid cells. In addition, recent studies have indicated that senescence is a multi-step evolving process related to important complex biological processes. Most studies analyzed only the genes and functions representing each senescence phase without considering gene-level interactions and continuously perturbed genes. It is necessary to reveal the genotypic mechanism, inferred from the affected genes and their interactions, that underlies the senescence process. We suggested a novel computational approach to identify an integrative network which profiles an underlying genotypic signature from time-series gene expression data. The relatively perturbed genes were selected at each time point based on a proposed scoring measure termed the perturbation score. The selected genes were then integrated with protein-protein interactions to construct time-point-specific networks. From these networks, the edges conserved across time points were extracted to form a common network, and a statistical test was performed to demonstrate that this network could explain the phenotypic alteration. As a result, we confirmed that the difference in average perturbation scores of the common network between two time points could explain the phenotypic alteration. We also performed functional enrichment on the common network and identified a high association with the phenotypic alteration. Remarkably, we observed that the identified cell-cycle-specific common network played an important role in replicative senescence as a key regulator. Heretofore, network analysis of time-series gene expression data has focused on which topological structures change over time. Conversely, we focused on a conserved structure whose context nevertheless changes over time, and showed that it can explain the phenotypic changes. We expect the proposed method to help elucidate biological mechanisms left unrevealed by existing approaches.
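    The pipeline the abstract describes (score perturbed genes per time point, intersect with a PPI network, keep edges conserved across time points) can be sketched on toy data. The scoring function and data below are made up for illustration; the paper's actual perturbation-score definition is not reproduced here.

```python
# Hedged sketch of the three pipeline steps, with a stand-in score
# (absolute expression change vs. a reference) and a toy PPI edge list.

def perturbed_genes(expr_t, expr_ref, top_k):
    """Rank genes by absolute expression change vs. reference; keep top_k."""
    scores = {g: abs(expr_t[g] - expr_ref[g]) for g in expr_t}
    return set(sorted(scores, key=scores.get, reverse=True)[:top_k])

def time_point_network(genes, ppi_edges):
    """Induced subnetwork: PPI edges whose both endpoints are perturbed."""
    return {frozenset(e) for e in ppi_edges
            if e[0] in genes and e[1] in genes}

def common_network(networks):
    """Edges conserved across every time-point-specific network."""
    common = set(networks[0])
    for net in networks[1:]:
        common &= net
    return common

# Toy data: 3 time points, 5 genes, a small hypothetical PPI.
ref = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0, "E": 1.0}
series = [
    {"A": 3.0, "B": 2.5, "C": 1.1, "D": 0.2, "E": 1.0},
    {"A": 2.8, "B": 2.0, "C": 1.0, "D": 0.4, "E": 1.1},
    {"A": 2.5, "B": 2.2, "C": 0.9, "D": 0.3, "E": 1.0},
]
ppi = [("A", "B"), ("B", "C"), ("C", "E"), ("A", "D")]
nets = [time_point_network(perturbed_genes(t, ref, 3), ppi) for t in series]
print(common_network(nets))
```

    On this toy input the edges A-B and A-D survive at every time point, i.e., the "conserved structure" the abstract focuses on.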

  1. Ground-state fidelity and bipartite entanglement in the Bose-Hubbard model.

    PubMed

    Buonsante, P; Vezzani, A

    2007-03-16

    We analyze the quantum phase transition in the Bose-Hubbard model borrowing two tools from quantum-information theory, i.e., the ground-state fidelity and entanglement measures. We consider systems at unitary filling comprising up to 50 sites and show for the first time that a finite-size scaling analysis of these quantities provides excellent estimates for the quantum critical point. We conclude that fidelity is particularly suited for revealing a quantum phase transition and pinning down the critical point thereof, while the success of entanglement measures depends on the mechanisms governing the transition.

  2. ASRDI oxygen technology survey. Volume 4: Low temperature measurement

    NASA Technical Reports Server (NTRS)

    Sparks, L. L.

    1974-01-01

    Information is presented on temperature measurement between the triple point and critical point of liquid oxygen. The criterion selected is that all transducers which may reasonably be employed in the liquid oxygen (LO2) temperature range are considered. The temperature range for each transducer is the appropriate full range for the particular thermometer. The discussion of each thermometer or type of thermometer includes the following information: (1) useful temperature range, (2) general and particular methods of construction and the advantages of each type, (3) specifications (accuracy, reproducibility, response time, etc.), (4) associated instrumentation, (5) calibrations and procedures, and (6) analytical representations.

  3. Application of DPIV to Enhanced Mixing Heated Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Bridges, James

    2002-01-01

    Digital Particle Imaging Velocimetry (DPIV) is a planar velocity measurement technique that continues to be applied to new and challenging engineering research facilities while significantly reducing facility test time. DPIV was used in the GRC Nozzle Acoustic Test Rig (NATR) to characterize the high temperature (560 C), high speed (is greater than 500 m/s) flow field properties of mixing enhanced jet engine nozzles. The instantaneous velocity maps obtained using DPIV were used to determine mean velocity, rms velocity and two-point correlation statistics to verify the true turbulence characteristics of the flow. These measurements will ultimately be used to properly validate aeroacoustic model predictions by verifying CFD input to these models. These turbulence measurements have previously not been possible in hot supersonic jets. Mapping the nozzle velocity field using point based techniques requires over 60 hours of test time, compared to less than 45 minutes using DPIV, yielding a significant reduction in testing time. A dual camera DPIV configuration was used to maximize the field of view and further minimize the testing time required to map the nozzle flow. The DPIV system field of view covered 127 by 267 mm. Data were acquired at 19 axial stations providing coverage of the flow from the nozzle exit to 2.37 in downstream. At each measurement station, 400 image frame pairs were acquired from each camera. The DPIV measurements of the mixing enhanced nozzle designs illustrate the changes in the flow field resulting in the reduced noise signature.
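    The statistics named above (mean velocity, rms velocity, two-point correlation) can be sketched from an ensemble of instantaneous velocity samples at two probe points. The synthetic data below is an illustrative assumption, not NATR measurement data.

```python
# Hedged sketch: ensemble statistics from instantaneous velocity samples
# at two points, including the two-point correlation coefficient
#   R12 = <u1' u2'> / (u1_rms * u2_rms),
# where u' denotes the fluctuation about the mean.
import math
import random

def flow_stats(u1, u2):
    n = len(u1)
    m1, m2 = sum(u1) / n, sum(u2) / n
    f1 = [u - m1 for u in u1]                    # fluctuations at point 1
    f2 = [u - m2 for u in u2]                    # fluctuations at point 2
    rms1 = math.sqrt(sum(x * x for x in f1) / n)
    rms2 = math.sqrt(sum(x * x for x in f2) / n)
    r12 = sum(a * b for a, b in zip(f1, f2)) / (n * rms1 * rms2)
    return m1, rms1, r12

random.seed(0)
# Two probe points sharing a large-scale fluctuation plus local noise
# (synthetic stand-in for two DPIV interrogation points).
shared = [random.gauss(0.0, 20.0) for _ in range(400)]
u1 = [500.0 + s + random.gauss(0.0, 5.0) for s in shared]
u2 = [480.0 + 0.8 * s + random.gauss(0.0, 5.0) for s in shared]
mean1, rms1, r12 = flow_stats(u1, u2)
print(f"mean={mean1:.1f} m/s  rms={rms1:.1f} m/s  R12={r12:.2f}")
```

    A correlation near 1 indicates the two points lie within the same turbulent structure; in practice R12 is mapped as a function of point separation.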

  4. Imaging performance of a LaBr3-based PET scanner

    PubMed Central

    Daube-Witherspoon, M E; Surti, S; Perkins, A; Kyba, C C M; Wiener, R; Werner, M E; Kulp, R; Karp, J S

    2010-01-01

    A prototype time-of-flight (TOF) PET scanner based on cerium-doped lanthanum bromide [LaBr3 (5% Ce)] has been developed. LaBr3 has high light output, excellent energy resolution, and fast timing properties that have been predicted to lead to good image quality. Intrinsic performance measurements of spatial resolution, sensitivity, and scatter fraction demonstrate good conventional PET performance; the results agree with previous simulation studies. Phantom measurements show the excellent image quality achievable with the prototype system. Phantom measurements and corresponding simulations show a faster and more uniform convergence rate, as well as more uniform quantification, for TOF reconstruction of the data, which have 375-ps intrinsic timing resolution, compared to non-TOF images. Measurements and simulations of a hot and cold sphere phantom show that the 7% energy resolution helps to mitigate residual errors in the scatter estimate because a high energy threshold (>480 keV) can be used to restrict the amount of scatter accepted without a loss of true events. Preliminary results with incorporation of a model of detector blurring in the iterative reconstruction algorithm show improved contrast recovery but also point out the importance of an accurate resolution model of the tails of LaBr3’s point spread function. The LaBr3 TOF-PET scanner has demonstrated the impact of superior timing and energy resolutions on image quality. PMID:19949259

  5. Genetic influences on the cognitive biases associated with anxiety and depression symptoms in adolescents.

    PubMed

    Zavos, Helena M S; Rijsdijk, Frühling V; Gregory, Alice M; Eley, Thalia C

    2010-07-01

    There is a substantial overlap between genes affecting anxiety and depression. Both anxiety and depression are associated with cognitive biases such as anxiety sensitivity and attributional style. Little, however, is known about the relationship between these variables and whether these too are genetically correlated. Self-reports of anxiety sensitivity, anxiety symptoms, attributional style and depression symptoms were obtained for over 1300 adolescent twin and sibling pairs at two time points. The magnitude of genetic and environmental influences on the measures was examined. The strongest associations were found between anxiety sensitivity and anxiety ratings at both measurement times (r=.70, .72) and between anxiety and depression (r=.62 at both time points). Correlations between the cognitive biases were modest at time 1 (r=-.12) and slightly larger at time 2 (r=-.31). All measures showed moderate genetic influence. Generally, genetic correlations reflected phenotypic correlations. Thus the highest genetic correlations were between anxiety sensitivity and anxiety ratings (.86, .87) and between anxiety and depression ratings (.77, .71). Interestingly, depression ratings also showed a high genetic correlation with anxiety sensitivity (.70, .76). Genetic correlations between the cognitive bias measures were moderate (-.31, -.46). As the sample consists primarily of twins, there are limitations associated with the twin design. Cognitive biases associated with depression and anxiety are not as genetically correlated as anxiety and depression ratings themselves. Further research into the cognitive processes related to anxiety and depression will facilitate understanding of the relationship between bias and symptoms.

  6. Measurements and theoretical interpretation of points of zero charge/potential of BSA protein.

    PubMed

    Salis, Andrea; Boström, Mathias; Medda, Luca; Cugia, Francesca; Barse, Brajesh; Parsons, Drew F; Ninham, Barry W; Monduzzi, Maura

    2011-09-20

    The points of zero charge/potential of proteins depend not only on pH but also on how they are measured. They depend also on background salt solution type and concentration. The protein isoelectric point (IEP) is determined by electrokinetic measurements, whereas the isoionic point (IIP) is determined by potentiometric titrations. Here we use potentiometric titration and zeta potential (ζ) measurements at different NaCl concentrations to study systematically the effect of ionic strength on the IEP and IIP of bovine serum albumin (BSA) aqueous solutions. It is found that high ionic strengths produce a shift of both points toward lower (IEP) and higher (IIP) pH values. This result was already reported more than 60 years ago. At that time, the only available theory was the purely electrostatic Debye-Hückel theory. It was not able to predict the opposite trends of IIP and IEP with ionic strength increase. Here, we extend that theory to admit both electrostatic and nonelectrostatic (NES) dispersion interactions. The use of a modified Poisson-Boltzmann equation for a simple model system (a charge regulated spherical colloidal particle in NaCl salt solutions), that includes these ion specific interactions, allows us to explain the opposite trends observed for isoelectric point (zero zeta potential) and isoionic point (zero protein charge) of BSA. At higher concentrations, an excess of the anion (with stronger NES interactions than the cation) is adsorbed at the surface due to an attractive ionic NES potential. This makes the potential relatively more negative. Consequently, the IEP is pushed toward lower pH. But the charge regulation condition means that the surface charge becomes relatively more positive as the surface potential becomes more negative. Consequently, the IIP (measuring charge) shifts toward higher pH as concentration increases, in the opposite direction from the IEP (measuring potential). © 2011 American Chemical Society

  7. The Feasibility and Potential Impact of Brain Training Games on Cognitive and Emotional Functioning in Middle-Aged Adults.

    PubMed

    McLaughlin, Paula M; Curtis, Ashley F; Branscombe-Caird, Laura M; Comrie, Janna K; Murtha, Susan J E

    2018-02-01

    To investigate whether a commercially available brain training program is feasible to use with a middle-aged population and has a potential impact on cognition and emotional well-being (proof of concept). Fourteen participants (ages 46-55) completed two 6-week training conditions using a crossover (counterbalanced) design: (1) experimental brain training condition and (2) active control "find answers to trivia questions online" condition. A comprehensive neurocognitive battery and a self-report measure of depression and anxiety were administered at baseline (first time point, before training) and after completing each training condition (second time point at 6 weeks, and third time point at 12 weeks). Cognitive composite scores were calculated for participants at each time point. Study completion and protocol adherence demonstrated good feasibility of this brain training protocol in healthy middle-aged adults. Exploratory analyses suggested that brain training was associated with neurocognitive improvements related to executive attention, as well as improvements in mood. Overall, our findings suggest that brain training programs are feasible in middle-aged cohorts. We propose that brain training games may be linked to improvements in executive attention and affect by promoting cognitive self-efficacy in middle-aged adults.

  8. Emotion Regulation Profiles, Temperament, and Adjustment Problems in Preadolescents

    PubMed Central

    Zalewski, Maureen; Lengua, Liliana J.; Trancik, Anika; Wilson, Anna C.; Bazinet, Alissa

    2014-01-01

    The longitudinal relations of emotion regulation profiles to temperament and adjustment in a community sample of preadolescents (N = 196, 8–11 years at Time 1) were investigated using person-oriented latent profile analysis (LPA). Temperament, emotion regulation, and adjustment were measured at 3 different time points, with each time point occurring 1 year apart. LPA identified 5 frustration and 4 anxiety regulation profiles based on children’s physiological, behavioral, and self-reported reactions to emotion-eliciting tasks. The relation of effortful control to conduct problems was mediated by frustration regulation profiles, as was the relation of effortful control to depression. Anxiety regulation profiles did not mediate relations between temperament and adjustment. PMID:21413935

  9. ASSESSMENT of POTENTIAL CARBON DIOXIDE-BASED DEMAND CONTROL VENTILATION SYSTEM PERFORMANCE in SINGLE ZONE SYSTEMS

    DTIC Science & Technology

    2013-03-21

    and timers use a time-based estimate to predict how many people are in a facility at a given point in the day. CO2-based DCV systems measure CO2...energy and latent energy from the outside air when the coils’ surface temperature is below the dew point of the air passing over the coils (ASHRAE...model assumes that the dew point water saturation pressure is the same as the dry-bulb water vapor pressure, consistent with a typical ASHRAE

  10. Langley's CSI evolutionary model: Phase O

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith; Elliott, Kenny B.; Horta, Lucas G.; Bailey, Jim P.; Bruner, Anne M.; Sulla, Jeffrey L.; Won, John; Ugoletti, Roberto M.

    1991-01-01

    A testbed for the development of Controls Structures Interaction (CSI) technology to improve space science platform pointing is described. The evolutionary nature of the testbed will permit the study of global line-of-sight pointing in phases 0 and 1, whereas multipayload pointing systems will be studied beginning with phase 2. The design, capabilities, and typical dynamic behavior of the phase 0 version of the CSI evolutionary model (CEM) is documented for investigators both internal and external to NASA. The model description includes line-of-sight pointing measurement, testbed structure, actuators, sensors, and real time computers, as well as finite element and state space models of major components.

  11. Using 50 years of soil radiocarbon data to identify optimal approaches for estimating soil carbon residence times

    NASA Astrophysics Data System (ADS)

    Baisden, W. T.; Canessa, S.

    2013-01-01

    In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times, by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.
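    The two-time-point idea above can be sketched with a one-pool turnover model: given soil fraction-modern measurements a decade or more apart, the residence time is the value that lets the pool track the atmospheric bomb-14C curve between them. The atmospheric record and measurement values below are crude illustrative stand-ins, not real data.

```python
# Hedged sketch: one-pool model dF/dt = (F_atm(t) - F(t)) / tau, fitted to
# two soil radiocarbon time points by bisection on the residence time tau.
# The atmospheric curve is a toy approximation of the bomb spike.

def f_atm(year):
    """Toy post-bomb atmospheric fraction-modern curve (illustrative)."""
    if year < 1955:
        return 1.0
    if year < 1965:
        return 1.0 + 0.08 * (year - 1955)        # rapid bomb-spike rise
    return 1.0 + 0.8 * 0.985 ** (year - 1965)    # slow decline after 1965

def soil_f(tau, start_year, end_year, f0):
    """Integrate the one-pool model with yearly steps."""
    f = f0
    for year in range(start_year, end_year):
        f += (f_atm(year) - f) / tau
    return f

def fit_tau(f1, f2, year1, year2, lo=2.0, hi=500.0):
    """Bisect on tau so the model reproduces the second measurement."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        # Longer residence time -> slower uptake of bomb carbon -> lower f2.
        if soil_f(mid, year1, year2, f1) > f2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic 'measurements' generated with a known 25-year residence time.
true_tau = 25.0
f1 = soil_f(true_tau, 1950, 1975, 1.0)   # hypothetical 1975 measurement
f2 = soil_f(true_tau, 1975, 1990, f1)    # hypothetical 1990 measurement
print(f"recovered tau = {fit_tau(f1, f2, 1975, 1990):.1f} years")
```

    With two time points the passive-fraction and input-rate assumptions that plague single-point modelling largely cancel, which is the robustness the abstract emphasises.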

  12. On measuring time metaphors.

    PubMed

    Sobol-Kwapinska, Malgorzata; Oles, Piotr K

    2007-02-01

    Referring to the 2005 article by Wittmann and Lehnhoff, the problem of using time metaphors for measuring awareness of time is posed. Starting from a clarification of the meaning of the Metaphors Slowness scale, which was not homogeneous, an alternative interpretation of the result is proposed: metaphors refer to two separate aspects of the speed of time, an ongoing passage of time and ex post reflection on a passage of time that has already elapsed. The former refers to the judgment of an ongoing passage of time, and the latter to the judgment of the passage of past time from a particular point in the past till now. Time perception is multifaceted and perhaps ambiguous. This particular aspect of time perception is covered by the notion of a "dialectical time", when opposite aspects of time are combined, e.g., pleasant and unpleasant ones.

  13. Sensorimotor and executive function slowing in anesthesiology residents after overnight shifts.

    PubMed

    Williams, George W; Shankar, Bairavi; Klier, Eliana M; Chuang, Alice Z; El Marjiya-Villarreal, Salma; Nwokolo, Omonele O; Sharma, Aanchal; Sereno, Anne B

    2017-08-01

    Medical residents working overnight call shifts experience sleep deprivation and circadian clock disruption. This leads to deficits in sensorimotor function and increases in workplace accidents. Using quick tablet-based tasks, we investigate whether measurable executive function differences exist following a single overnight call versus routine shift, and whether factors like stress, rest and caffeine affect these measures. A prospective, observational, longitudinal, comparison study was conducted. An academic tertiary hospital's main operating room suite staffed by attending anesthesiologists, anesthesiology residents, anesthesiologist assistants and nurse anesthetists. Subjects were 30 anesthesiology residents working daytime shifts and 30 peers working overnight call shifts from the University of Texas Health Science Center at Houston. Before and after their respective work shifts, residents completed the Stanford Sleepiness Scale (SSS) and the ProPoint and AntiPoint tablet-based tasks. These latter tasks are designed to measure sensorimotor and executive functions, respectively. The SSS is a self-reported measure of sleepiness. Response times (RTs) are measured in the pointing tasks. Call residents exhibited increased RTs across their shifts (post-pre) on both ProPoint (p=0.002) and AntiPoint (p<0.002) tasks, when compared to Routine residents. Increased stress was associated with decreases in AntiPoint RT for Routine (p=0.007), but with greater increases in sleepiness for Call residents (p<0.001). Further, whether or not a Call resident consumed caffeine habitually was associated with ProPoint RT changes; with Call residents who habitually drink caffeine having a greater Pre-Post difference (i.e., more slowing, p<0.001) in ProPoint RT.
These results indicate that (1) overnight Call residents demonstrate both sensorimotor and cognitive slowing compared to routine daytime shift residents, (2) sensorimotor slowing is greater in overnight Call residents who drink caffeine habitually, and (3) increased stress during a shift reduces (improves) cognitive RTs during routine daytime but not overnight call shifts. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. [Comparison of ability to humidification of inspired air through the nose and oral cavity using dew point hygrometer].

    PubMed

    Paczesny, Daniel; Rapiejko, Piotr; Weremczuk, Jerzy; Jachowicz, Ryszard; Jurkiewicz, Dariusz

    2007-01-01

    The aim of this study was to test, in a hospital setting, a dew point hygrometer for fast measurement of air humidity in the upper airways. The ability of the nose to humidify inspired air and to partially recover moisture from expired air was evaluated. Measurements from respiration through the nose and through the oral cavity were compared. The study was carried out in a group of 30 people (8 female and 22 male), aged 18 to 70 (mean age: 37 years). In 22 of the participants there was no deviation from the normal state on laryngological examination, while in 4 participants nasal septum deviation without impaired nasal patency was found, in another 3 nasal conchae hypertrophy, and in 1 nasal polyps (grade I). The measurement of air humidity in the upper airways was done using a specially designed and constructed measurement system. The air inspired through the nose and oral cavity is humidified. For typical external conditions (T = 22 degrees C and RH = 50%), the nose humidifies inspired air twice as well as the oral cavity (short measurement time range, approximately 1 min). Moisture from air expired through the nose is partially recovered (for patients with regular patency, 25% of the value of humidification of inspired air). The oral cavity does not have the ability to partially recover moisture from expired air. The paper presents a fast dew point hygrometer based on semiconductor microsystems for measuring humidity in air inspired and expired through the nose and oral cavity. The presented system can be a proper instrument for evaluation of nasal function.
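    The quantity a dew-point hygrometer reports relates to relative humidity through a saturation vapour-pressure formula; the sketch below uses the Magnus approximation (coefficients 6.112 hPa, 17.62, 243.12 °C) as an assumption, since the paper does not state which formula its system uses.

```python
# Hedged sketch: relative humidity from air temperature and measured dew
# point via the Magnus approximation for saturation vapour pressure over
# water (roughly valid between -45 C and +60 C).
import math

def sat_vapour_pressure_hpa(t_celsius):
    """Magnus approximation of saturation vapour pressure, hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def relative_humidity(t_air_c, t_dew_c):
    """RH (%) from air temperature and measured dew point."""
    return (100.0 * sat_vapour_pressure_hpa(t_dew_c)
            / sat_vapour_pressure_hpa(t_air_c))

# The abstract's 'typical external conditions' (T = 22 C, RH = 50%)
# correspond to a dew point of roughly 11 C.
print(f"RH = {relative_humidity(22.0, 11.1):.1f} %")
```

    Saturated air (dew point equal to air temperature) gives 100% by construction, a quick sanity check on the formula.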

  15. Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve

    NASA Astrophysics Data System (ADS)

    Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.

    2009-04-01

    The soil water retention curve (SWRC) is a soil hydraulic property whose direct measurement is time consuming and expensive, yet it is indispensable in environmental studies such as the investigation of unsaturated hydraulic conductivity and solute transport. In this study we therefore attempt to predict the SWRC from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air-entry value) were estimated; water contents at different matric potentials were then computed and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed no significant difference between case 1 and case 2 for the Cresswell and Paydar (1996) method, although the RMSE in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for case 1 (1.63) and case 2 (1.33) than the Cresswell and Paydar (1996) method.
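
    The Tyler and Wheatcraft (1990) model can be written as theta(psi) = theta_s * (psi_a / psi)^(3-D) for psi > psi_a, so its two parameters follow in closed form from two measured (psi, theta) points once theta_s is known. A minimal sketch of such a two-point estimate follows; it is an illustration of the model algebra, not the paper's optimization method, and it assumes a known saturated water content theta_s.

```python
import math

def fit_tyler_wheatcraft(p1, t1, p2, t2, theta_s):
    """Estimate fractal dimension D and air-entry value psi_a of the
    Tyler & Wheatcraft (1990) model  theta = theta_s * (psi_a/psi)**(3-D)
    from two measured (matric potential, water content) points."""
    exponent = math.log(t1 / t2) / math.log(p2 / p1)   # equals 3 - D
    D = 3.0 - exponent
    psi_a = p1 * (t1 / theta_s) ** (1.0 / exponent)
    return D, psi_a

def theta(psi, D, psi_a, theta_s):
    """Water content predicted by the model at matric potential psi."""
    return theta_s if psi <= psi_a else theta_s * (psi_a / psi) ** (3.0 - D)
```

    A round trip with synthetic parameters (e.g. D = 2.6, psi_a = 3 kPa, theta_s = 0.45) recovers both parameters from the 33 and 1500 kPa points used in case 2.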

  16. Method for mapping a natural gas leak

    DOEpatents

    Reichardt, Thomas A [Livermore, CA; Luong, Amy Khai [Dublin, CA; Kulp, Thomas J [Livermore, CA; Devdas, Sanjay [Albany, CA

    2009-02-03

    A system is described that is suitable for use in determining the location of leaks of gases having a background concentration. The system is a point-wise backscatter absorption gas measurement system that measures absorption and distance to each point of an image. The absorption measurement provides an indication of the total amount of a gas of interest, and the distance provides an estimate of the background concentration of gas. The distance is measured from the time-of-flight of a laser pulse that is generated along with the absorption measurement light. The measurements are formatted into an image of the presence of gas in excess of the background. Alternatively, an image of the scene is superimposed on the image of the gas to aid in locating leaks. By further modeling excess gas as a plume having a known concentration profile, the present system provides an estimate of the maximum concentration of the gas of interest.
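
    The distance and background-subtraction steps described above are simple to state: the range follows from the round-trip time of the laser pulse, and the background column is the ambient concentration multiplied by that range. A minimal sketch, with illustrative variable names and units that are not taken from the patent:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(round_trip_s):
    """Target distance (m) from the round-trip time of the laser pulse."""
    return C * round_trip_s / 2.0

def excess_column(total_column_ppm_m, range_m, background_ppm):
    """Path-integrated concentration (ppm*m) in excess of the ambient
    background, given the measured total column and the measured range."""
    return total_column_ppm_m - background_ppm * range_m
```

    For example, a 1 microsecond round trip corresponds to a range of about 150 m; a 500 ppm*m column over 100 m of air at a 1.9 ppm methane background leaves a 310 ppm*m excess attributable to the leak.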

  17. Natural gas leak mapper

    DOEpatents

    Reichardt, Thomas A [Livermore, CA; Luong, Amy Khai [Dublin, CA; Kulp, Thomas J [Livermore, CA; Devdas, Sanjay [Albany, CA

    2008-05-20

    A system is described that is suitable for use in determining the location of leaks of gases having a background concentration. The system is a point-wise backscatter absorption gas measurement system that measures absorption and distance to each point of an image. The absorption measurement provides an indication of the total amount of a gas of interest, and the distance provides an estimate of the background concentration of gas. The distance is measured from the time-of-flight of a laser pulse that is generated along with the absorption measurement light. The measurements are formatted into an image of the presence of gas in excess of the background. Alternatively, an image of the scene is superimposed on the image of the gas to aid in locating leaks. By further modeling excess gas as a plume having a known concentration profile, the present system provides an estimate of the maximum concentration of the gas of interest.

  18. Low-cost standalone multi-sensor thermometer for long time measurements

    NASA Astrophysics Data System (ADS)

    Kumchaiseemak, Nakorn; Hormwantha, Tongchai; Wungmool, Piyachat; Suwanatus, Suchat; Kanjai, Supaporn; Lertkitthaworn, Thitima; Jutamanee, Kanapol; Luengviriya, Chaiya

    2017-09-01

    We present a portable device for long-time recording of the temperature at multiple measuring points. Thermocouple wires are used as the sensors attached to the objects. To minimize the production cost, the measured voltage signals are relayed via a multiplexer to a set of amplifiers and finally to a single microcontroller. The observed temperature and the corresponding date and time, obtained from a real-time clock circuit, are recorded on a memory card for further analysis. The device is powered by a rechargeable battery and placed in a rainproof container, so it can operate under outdoor conditions. Usage of the device is demonstrated in a mandarin orange cultivation field of the Royal Project in northern Thailand.

  19. Multiscale analysis of heart rate dynamics: entropy and time irreversibility measures.

    PubMed

    Costa, Madalena D; Peng, Chung-Kang; Goldberger, Ary L

    2008-06-01

    Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and non-equilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools--multiscale entropy and multiscale time irreversibility--are able to extract information from cardiac interbeat interval time series that is not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs.
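
    Multiscale entropy is computed by coarse-graining the interbeat series at successive scales and taking the sample entropy of each coarse-grained series. The sketch below illustrates the standard algorithm under common parameter choices (m = 2, r = 0.2 times the standard deviation); it is not the authors' code, and for simplicity it recomputes r at each scale, whereas the original MSE method fixes r from the scale-1 series.

```python
import math

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale`."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=None):
    """SampEn: -ln(A/B), with B (A) the number of template pairs of length
    m (m+1) within Chebyshev distance r, self-matches excluded."""
    if r is None:
        mean = sum(x) / len(x)
        r = 0.2 * (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5
    n = len(x)

    def pairs(length):
        t = [x[i:i + length] for i in range(n - m)]
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r)

    B, A = pairs(m), pairs(m + 1)
    return float('inf') if A == 0 or B == 0 else -math.log(A / B)

def multiscale_entropy(x, max_scale):
    """SampEn of the coarse-grained series at scales 1..max_scale."""
    return [sample_entropy(coarse_grain(x, s)) for s in range(1, max_scale + 1)]
```

    A perfectly regular alternating series is fully predictable, so its sample entropy is zero; an uncorrelated series yields a strictly positive value.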

  20. Multiscale Analysis of Heart Rate Dynamics: Entropy and Time Irreversibility Measures

    PubMed Central

    Peng, Chung-Kang; Goldberger, Ary L.

    2016-01-01

    Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and nonequilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools—multiscale entropy and multiscale time irreversibility—are able to extract information from cardiac interbeat interval time series that is not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs. PMID:18172763

  1. Mass Spectrometry Using Nanomechanical Systems: Beyond the Point-Mass Approximation.

    PubMed

    Sader, John E; Hanay, M Selim; Neumann, Adam P; Roukes, Michael L

    2018-03-14

    The mass measurement of single molecules, in real time, is performed routinely using resonant nanomechanical devices. This approach models the molecules as point particles. A recent development now allows the spatial extent (and, indeed, image) of the adsorbate to be characterized using multimode measurements (Hanay, M. S., Nature Nanotechnol., 10, 2015, pp. 339-344). This "inertial imaging" capability is achieved through virtual re-engineering of the resonator's vibrating modes, by linear superposition of their measured frequency shifts. Here, we present a complementary and simplified methodology for the analysis of these inertial imaging measurements that exhibits similar performance while streamlining implementation. This development, together with the software that we provide, enables the broad implementation of inertial imaging that opens the door to a range of novel characterization studies of nanoscale adsorbates.

  2. Methods for measuring populations of small, diurnal forest birds.

    Treesearch

    D.A. Manuwal; A.B. Carey

    1991-01-01

    Before a bird population is measured, the objectives of the study should be clearly defined. Important factors to be considered in designing a study are study site selection, plot size or transect length, distance between sampling points, duration of counts, and frequency and timing of sampling. Qualified field personnel are especially important. Assumptions applying...

  3. Measuring Competence and Dysfunction in Preschool Children: Source Agreement and Component Structure

    ERIC Educational Resources Information Center

    Klyce, Daniel; Conger, Anthony J.; Conger, Judith Cohen; Dumas, Jean E.

    2011-01-01

    Agreement between parents and teachers on ratings of three domains of behaviors exhibited by preschool children and the structural relations between these domains were measured. Parents and teachers rated the behaviors of a socioeconomically diverse sample of 610 children; ratings were obtained from parents at three time points and from teachers…

  4. Lidars for smoke and dust cloud diagnostics

    NASA Astrophysics Data System (ADS)

    Fujimura, S. F.; Warren, R. E.; Lutomirski, R. F.

    1980-11-01

    An algorithm that integrates a time-resolved lidar signature for use in estimating transmittance, extinction coefficient, mass concentration, and CL values generated under battlefield conditions is applied to lidar signatures measured during the DIRT-I tests. Estimates are given for the dependence of the inferred transmittance and extinction coefficient on uncertainties in parameters such as the obscurant backscatter-to-extinction ratio. The enhanced reliability in estimating transmittance through use of a target behind the obscurant cloud is discussed. It is found that the inversion algorithm can produce reliable estimates of smoke or dust transmittance and extinction from all points within the cloud for which a resolvable signal can be detected, and that a single point calibration measurement can convert the extinction values to mass concentration for each resolvable signal point.

  5. Prediction of the Critical Curvature for LX-17 with the Time of Arrival Data from DNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jin; Fried, Laurence E.; Moss, William C.

    2017-01-10

    We extract the detonation shock front velocity, curvature, and acceleration from time of arrival data measured at grid points in direct numerical simulations of a 50 mm rate-stick ignited by a disk source, using the ignition and growth reaction model and a JWL equation of state calibrated for LX-17. We compute the quasi-steady (D, κ) relation based on the extracted properties and predict the critical curvature of LX-17. We also propose an explicit formula that contains the failure turning point, obtained by optimization of the (D, κ) relation of LX-17.
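
    The front velocity can be recovered from an arrival-time field t(x, y) because the local normal speed of a propagating front satisfies D = 1/|grad t|. A minimal central-difference sketch of that extraction step (grid layout and names are illustrative; the paper's extraction also includes curvature and acceleration, omitted here):

```python
def front_speed(t, dx):
    """Normal front speed D = 1/|grad t| at interior points of a uniform
    grid, from central differences of the arrival-time field t[j][i]."""
    ny, nx = len(t), len(t[0])
    D = [[None] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            tx = (t[j][i + 1] - t[j][i - 1]) / (2.0 * dx)
            ty = (t[j + 1][i] - t[j - 1][i]) / (2.0 * dx)
            D[j][i] = 1.0 / (tx * tx + ty * ty) ** 0.5
    return D
```

    For a planar wave with arrival time t = x / D0, the recovered speed equals D0 everywhere in the interior, since central differences are exact for a linear field.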

  6. Sympathetic arousal as a marker of chronicity in childhood stuttering.

    PubMed

    Zengin-Bolatkale, Hatun; Conture, Edward G; Walden, Tedra A; Jones, Robin M

    2018-01-01

    This study investigated whether sympathetic activity during a stressful speaking task was an early marker for stuttering chronicity. Participants were 9 children with persisting stuttering, 23 children who recovered, and 17 children who do not stutter. Participants performed a stress-inducing picture-naming task and skin conductance was measured across three time points. Findings indicated that at the initial time point, children with persisting stuttering exhibited higher sympathetic arousal during the stressful speaking task than children whose stuttering recovered. Findings are taken to suggest that sympathetic activity may be an early marker of heightened risk for chronic stuttering.

  7. The Immediate Effects on Inter-rectus Distance of Abdominal Crunch and Drawing-in Exercises During Pregnancy and the Postpartum Period.

    PubMed

    Mota, Patrícia; Pascoal, Augusto Gil; Carita, Ana Isabel; Bø, Kari

    2015-10-01

    Longitudinal descriptive exploratory study. To evaluate in primigravid women the immediate effect of drawing-in and abdominal crunch exercises on inter-rectus distance (IRD), measured at 4 time points during pregnancy and in the postpartum period. There is scant knowledge of the effect of different abdominal exercises on IRD in pregnant and postpartum women. The study included 84 primiparous participants. Ultrasound images were recorded with a 12-MHz linear transducer, at rest and during abdominal drawing-in and abdominal crunch exercises, at 3 locations on the linea alba. The IRD was measured at 4 time points: gestational weeks 35 to 41, 6 to 8 weeks postpartum, 12 to 14 weeks postpartum, and 24 to 26 weeks postpartum. Separate 2-way, repeated-measures analyses of variance (ANOVAs) were performed for each exercise (drawing-in and abdominal crunch) and each measurement location to evaluate the immediate effects of exercises on IRD at each of the 4 time points. Similarly, 2-way ANOVAs were used to contrast the effects of the 2 exercises on IRD. Performing the drawing-in exercise caused a significant change in width of the IRD at 2 cm below the umbilicus, narrowing the IRD by a mean of 3.8 mm (95% confidence interval [CI]: 1.2, 6.4 mm) at gestational weeks 35 to 41, and widening the IRD by 3.0 mm (95% CI: 1.4, 4.6 mm) at 6 to 8 weeks postpartum, by 1.8 mm (95% CI: 0.6, 3.1 mm) at 12 to 14 weeks postpartum, and by 2.5 mm (95% CI: 1.4, 3.6 mm) at 24 to 26 weeks postpartum (P<.01). Performing the abdominal crunch exercise led to a significant narrowing of the IRD (P<.01) in all 3 locations at all 4 time points, with the exception of 2 cm below the umbilicus at postpartum weeks 24 to 26. The average amount of narrowing varied from 1.6 to 20.9 mm, based on time and location. Overall, there was a contrasting effect of the 2 exercises, with the abdominal crunch exercise consistently producing a significant narrowing of the IRD. 
In contrast, the drawing-in exercise generally led to a small widening of the IRD.

  8. Effective data validation of high-frequency data: time-point-, time-interval-, and trend-based methods.

    PubMed

    Horn, W; Miksch, S; Egghart, G; Popow, C; Paky, F

    1997-09-01

    Real-time systems for monitoring and therapy planning, which receive their data from on-line monitoring equipment and computer-based patient records, require reliable data. Data validation has to utilize and combine a set of fast methods to detect, eliminate, and repair faulty data, which may lead to life-threatening conclusions. The strength of data validation results from the combination of numerical and knowledge-based methods applied to both continuously-assessed high-frequency data and discontinuously-assessed data. When dealing with high-frequency data, examining single measurements is not sufficient; it is essential to take into account the behavior of parameters over time. We present time-point-, time-interval-, and trend-based methods for validation and repair. These are complemented by time-independent methods for determining an overall reliability of measurements. The data validation benefits from the temporal data-abstraction process, which provides automatically derived qualitative values and patterns. The temporal abstraction is oriented on a context-sensitive and expectation-guided principle. Additional knowledge derived from domain experts forms an essential part of all of these methods. The methods are applied in the field of artificial ventilation of newborn infants. Examples from the real-time monitoring and therapy-planning system VIE-VENT illustrate the usefulness and effectiveness of the methods.
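
    The three numerical check families named in the abstract can be sketched as simple predicates: a time-point check tests a single value against a plausibility range, a time-interval check bounds the rate of change between consecutive measurements, and a trend check tests the sign of a short-term slope. The thresholds and names below are illustrative, not those of VIE-VENT:

```python
def validate_point(value, lo, hi):
    """Time-point check: single measurement within a plausibility range."""
    return lo <= value <= hi

def validate_interval(prev, curr, dt, max_rate):
    """Time-interval check: rate of change between consecutive samples
    must not exceed a physiologically credible maximum."""
    return abs(curr - prev) / dt <= max_rate

def validate_trend(values, dt, expected_sign, min_slope=0.0):
    """Trend check: least-squares slope over a short window must agree
    in sign (and minimum magnitude) with the expected trend."""
    n = len(values)
    ts = [i * dt for i in range(n)]
    tbar, vbar = sum(ts) / n, sum(values) / n
    slope = (sum((t - tbar) * (v - vbar) for t, v in zip(ts, values))
             / sum((t - tbar) ** 2 for t in ts))
    return slope * expected_sign >= min_slope
```

    In a real system these predicates would be combined with knowledge-based rules and reliability weighting, as the abstract describes; a value failing the interval check, for instance, might be repaired rather than merely discarded.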

  9. Assessing blood coagulation status with laser speckle rheology

    PubMed Central

    Tripathi, Markandey M.; Hajjarian, Zeinab; Van Cott, Elizabeth M.; Nadkarni, Seemantini K.

    2014-01-01

    We have developed and investigated a novel optical approach, Laser Speckle Rheology (LSR), to evaluate a patient’s coagulation status by measuring the viscoelastic properties of blood during coagulation. In LSR, a blood sample is illuminated with laser light and temporal speckle intensity fluctuations are measured using a high-speed CMOS camera. During blood coagulation, changes in the viscoelastic properties of the clot restrict Brownian displacements of light scattering centers within the sample, altering the rate of speckle intensity fluctuations. As a result, blood coagulation status can be measured by relating the time scale of speckle intensity fluctuations with clinically relevant coagulation metrics including clotting time and fibrinogen content. Our results report a close correlation between coagulation metrics measured using LSR and conventional coagulation results of activated partial thromboplastin time, prothrombin time and functional fibrinogen levels, creating the unique opportunity to evaluate a patient’s coagulation status in real-time at the point of care. PMID:24688816
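
    The central quantity in LSR is the time scale of the speckle intensity fluctuations, which can be estimated from the autocovariance of a pixel's intensity time trace. A minimal sketch, taking the 1/e crossing of the normalized autocovariance as the decorrelation time; the actual LSR analysis relates speckle dynamics to viscoelastic moduli, which is beyond this illustration:

```python
import math

def normalized_autocovariance(x, max_lag):
    """Autocovariance of x at lags 0..max_lag, normalized to 1 at lag 0."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return [sum((x[i] - mean) * (x[i + lag] - mean)
                for i in range(n - lag)) / ((n - lag) * var)
            for lag in range(max_lag + 1)]

def decorrelation_lag(x, max_lag):
    """First lag at which the normalized autocovariance falls below 1/e."""
    for lag, c in enumerate(normalized_autocovariance(x, max_lag)):
        if c < 1.0 / math.e:
            return lag
    return None
```

    Faster fluctuations (softer, less clotted samples in the LSR picture) give a shorter decorrelation lag than slow ones.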

  10. Subjective Measurements of In-Flight Sleep, Circadian Variation, and Their Relationship with Fatigue.

    PubMed

    van den Berg, Margo J; Wu, Lora J; Gander, Philippa H

    This study examined whether subjective measurements of in-flight sleep could be a reliable alternative to actigraphic measurements for monitoring pilot fatigue in a large-scale survey. Pilots (3-pilot crews) completed a 1-page survey on outbound and inbound long-haul flights crossing 1-7 time zones (N = 586 surveys) between 53 city pairs with 1-d layovers. Across each flight, pilots documented flight start and end times, break times, and in-flight sleep duration and quality if they attempted sleep. They also rated their fatigue (Samn-Perelli Crew Status Check) and sleepiness (Karolinska Sleepiness Scale) at top of descent (TOD). Mixed model ANCOVA was used to identify independent factors associated with sleep duration, quality, and TOD measures. Domicile time was used as a surrogate measure of circadian phase. Sleep duration increased by 10.2 min for every 1-h increase in flight duration. Sleep duration and quality varied by break start time, with significantly more sleep obtained during breaks starting between (domicile) 22:00-01:59 and 02:00-05:59 compared to earlier breaks. Pilots were more fatigued and sleepy at TOD on flights arriving between 02:00-05:59 and 06:00-09:59 domicile time compared to other flights. With every 1-h increase in sleep duration, sleepiness ratings at TOD decreased by 0.6 points and fatigue ratings decreased by 0.4 points. The present findings are consistent with previous actigraphic studies, suggesting that self-reported sleep duration is a reliable alternative to actigraphic sleep in this type of study, with use of validated measures, sufficiently large sample sizes, and where fatigue risk is expected to be low. van den Berg MJ, Wu LJ, Gander PH. Subjective measurements of in-flight sleep, circadian variation, and their relationship with fatigue. Aerosp Med Hum Perform. 2016; 87(10):869-875.

  11. Comparison of IKDC and SANE Outcome Measures Following Knee Injury in Active Female Patients

    PubMed Central

    Winterstein, Andrew P.; McGuine, Timothy A.; Carr, Kathleen E.; Hetzel, Scott J.

    2013-01-01

    Background: Knee injury among young, active female patients remains a public health issue. Clinicians are called upon to pay greater attention to patient-oriented outcomes to evaluate the impact of these injuries. Little agreement exists on which outcome measures are best, and clinicians cite several barriers to their use. Single Assessment Numerical Evaluation (SANE) may provide meaningful outcome information while lessening the time burden associated with other patient-oriented measures. Hypothesis: The SANE and International Knee Documentation Committee (IKDC) scores would be strongly correlated in a cohort of young active female patients with knee injuries from preinjury through 1-year follow-up and that a minimal clinically important difference (MCID) could be calculated for the SANE score. Study Design: Observational prospective cohort. Methods: Two hundred sixty-three subjects completed SANE and IKDC at preinjury by recall, time of injury, and 3, 6, and 12 months postinjury. Pearson correlation coefficients were used to assess the association between SANE and IKDC. Repeated-measures analysis of variance was used to determine differences in SANE and IKDC over time. MCID was calculated for SANE using IKDC MCID as an anchor. Results: Moderate to strong correlations were seen between SANE and IKDC (0.65-0.83). SANE, on average, was 2.7 (95% confidence interval, 1.5-3.9; P < 0.00) units greater than IKDC over all time points. MCID for the SANE was calculated as 7 for a 6-month follow-up and 19 for a 12-month follow-up. Conclusion: SANE scores were moderately to strongly correlated to IKDC scores across all time points. Reported MCID values for the SANE should be utilized to measure meaningful changes over time for young, active female patients with knee injuries. 
Clinical Relevance: Providing clinicians with patient-oriented outcome measures that can be obtained with little clinician and patient burden may allow for greater acceptance and use of outcome measures in clinical settings. PMID:24427427

  12. Brief report: a preliminary study of fetal head circumference growth in autism spectrum disorder.

    PubMed

    Whitehouse, Andrew J O; Hickey, Martha; Stanley, Fiona J; Newnham, John P; Pennell, Craig E

    2011-01-01

    Fetal head circumference (HC) growth was examined prospectively in children with autism spectrum disorder (ASD). ASD participants (N = 14) were each matched with four control participants (N = 56) on a range of parameters known to influence fetal growth. HC was measured using ultrasonography at approximately 18 weeks gestation and again at birth using a paper tape-measure. Overall body size was indexed by fetal femur-length and birth length. There was no between-groups difference in head circumference at either time-point. While a small number of children with ASD had disproportionately large head circumference relative to body size at both time-points, the between-groups difference did not reach statistical significance in this small sample. These preliminary findings suggest that further investigation of fetal growth in ASD is warranted.

  13. Low-dose caffeine administered in chewing gum does not enhance cycling to exhaustion.

    PubMed

    Ryan, Edward J; Kim, Chul-Ho; Muller, Matthew D; Bellar, David M; Barkley, Jacob E; Bliss, Matthew V; Jankowski-Wilkinson, Andrea; Russell, Morgan; Otterstetter, Ronald; Macander, Daniela; Glickman, Ellen L; Kamimori, Gary H

    2012-03-01

    The purpose of the current investigation was to examine the effect of low-dose caffeine (CAF) administered in chewing gum at 3 different time points during submaximal cycling exercise to exhaustion. Eight college-aged (26 ± 4 years), physically active (45.5 ± 5.7 ml·kg(-1)·min(-1)) volunteers participated in 4 experimental trials. Two pieces of caffeinated chewing gum (100 mg per piece, total quantity of 200 mg) were administered in a double-blind manner at 1 of 3 time points (-35, -5, and +15 minutes), with placebo at the other 2 points and at all 3 points in the control trial. The participants cycled at 85% of maximal oxygen consumption until volitional fatigue, and time to exhaustion (TTE) was recorded in minutes. Venous blood samples were obtained at -40, -10, and immediately postexercise and analyzed for serum free fatty acid and plasma catecholamine concentrations. Oxygen consumption, respiratory exchange ratio, heart rate, glucose, lactate, ratings of perceived exertion, and perceived leg pain measures were obtained at baseline and every 10 minutes during cycling. The results showed that there were no significant differences between the trials for any of the parameters measured, including TTE. These findings suggest that low-dose CAF administered in chewing gum has no effect on TTE during cycling in recreational athletes and is, therefore, not recommended.

  14. Haloperidol and reduced haloperidol concentrations in plasma and red blood cells from chronic schizophrenic patients.

    PubMed

    Ko, G N; Korpi, E R; Kirch, D G

    1989-06-01

    In a double-blind, placebo-controlled study, 15 drug-free chronic schizophrenic inpatients were treated with a fixed dose of haloperidol for 6 weeks. Haloperidol and its metabolite, reduced haloperidol, were measured in plasma and red blood cells after 2, 4, and 6 weeks of treatment. Behavioral change was rated using the Brief Psychiatric Rating Scale (BPRS). Not only the raw concentrations, but also blood compartment sums and ratios of these four drug measurements were tested for their strength of association with behavioral improvement. Positive associations with some BPRS subscales at some time points emerged; however, no significant correlations were found to extend across all time points measured. There was a trend in this cohort for negative symptom improvement to be associated with the ratio of haloperidol to reduced haloperidol in red blood cells. The ratio of haloperidol to reduced haloperidol in plasma was always greater than that in the red blood cells for all patients, reflecting an accumulation of the metabolite in red blood cells.

  15. Theory of a time-dependent heat diffusion determination of thermal diffusivities with a single temperature measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez, R. B.; Carroll, R. M.; Sisman, O.

    1971-02-01

    A method to measure the thermal diffusivity of reactor fuels during irradiation is developed, based on a time-dependent heat diffusion equation. With this technique the temperature is measured at only one point in the fuel specimen. This method has the advantage that it is not necessary to know the heat generation (a difficult evaluation during irradiation). The theory includes realistic boundary conditions, applicable to actual experimental systems. The parameters are the time constants associated with the first two time modes in the temperature-vs-time curve resulting from a step change in heat input to the specimen. With the time constants and the necessary material properties and dimensions of the specimen and specimen holder, the thermal diffusivity of the specimen can be calculated.
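
    After the faster modes decay, the step response is dominated by the slowest mode, so its time constant can be read off the slope of ln(T_inf - T) versus t. For an idealized slab of half-thickness L held at fixed surface temperature, the slowest mode gives tau_1 = L^2/(pi^2 * alpha), hence alpha = L^2/(pi^2 * tau_1). The paper uses the first two time constants with realistic boundary conditions, so the geometry factor below is an illustrative simplification, not its actual formula:

```python
import math

def tau_from_tail(ts, temps, t_inf):
    """Slowest-mode time constant from a least-squares fit of
    ln(t_inf - T) versus t over the late-time tail of a step response."""
    ys = [math.log(t_inf - T) for T in temps]
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    return -1.0 / slope

def diffusivity_slab(tau1, half_thickness):
    """Idealized slab with fixed surface temperature:
    alpha = L^2 / (pi^2 * tau1)."""
    return half_thickness ** 2 / (math.pi ** 2 * tau1)
```

    On a synthetic two-mode response sampled after roughly three time constants, the fit recovers the diffusivity to within about a percent, since the second mode has already decayed by several orders of magnitude.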

  16. Soil moisture optimal sampling strategy for Sentinel 1 validation super-sites in Poland

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Lipiec, Jerzy; Usowicz, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan

    2014-05-01

    Soil moisture (SM) exhibits high temporal and spatial variability that depends not only on the rainfall distribution, but also on the topography of the area, the physical properties of the soil, and the vegetation characteristics. This large variability prevents reliable estimation of SM in the surface layer from ground point measurements alone, especially at large spatial scales. Remote sensing estimates the spatial distribution of SM in the surface layer better than point measurements do, but it requires validation. This study attempts to characterize the SM distribution by determining its spatial variability in relation to the number and location of ground point measurements. The strategy takes into account gravimetric and TDR measurements with different sampling steps, numbers, and distributions of measuring points at the scales of an arable field, a wetland, and a commune (areas of 0.01, 1, and 140 km2, respectively), under different SM conditions. Mean values of SM were only weakly sensitive to changes in the number and arrangement of sampling points, whereas the parameters describing dispersion responded more significantly. Spatial analysis showed autocorrelation of the SM, with correlation lengths that depended on the number and distribution of points within the adopted grids. Directional analysis revealed differing anisotropy of SM for different grids and numbers of measuring points. Both the number of samples and their layout over the experimental area are therefore reflected in the parameters characterizing the SM distribution. This suggests the need for at least two sampling variants, differing in the number and positioning of the measurement points, with at least 20 points in each: the standard error and the range of spatial variability change little as the number of samples increases beyond this figure. 
The gravimetric method gives a more varied distribution of SM than the TDR measurements. Note that reducing the number of samples in the measuring grid flattens the SM distribution obtained from both methods while increasing the estimation error. A grid of sensors for permanent measurement points should include points whose neighborhoods have similar SM distributions. The number of points, the maximum correlation ranges, and the acceptable estimation error from this analysis should be taken into account when choosing the measurement points, and the adopted (or adjusted) distribution of measurement points should be verified by additional measuring campaigns during dry and wet periods. The presented approach seems appropriate for creating regional-scale test (super) sites to validate products of satellites equipped with C-band SAR (Synthetic Aperture Radar) at a spatial resolution suited to the single-field scale, such as ERS-1, ERS-2, Radarsat, and Sentinel-1, which is to be launched in the next few months. The work was partially funded by the Government of Poland through an ESA Contract under the PECS ELBARA_PD project No. 4000107897/13/NL/KML.
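
    The correlation ranges discussed above are typically read off an empirical semivariogram of the point measurements. A minimal sketch of the classical (Matheron) estimator with distance binning; coordinates, bin width, and units are illustrative:

```python
from collections import defaultdict

def semivariogram(coords, values, bin_width):
    """Empirical semivariogram: gamma(h) = half the mean squared difference
    between pairs of values, binned by pair separation distance h.
    Returns {bin-center distance: gamma} for non-empty bins."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            dx = coords[i][0] - coords[j][0]
            dy = coords[i][1] - coords[j][1]
            h = (dx * dx + dy * dy) ** 0.5
            b = int(h / bin_width)
            sums[b] += (values[i] - values[j]) ** 2
            counts[b] += 1
    return {(b + 0.5) * bin_width: sums[b] / (2.0 * counts[b])
            for b in sorted(counts)}
```

    The lag at which gamma levels off (the range) indicates the correlation length; comparing ranges from sampling variants with different point counts and layouts is one way to implement the two-variant strategy the abstract recommends.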

  17. Deformation of angle profiles in forward kinematics for nullifying end-point offset while preserving movement properties.

    PubMed

    Zhang, Xudong

    2002-10-01

    This work describes a new approach that allows an angle-domain human movement model to generate, via forward kinematics, Cartesian-space human movement representation with otherwise inevitable end-point offset nullified but much of the kinematic authenticity retained. The approach incorporates a rectification procedure that determines the minimum postural angle change at the final frame to correct the end-point offset, and a deformation procedure that deforms the angle profile accordingly to preserve maximum original kinematic authenticity. Two alternative deformation schemes, named amplitude-proportional (AP) and time-proportional (TP) schemes, are proposed and formulated. As an illustration and empirical evaluation, the proposed approach, along with two deformation schemes, was applied to a set of target-directed right-hand reaching movements that had been previously measured and modeled. The evaluation showed that both deformation schemes nullified the final frame end-point offset and significantly reduced time-averaged position errors for the end-point as well as the most distal intermediate joint while causing essentially no change in the remaining joints. A comparison between the two schemes based on time-averaged joint and end-point position errors indicated that overall the TP scheme outperformed the AP scheme. In addition, no statistically significant difference in time-averaged angle error was identified between the raw prediction and either of the deformation schemes, nor between the two schemes themselves, suggesting minimal angle-domain distortion incurred by the deformation.

  18. An in-situ measuring method for planar straightness error

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie

    2018-01-01

To address several problems encountered in measuring the plane shape error of a workpiece, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional tools such as the knife straightedge, as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerical control (NC) machine. The measuring head moves along a planned path to sample the measuring points. The spatial coordinates of the measuring points are obtained by combining the readings of the laser triangulation displacement sensor with the coordinate system of the NC machine, from which the measurement indicators can be computed. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were implemented with a CMM. Comparison of the measuring head results with the corresponding values obtained by a composite measuring machine verifies that the method can realize high-precision, automatic measurement of the planar straightness error of a workpiece.
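The abstract names PSO as the evaluation method but gives no details; below is a minimal, self-contained sketch (the minimum-zone cost and all parameter values are our assumptions, not the paper's formulation) of evaluating straightness as the minimum-zone band width around a fitted line:

```python
import random

def straightness_error(points, a, b):
    """Minimum-zone band width of residuals around the line z = a + b*x."""
    r = [z - (a + b * x) for x, z in points]
    return max(r) - min(r)

def pso_straightness(points, n_particles=30, n_iter=200, seed=1):
    """Minimal PSO over (a, b); a sketch, not the authors' exact scheme."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [straightness_error(points, *p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            cost = straightness_error(points, *pos[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], cost
                if cost < gcost:
                    gbest, gcost = pos[i][:], cost
    return gbest, gcost

# Points on z = 0.5*x with a ±0.01 ripple: the optimal band width is 0.02.
pts = [(x, 0.5 * x + (0.01 if x % 2 else -0.01)) for x in range(10)]
best_line, err = pso_straightness(pts)
```

Note that the band width max(r) − min(r) is invariant to the intercept a, so the search effectively optimizes the slope; for the synthetic ripple above the swarm should converge near the 0.02 optimum.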

  19. Toward an integrated ice core chronology using relative and orbital tie-points

    NASA Astrophysics Data System (ADS)

    Bazin, L.; Landais, A.; Lemieux-Dudon, B.; Toyé Mahamadou Kele, H.; Blunier, T.; Capron, E.; Chappellaz, J.; Fischer, H.; Leuenberger, M.; Lipenkov, V.; Loutre, M.-F.; Martinerie, P.; Parrenin, F.; Prié, F.; Raynaud, D.; Veres, D.; Wolff, E.

    2012-04-01

Precise ice core chronologies are essential to better understand the mechanisms linking climate change to orbital and greenhouse gas concentration forcing. A tool for ice core dating, DATICE (developed by Lemieux-Dudon et al., 2010), can generate a common time-scale integrating relative and absolute dating constraints on different ice cores, using an inverse method. Nevertheless, this method has only been applied to a four-ice-core scenario and to the 0-50 kyr time period. Here, we present the basis for an extension of this work back to 800 ka using (1) a compilation of published and new relative and orbital tie-points obtained from measurements of air trapped in ice cores and (2) an adaptation of the DATICE inputs to 5 ice cores for the last 800 ka. We first present new measurements of δ18Oatm and δO2/N2 on the Talos Dome and EPICA Dome C (EDC) ice cores, with a particular focus on Marine Isotopic Stages (MIS) 5 and 11. Then, we show two tie-point compilations. The first is based on new and published CH4 and δ18Oatm measurements on 5 ice cores (NorthGRIP, EPICA Dronning Maud Land, EDC, Talos Dome and Vostok), producing a table of relative gas tie-points over the last 400 ka. The second is based on new and published records of δO2/N2, δ18Oatm and air content, providing a table of orbital tie-points over the last 800 ka. Finally, we integrate the different dating constraints presented above into the DATICE tool adapted to 5 ice cores to cover the last 800 ka and show how these constraints compare with the established gas chronologies of each ice core.

  20. Volatile organic compound emissions from the oil and natural gas industry in the Uinta Basin, Utah: point sources compared to ambient air composition

    NASA Astrophysics Data System (ADS)

    Warneke, C.; Geiger, F.; Edwards, P. M.; Dube, W.; Pétron, G.; Kofler, J.; Zahn, A.; Brown, S. S.; Graus, M.; Gilman, J.; Lerner, B.; Peischl, J.; Ryerson, T. B.; de Gouw, J. A.; Roberts, J. M.

    2014-05-01

The emissions of volatile organic compounds (VOCs) associated with oil and natural gas production in the Uinta Basin, Utah were measured at a ground site in Horse Pool and from a NOAA mobile laboratory with PTR-MS instruments. The VOC compositions in the vicinity of individual gas and oil wells and other point sources such as evaporation ponds, compressor stations and injection wells are compared to the measurements at Horse Pool. High mixing ratios of aromatics, alkanes, cycloalkanes and methanol were observed for extended periods of time, along with short-term spikes caused by local point sources. The mixing ratios during the time the mobile laboratory spent on the well pads were averaged. High mixing ratios were found close to all point sources, but gas wells using dry-gas collection, in which dehydration happens at the well, were clearly associated with higher mixing ratios than other wells. Another large source was the flowback pond near a recently hydraulically re-fractured gas well. The comparison of the VOC composition of the emissions from the oil and natural gas wells showed that wet-gas collection wells compared well with the majority of the data at Horse Pool and that oil wells compared well with the rest of the ground-site data. Oil wells on average emit heavier compounds than gas wells. The mobile laboratory measurements confirm the results from an emissions inventory: the main VOC source categories among individual point sources are dehydrators, oil and condensate tank flashing, and pneumatic devices and pumps. Raw natural gas is emitted from the pneumatic devices and pumps, and heavier VOC mixes from the tank flashing.

  1. POLARBEAR constraints on cosmic birefringence and primordial magnetic fields

    DOE PAGES

    Ade, Peter A. R.; Arnold, Kam; Atlas, Matt; ...

    2015-12-08

Here, we constrain anisotropic cosmic birefringence using four-point correlations of even-parity E-mode and odd-parity B-mode polarization in the cosmic microwave background measurements made by the POLARization of the Background Radiation (POLARBEAR) experiment in its first season of observations. We find that the anisotropic cosmic birefringence signal from any parity-violating processes is consistent with zero. The Faraday rotation from anisotropic cosmic birefringence can be compared with the equivalent quantity generated by primordial magnetic fields if they existed. The POLARBEAR nondetection translates into a 95% confidence level (C.L.) upper limit of 93 nanogauss (nG) on the amplitude of an equivalent primordial magnetic field inclusive of systematic uncertainties. This four-point correlation constraint on Faraday rotation is about 15 times tighter than the upper limit of 1380 nG inferred from constraining the contribution of Faraday rotation to two-point correlations of B-modes measured by Planck in 2015. Metric perturbations sourced by primordial magnetic fields would also contribute to the B-mode power spectrum. Using the POLARBEAR measurements of the B-mode power spectrum (two-point correlation), we set a 95% C.L. upper limit of 3.9 nG on primordial magnetic fields assuming a flat prior on the field amplitude. This limit is comparable to what was found in the Planck 2015 two-point correlation analysis with both temperature and polarization. Finally, we perform a set of systematic error tests and find no evidence for contamination. This work marks the first time that anisotropic cosmic birefringence or primordial magnetic fields have been constrained from the ground at subdegree scales.

  2. Measuring Diameters Of Large Vessels

    NASA Technical Reports Server (NTRS)

    Currie, James R.; Kissel, Ralph R.; Oliver, Charles E.; Smith, Earnest C.; Redmon, John W., Sr.; Wallace, Charles C.; Swanson, Charles P.

    1990-01-01

    Computerized apparatus produces accurate results quickly. Apparatus measures diameter of tank or other large cylindrical vessel, without prior knowledge of exact location of cylindrical axis. Produces plot of inner circumference, estimate of true center of vessel, data on radius, diameter of best-fit circle, and negative and positive deviations of radius from circle at closely spaced points on circumference. Eliminates need for time-consuming and error-prone manual measurements.

  3. Reliable clinical serum analysis with reusable electrochemical sensor: Toward point-of-care measurement of the antipsychotic medication clozapine.

    PubMed

    Kang, Mijeong; Kim, Eunkyoung; Winkler, Thomas E; Banis, George; Liu, Yi; Kitchen, Christopher A; Kelly, Deanna L; Ghodssi, Reza; Payne, Gregory F

    2017-09-15

Clozapine is one of the most promising medications for managing schizophrenia, but it is under-utilized because of the challenges of maintaining serum levels in a safe therapeutic range (1-3 μM). Timely measurement of serum clozapine levels has been identified as a barrier to broader use of clozapine, but such measurement is challenging due to the complexity of serum samples. We demonstrate a robust and reusable electrochemical sensor with a graphene-chitosan composite for rapidly measuring serum levels of clozapine. Our electrochemical measurements in clinical serum from clozapine-treated and clozapine-untreated schizophrenia groups are well correlated with centralized laboratory analysis for the readily detected uric acid and for clozapine, which is present at 100-fold lower concentration. The benefits of our electrochemical measurement approach for serum clozapine monitoring are: (i) rapid measurement (≈20 min) without serum pretreatment; (ii) appropriate selectivity and sensitivity (limit of detection 0.7 μM); (iii) reusability of an electrode over several weeks; and (iv) rapid reliability testing to detect common error-causing problems. This simple and rapid electrochemical approach for serum clozapine measurements should provide clinicians with the timely point-of-care information required to adjust dosages and personalize the management of schizophrenia. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Time-resolved stimulated emission depletion and energy transfer dynamics in two-photon excited EGFP.

    PubMed

    Masters, T A; Robinson, N A; Marsh, R J; Blacker, T S; Armoogum, D A; Larijani, B; Bain, A J

    2018-04-07

Time- and polarization-resolved stimulated emission depletion (STED) measurements are used to investigate excited state evolution following the two-photon excitation of enhanced green fluorescent protein (EGFP). We employ a new approach for the accurate STED measurement of the hitherto unmeasured degree of hexadecapolar transition dipole moment alignment α40 present at a given excitation-depletion (pump-dump) pulse separation. Time-resolved polarized fluorescence measurements as a function of pump-dump delay reveal the time evolution of α40 to be considerably more rapid than predicted for isotropic rotational diffusion in EGFP. Additional depolarization by homo-Förster resonance energy transfer is investigated for both α20 (quadrupolar) and α40 transition dipole alignments. These results point to the utility of higher-order dipole correlation measurements in the investigation of resonance energy transfer processes.

  5. Real-time scheduling faces operational challenges.

    PubMed

    2005-01-01

    Online real-time patient scheduling presents a number of challenges. But a few advanced organizations are rolling out systems slowly, meeting those challenges as they go. And while this application is still too new to provide measurable benefits, anecdotal information seems to point to improvements in efficiency, patient satisfaction, and possibly quality of care.

  6. Is Plagiarism Changing over Time? A 10-Year Time-Lag Study with Three Points of Measurement

    ERIC Educational Resources Information Center

    Curtis, Guy J.; Vardanega, Lucia

    2016-01-01

    Are more students cheating on assessment tasks in higher education? Despite ongoing media speculation concerning increased "copying and pasting" and ghostwritten assignments produced by "paper mills", few studies have charted historical trends in rates and types of plagiarism. Additionally, there has been little comment from…

  7. Perspectives for Planning.

    ERIC Educational Resources Information Center

    Miner, Norris

    The operations of an institution can be viewed from three perspectives: (1) the "actual operating measurement" such as income and expenditures of a cost center at a point in time; (2) the "criterion" which reflects the established policy for a time period; and (3) the "efficiency level" wherein a balance between input and output is defined.…

  8. Processing Speed Measures as Clinical Markers for Children with Language Impairment

    ERIC Educational Resources Information Center

    Park, Jisook; Miller, Carol A.; Mainela-Arnold, Elina

    2015-01-01

    Purpose: This study investigated the relative utility of linguistic and nonlinguistic processing speed tasks as predictors of language impairment (LI) in children across 2 time points. Method: Linguistic and nonlinguistic reaction time data, obtained from 131 children (89 children with typical development [TD] and 42 children with LI; 74 boys and…

  9. Multi-time scale analysis of the spatial representativeness of in situ soil moisture data within satellite footprints

    USDA-ARS?s Scientific Manuscript database

    We conduct a novel comprehensive investigation that seeks to prove the connection between spatial and time scales in surface soil moisture (SM) within the satellite footprint (~50 km). Modeled and measured point series at Yanco and Little Washita in situ networks are first decomposed into anomalies ...

  10. Serial measurement of serum cytokines, cytokine receptors and neopterin in leprosy patients with reversal reactions.

    PubMed

    Faber, W R; Iyer, A M; Fajardo, T T; Dekker, T; Villahermosa, L G; Abalos, R M; Das, P K

    2004-09-01

Serum levels of cytokines (IL-4, IL-5, IFN-gamma, TNF-alpha), cytokine receptors (TNFR I and II) and one monokine (neopterin) were estimated in seven leprosy patients to establish disease-associated markers for reversal reactions (RR). Sera were collected at diagnosis of leprosy, at the onset of reversal reaction, and at different time points during and at the end of prednisone treatment of reactions. It was expected that the serum cytokine and monokine profiles before and at different time points during reactions would provide guidelines for the diagnosis and monitoring of reversal reactions in leprosy. The cytokines and cytokine receptors were measured by ELISA, whereas a radioimmunoassay was used for neopterin measurement. Six of the seven patients showed increased levels of neopterin either at the onset of RR or 1 month thereafter, and levels declined on prednisone treatment to those seen at the time of diagnosis without reactions. No consistent disease-associated cytokine profile was observed in these patients. Interestingly, serum TNF-alpha levels were increased in the same patients even after completion of prednisone treatment, indicating ongoing immune activity. In conclusion, this study demonstrates that although cytokine levels in leprosy serum are inconsistent in relation to reversal reactions, serum neopterin measurement appears to be a useful biomarker for monitoring RR patients during corticosteroid therapy.

  11. Quantification and Compensation of Eddy-Current-Induced Magnetic Field Gradients

    PubMed Central

    Spees, William M.; Buhl, Niels; Sun, Peng; Ackerman, Joseph J.H.; Neil, Jeffrey J.; Garbow, Joel R.

    2011-01-01

    Two robust techniques for quantification and compensation of eddy-current-induced magnetic-field gradients and static magnetic-field shifts (ΔB0) in MRI systems are described. Purpose-built 1-D or 6-point phantoms are employed. Both procedures involve measuring the effects of a prior magnetic-field-gradient test pulse on the phantom’s free induction decay (FID). Phantom-specific analysis of the resulting FID data produces estimates of the time-dependent, eddy-current-induced magnetic field gradient(s) and ΔB0 shift. Using Bayesian methods, the time dependencies of the eddy-current-induced decays are modeled as sums of exponentially decaying components, each defined by an amplitude and time constant. These amplitudes and time constants are employed to adjust the scanner’s gradient pre-emphasis unit and eliminate undesirable eddy-current effects. Measurement with the six-point sample phantom allows for simultaneous, direct estimation of both on-axis and cross-term eddy-current-induced gradients. The two methods are demonstrated and validated on several MRI systems with actively-shielded gradient coil sets. PMID:21764614

  12. Quantification and compensation of eddy-current-induced magnetic-field gradients.

    PubMed

    Spees, William M; Buhl, Niels; Sun, Peng; Ackerman, Joseph J H; Neil, Jeffrey J; Garbow, Joel R

    2011-09-01

    Two robust techniques for quantification and compensation of eddy-current-induced magnetic-field gradients and static magnetic-field shifts (ΔB0) in MRI systems are described. Purpose-built 1-D or six-point phantoms are employed. Both procedures involve measuring the effects of a prior magnetic-field-gradient test pulse on the phantom's free induction decay (FID). Phantom-specific analysis of the resulting FID data produces estimates of the time-dependent, eddy-current-induced magnetic field gradient(s) and ΔB0 shift. Using Bayesian methods, the time dependencies of the eddy-current-induced decays are modeled as sums of exponentially decaying components, each defined by an amplitude and time constant. These amplitudes and time constants are employed to adjust the scanner's gradient pre-emphasis unit and eliminate undesirable eddy-current effects. Measurement with the six-point sample phantom allows for simultaneous, direct estimation of both on-axis and cross-term eddy-current-induced gradients. The two methods are demonstrated and validated on several MRI systems with actively-shielded gradient coil sets. Copyright © 2011 Elsevier Inc. All rights reserved.
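The decays in the record above are modeled as sums of exponentials, each defined by an amplitude and a time constant. As a toy illustration of that idea (a single component fitted by a grid search over the time constant with a closed-form amplitude, standing in for the full Bayesian analysis; all numbers are invented):

```python
import math

def fit_single_exponential(t, g, taus):
    """Fit g(t) ≈ A * exp(-t / tau) by scanning candidate time
    constants; for each tau, least squares gives the best amplitude
    in closed form: A = sum(g*e) / sum(e*e), with e = exp(-t/tau)."""
    best = None
    for tau in taus:
        e = [math.exp(-ti / tau) for ti in t]
        a = sum(gi * ei for gi, ei in zip(g, e)) / sum(ei * ei for ei in e)
        sse = sum((gi - a * ei) ** 2 for gi, ei in zip(g, e))
        if best is None or sse < best[0]:
            best = (sse, a, tau)
    return best[1], best[2]            # amplitude, time constant

# Synthetic eddy-current gradient decay: A = 2.0 mT/m, tau = 30 ms.
t = [i * 0.005 for i in range(40)]         # sample times, 0 to 195 ms
g = [2.0 * math.exp(-ti / 0.030) for ti in t]
taus = [0.001 * k for k in range(1, 101)]  # candidate taus, 1 to 100 ms
amp, tau = fit_single_exponential(t, g, taus)
```

In the papers' workflow, the fitted amplitudes and time constants would then be fed to the scanner's gradient pre-emphasis unit; this sketch only shows the decomposition step.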

  13. At the Tipping Point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiley, H. S.

There comes a time in every field of science when things suddenly change. While it might not be immediately apparent that things are different, a tipping point has occurred. Biology is now at such a point. The reason is the introduction of high-throughput genomics-based technologies. I am not talking about the consequences of the sequencing of the human genome (and every other genome within reach). The change is due to new technologies that generate an enormous amount of data about the molecular composition of cells. These include proteomics, transcriptional profiling by sequencing, and the ability to globally measure microRNAs and post-translational modifications of proteins. These mountains of digital data can be mapped to a common frame of reference: the organism’s genome. With the new high-throughput technologies, we can generate tens of thousands of data points from each sample. Data are now measured in terabytes and the time necessary to analyze data can now require years. Obviously, we can’t wait to interpret the data fully before the next experiment. In fact, we might never be able to even look at all of it, much less understand it. This volume of data requires sophisticated computational and statistical methods for its analysis and is forcing biologists to approach data interpretation as a collaborative venture.

  14. Estimation of the quantification uncertainty from flow injection and liquid chromatography transient signals in inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.

    2004-06-01

The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and on double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty in FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be accounted for in the model, leading to the conclusion that the instrument works as a concentration detector when it is used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, and the double-point calibration yielded results of the same quality as the multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
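The two calibration strategies compared above can be sketched in a few lines of Python; the concentrations and peak areas below are invented for illustration, not taken from the paper:

```python
def multipoint_calibration(conc, signal):
    """Ordinary least-squares line signal = b0 + b1 * conc."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(conc, signal))
          / sum((x - mx) ** 2 for x in conc))
    return my - b1 * mx, b1            # intercept, slope

def doublepoint_calibration(baseline, std_conc, std_signal):
    """Baseline plus a single standard: slope from one point."""
    return baseline, (std_signal - baseline) / std_conc

# Synthetic peak areas, perfectly linear in concentration, so both
# calibrations should recover the same unknown concentration.
conc = [0.0, 1.0, 2.0, 5.0, 10.0]
area = [50.0 + 120.0 * c for c in conc]
b0, b1 = multipoint_calibration(conc, area)
d0, d1 = doublepoint_calibration(area[0], conc[3], area[3])
unknown_area = 650.0
x_multi = (unknown_area - b0) / b1
x_double = (unknown_area - d0) / d1
```

On noisy real data the two estimates would differ, and the paper's uncertainty model is what quantifies how much quality is traded for the shorter double-point calibration time.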

  15. Preclinical evaluation of spatial frequency domain-enabled wide-field quantitative imaging for enhanced glioma resection

    NASA Astrophysics Data System (ADS)

    Sibai, Mira; Fisher, Carl; Veilleux, Israel; Elliott, Jonathan T.; Leblond, Frederic; Roberts, David W.; Wilson, Brian C.

    2017-07-01

5-Aminolevulinic acid-induced protoporphyrin IX (PpIX) fluorescence-guided resection (FGR) enables maximum safe resection of glioma by providing real-time tumor contrast. However, subjective visual assessment and the variable intrinsic optical attenuation of tissue limit this technique to reliably delineating only high-grade tumors that display strong fluorescence. We have previously shown, using a fiber-optic probe, that quantitative assessment based on noninvasive point spectroscopic measurements of the absolute PpIX concentration in tissue further improves the accuracy of FGR, extending it to surgically curable low-grade glioma. More recently, we have shown that implementing spatial frequency domain imaging with a fluorescent-light transport model enables recovery of two-dimensional images of [PpIX], alleviating the need for time-consuming point sampling of the brain surface. We present the first results of this technique modified for in vivo imaging on an RG2 rat brain tumor model. Despite moderate errors of 14% and 19%, respectively, in retrieving the absorption and reduced scattering coefficients in the subdiffusive regime, the recovered [PpIX] maps agree within 10% of the point [PpIX] values measured by the fiber-optic probe, validating the technique's potential as an extension of, or an alternative to, point sampling during glioma resection.

  16. SU-F-T-328: Real-Time in Vivo Dosimetry of Prostate SBRT Boost Treatments Using MOSkin Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Legge, K; O’Connor, D J; Cutajar, D

Purpose: To provide in vivo measurements of dose to the anterior rectal wall during prostate SBRT boost treatments using MOSFET detectors. Methods: Dual MOSkin detectors were attached to a Rectafix rectal sparing device and inserted into patients during SBRT boost treatments. Patients received two boost fractions, each of 9.5–10 Gy and delivered using 2 VMAT arcs. Measurements were acquired for 12 patients. MOSFET voltages were read out at 1 Hz during delivery and converted to dose. MV images were acquired at known frequency during treatment so that the position of the gantry at each point in time was known. The cumulative dose at the MOSFET location was extracted from the treatment planning system in 5.2° increments (FF beams) or at 5 points during each delivered arc (FFF beams). The MOSFET dose and planning system dose throughout the entirety of each arc were then compared using root mean square error normalised to the final planned dose for each arc. Results: The average difference between MOSFET-measured and planning system doses determined over the entire course of treatment was 9.7% with a standard deviation of 3.6%. MOSFETs measured below the planned dose in 66% of arcs measured. Uncertainty in the positions of the MOSFET detector and the verification point are major sources of discrepancy, as the detector is placed in a high dose gradient region during treatment. Conclusion: MOSkin detectors were able to provide real-time in vivo measurements of anterior rectal wall dose during prostate SBRT boost treatments. This method could be used to verify Rectafix positioning and treatment delivery. Further developments could enable this method to be used during high dose treatments to monitor dose to the rectal wall to ensure it remains at safe levels. Funding has been provided by the University of Newcastle. Kimberley Legge is the recipient of an Australian Postgraduate Award.
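The comparison metric described in Methods (root mean square error normalised to the final planned dose of each arc) can be sketched as follows; the dose series are hypothetical, not measured values from the study:

```python
import math

def nrmse(measured, planned, final_planned_dose):
    """Root-mean-square difference between measured and planned
    cumulative dose series, normalised to the arc's final planned dose."""
    rms = math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, planned))
                    / len(measured))
    return rms / final_planned_dose

planned  = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]   # Gy, cumulative through the arc
measured = [0.0, 1.8, 3.7, 5.6, 7.4, 9.0]    # Gy, hypothetical MOSkin trace
error = nrmse(measured, planned, planned[-1])  # fraction of final arc dose
```

Normalising to the final planned dose makes arcs of different total dose directly comparable, which is presumably why the study reports the average difference as a percentage.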

  17. Residual settlements detection of ocean reclaimed lands with multi-platform SAR time series and SBAS technique: a case study of Shanghai Pudong International Airport

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Yang, Tianliang; Zhao, Qing; Pepe, Antonio; Dong, Hongbin; Sun, Zhibin

    2017-09-01

Shanghai Pudong International Airport is one of the three major international airports in China. The airport is located at the Yangtze estuary, a sensitive belt in the sea-land interaction region. The majority of the buildings and facilities in the airport are built on ocean-reclaimed land and silt tidal flat, so residual ground settlement could occur after the completion of the airport construction. The current status of the ground settlement of the airport, and whether it is within a safe range, therefore needs to be investigated. In order to continuously monitor the ground settlement of the airport, two Synthetic Aperture Radar (SAR) time series, acquired by X-band TerraSAR-X (TSX) and TanDEM-X (TDX) sensors from December 2009 to December 2010 and from April 2013 to July 2015, were analyzed with the SBAS technique. We first obtained a ground deformation measurement from each SAR subset. Both measurements show that obvious ground subsidence occurred at the airport, especially in the second runway, the second terminal, the sixth cargo plane and the eighth apron. The maximum vertical ground deformation rates of both SAR subset measurements were greater than -30 mm/year, while the cumulative ground deformations reached up to -30 mm and -35 mm, respectively. After generating the SBAS-retrieved ground deformation for each SAR subset, we performed a joint analysis to combine the time series of each common coherent point by applying a geotechnical model. The results show that three centralized areas of ground deformation existed in the airport, mainly distributed in the sixth cargo plane, the fifth apron and the fourth apron; the maximum vertical cumulative ground subsidence was more than -70 mm. In addition, by analyzing the combined time series of four selected points, we found that the ground deformation rates of the points located at the second runway, the third runway, and the second terminal became progressively smaller over time, indicating that the foundations around these points were gradually stabilizing.

  18. Development of a novel multi-point plastic scintillation detector with a single optical transmission line for radiation dose measurement*

    PubMed Central

    Therriault-Proulx, François; Archambault, Louis; Beaulieu, Luc; Beddar, Sam

    2013-01-01

Purpose: The goal of this study was to develop a novel multi-point plastic scintillation detector (mPSD) capable of measuring the dose accurately at multiple positions simultaneously using a single optical transmission line. Methods: A 2-point mPSD used a band-pass approach that included splitters, color filters, and an EMCCD camera. The 3-point mPSD was based on a new full-spectrum approach, in which a spectrograph was coupled to a CCD camera. Irradiations of the mPSDs and of an ion chamber were performed with a 6-MV photon beam at various depths and lateral positions in a water tank. Results: For the 2-point mPSD, the average relative differences between mPSD and ion chamber measurements for the depth-dose were 2.4±1.6% and 1.3±0.8% for BCF-60 and BCF-12, respectively. For the 3-point mPSD, the average relative differences over all conditions were 2.3±1.1%, 1.6±0.4%, and 0.32±0.19% for BCF-60, BCF-12, and BCF-10, respectively. Conclusions: This study demonstrates the practical feasibility of mPSDs. This type of detector could be very useful for pre-treatment quality assurance applications as well as an accurate tool for real-time in vivo dosimetry. PMID:23060069

  19. Evaluation of articular cartilage in patients with femoroacetabular impingement (FAI) using T2* mapping at different time points at 3.0 Tesla MRI: a feasibility study.

    PubMed

    Apprich, S; Mamisch, T C; Welsch, G H; Bonel, H; Siebenrock, K A; Kim, Y-J; Trattnig, S; Dudda, M

    2012-08-01

To determine the feasibility of using T2* mapping to assess early cartilage degeneration prior to surgery in patients with symptomatic femoroacetabular impingement (FAI), we compared cartilage of the hip joint in patients with FAI and healthy volunteers using T2* mapping at 3.0 Tesla over time. Twenty-two patients (13 females and 9 males; mean age 28.1 years) with clinical signs of FAI and Tönnis grade ≤ 1 on anterior-posterior x-ray and 35 healthy age-matched volunteers were examined on a 3 T MRI scanner using a flexible body coil. T2* maps were calculated from sagittal- and coronal-oriented gradient multi-echo sequences using six echoes (TR 125, TE 4.41/8.49/12.57/16.65/20.73/24.81, scan time 4.02 min), measured both at the beginning and at the end of the scan (45 min time span between measurements). Region-of-interest analysis was performed manually on four consecutive slices for superior and anterior cartilage. Mean T2* values were compared between patients and volunteers, as well as over time, using analysis of variance and Student's t-test. Whereas quantitative T2* values for the first measurement did not reveal significant differences between patients and volunteers, either for sagittal (p = 0.644) or coronal images (p = 0.987), a highly significant difference (p ≤ 0.004) was found between the two measurements over time after unloading of the joint. Over time we found decreasing mean T2* values in patients, in contrast to increasing mean T2* relaxation times in volunteers. The study proved the feasibility of using T2* mapping to assess early cartilage degeneration in the hip joint in FAI patients at 3 Tesla in order to predict the possible success of joint-preserving surgery. However, we suggest that the time point for measuring T2* as an MR biomarker for cartilage, and the changes in T2* over time, are of crucial importance when designing an MR protocol for patients with FAI.

  20. Response to depression treatment in the Aging Brain Care Medical Home model.

    PubMed

    LaMantia, Michael A; Perkins, Anthony J; Gao, Sujuan; Austrom, Mary G; Alder, Cathy A; French, Dustin D; Litzelman, Debra K; Cottingham, Ann H; Boustani, Malaz A

    2016-01-01

Objective: To evaluate the effect of the Aging Brain Care (ABC) Medical Home program's depression module on patients' depression severity measurement over time. Design: Retrospective chart review. Setting: Public hospital system. Participants: Patients enrolled in the ABC Medical Home program between October 1, 2012 and March 31, 2014. The response of 773 enrolled patients who had multiple patient health questionnaire-9 (PHQ-9) scores recorded in the ABC Medical Home program's depression care protocol was evaluated. Repeatedly measured PHQ-9 change scores were the dependent variables in the mixed effects models, and demographic and comorbid medical conditions were tested as potential independent variables while including random effects for time and intercept. Among those patients with baseline PHQ-9 scores >10, there was a significant decrease in PHQ-9 scores over time (P < 0.001); however, the effect differed by gender (P = 0.015). On average, women's scores (4.5-point drop at 1 month) improved faster than men's scores (1-point drop at 1 month). Moreover, both men and women had a predicted drop of 7 points (>50% decline from baseline) on the PHQ-9 at 6 months. These analyses demonstrate evidence for the sustained effectiveness of the ABC Medical Home program at inducing depression remission while employing clinical staff who required less formal training than in earlier clinical trials.

  1. Caesium-137 and strontium-90 temporal series in the Tagus River: experimental results and a modelling study.

    PubMed

    Miró, Conrado; Baeza, Antonio; Madruga, María J; Periañez, Raul

    2012-11-01

The objective of this work was to analyse the spatial and temporal evolution of two radionuclide concentrations in the Tagus River. Time-series analysis techniques and numerical modelling have been used in this study. (137)Cs and (90)Sr concentrations were measured from 1994 to 1999 at several sampling points in Spain and Portugal. These radionuclides have been introduced into the river by liquid releases from several nuclear power plants in Spain, as well as from global fallout. Time-series analysis techniques have allowed the determination of radionuclide transit times along the river, and have also pointed out the existence of temporal cycles in radionuclide concentrations at some sampling points, which are attributed to water management in the reservoirs located along the Tagus River. A stochastic dispersion model, in which transport with water, radioactive decay and water-sediment interactions are solved through Monte Carlo methods, has been developed. Model results are, in general, in reasonable agreement with measurements. The model has finally been applied to the calculation of mean ages of the radioactive content in water and sediments in each reservoir. This kind of model can be a very useful tool to support the decision-making process after an eventual emergency situation. Copyright © 2012 Elsevier Ltd. All rights reserved.
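
    The stochastic dispersion approach described above can be sketched in a few lines: a minimal 1-D Monte Carlo model (not the authors' code) in which particles are advected downstream, spread by a random-walk term standing in for dispersion, and removed by first-order radioactive decay. All numbers (velocity, dispersion strength, decay constant) are hypothetical.

    ```python
    import math
    import random

    def transport(n_particles, velocity, decay_const, dt, steps, seed=0):
        """1-D Monte Carlo sketch: advect particles downstream, remove by decay.

        Each particle moves velocity*dt per step plus a Gaussian random-walk
        term (a stand-in for dispersion); it survives a step with probability
        exp(-decay_const*dt), the standard first-order decay rule.
        Returns the positions of the surviving particles.
        """
        rng = random.Random(seed)
        positions = [0.0] * n_particles
        p_survive = math.exp(-decay_const * dt)
        for _ in range(steps):
            positions = [
                x + velocity * dt + rng.gauss(0.0, 10.0)  # advection + dispersion
                for x in positions
                if rng.random() < p_survive               # radioactive decay
            ]
        return positions
    ```

    After `steps` steps the surviving fraction should be close to exp(-decay_const·dt·steps), and the mean position close to velocity·dt·steps, which is how such a model yields transit times and mean ages.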

  2. Granzyme B ELISPOT assay to measure influenza-specific cellular immunity.

    PubMed

    Salk, Hannah M; Haralambieva, Iana H; Ovsyannikova, Inna G; Goergen, Krista M; Poland, Gregory A

    2013-12-15

    The immunogenicity and efficacy of influenza vaccination are markedly lower in the elderly. Granzyme B (GrzB), quantified in fresh cell lysates, has been suggested to be a marker of cytotoxic T lymphocyte (CTL) response and a predictor of influenza illness among vaccinated older individuals. We have developed an influenza-specific GrzB ELISPOT assay using cryopreserved PBMCs. This method was tested on 106 healthy older subjects (ages 50-74) at baseline (Day 0) and three additional time points post-vaccination (Day 3, Day 28, Day 75) with influenza A/H1N1-containing vaccine. No significant difference was seen in GrzB response between any of the time points, although influenza-specific GrzB response appears to be elevated at all post-vaccination time points. There was no correlation between GrzB response and hemagglutination inhibition (HAI) titers, indicating no relationship between the cytolytic activity and humoral antibody levels in this cohort. Additionally, a significant negative correlation between GrzB response and age was observed. These results reveal a reduction in influenza-specific GrzB response as one ages. In conclusion, we have developed and optimized an influenza-specific ELISPOT assay for use with frozen cells to quantify the CTL-specific serine protease GrzB, as a measure of cellular immunity after influenza vaccination. © 2013.

  3. Space-Time Point Pattern Analysis of Flavescence Dorée Epidemic in a Grapevine Field: Disease Progression and Recovery

    PubMed Central

    Maggi, Federico; Bosco, Domenico; Galetto, Luciana; Palmano, Sabrina; Marzachì, Cristina

    2017-01-01

Analyses of space-time statistical features of a flavescence dorée (FD) epidemic in Vitis vinifera plants are presented. FD spread was surveyed from 2011 to 2015 in a vineyard of 17,500 m2 surface area in the Piemonte region, Italy; count and position of symptomatic plants were used to test the hypothesis of epidemic Complete Spatial Randomness and isotropicity in the space-time static (year-by-year) point pattern measure. Space-time dynamic (year-to-year) point pattern analyses were applied to newly infected and recovered plants to highlight statistics of FD progression and regression over time. Results highlighted point patterns ranging from dispersed (at small scales) to aggregated (at large scales) over the years, suggesting that the FD epidemic is characterized by multiscale properties that may depend on infection incidence, vector population, and flight behavior. Dynamic analyses showed moderate preferential progression and regression along rows. Nearly uniform distributions of direction and negative exponential distributions of distance of newly symptomatic and recovered plants relative to existing symptomatic plants highlighted features of vector mobility similar to Brownian motion. This evidence indicates that space-time epidemics modeling should include environmental setting (e.g., vineyard geometry and topography) to capture anisotropicity as well as statistical features of vector flight behavior, plant recovery and susceptibility, and plant mortality. PMID:28111581
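
    The Complete Spatial Randomness test mentioned above can be illustrated with the Clark-Evans nearest-neighbour index, one standard CSR statistic. The paper does not specify its exact test; this is a stand-in sketch that ignores edge corrections.

    ```python
    import math
    import random

    def mean_nn_distance(points):
        """Mean distance from each point to its nearest neighbour."""
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            nearest = min(
                math.hypot(xi - xj, yi - yj)
                for j, (xj, yj) in enumerate(points) if j != i
            )
            total += nearest
        return total / len(points)

    def clark_evans_index(points, area):
        """R < 1 suggests aggregation, R > 1 dispersion, R ~ 1 is consistent with CSR."""
        density = len(points) / area
        expected = 0.5 / math.sqrt(density)  # mean NN distance under CSR
        return mean_nn_distance(points) / expected
    ```

    For uniformly scattered plants the index hovers near 1; for plants clumped in a small patch it drops well below 1, the small-scale/large-scale contrast the abstract reports.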

  4. Survival of Salmonella enterica in poultry feed is strain dependent

    PubMed Central

    Andino, Ana; Pendleton, Sean; Zhang, Nan; Chen, Wei; Critzer, Faith; Hanning, Irene

    2014-01-01

Feed components have low water activity, making bacterial survival difficult. The mechanisms of Salmonella survival in feed and subsequent colonization of poultry are unknown. The purpose of this research was to compare the ability of Salmonella serovars and strains to survive in broiler feed and to evaluate molecular mechanisms associated with survival and colonization by measuring the expression of genes associated with colonization (hilA, invA) and survival via fatty acid synthesis (cfa, fabA, fabB, fabD). Feed was inoculated with 1 of 15 strains of Salmonella enterica comprising 11 serovars (Typhimurium, Enteritidis, Kentucky, Senftenberg, Heidelberg, Mbandaka, Newport, Bareilly, Javiana, Montevideo, and Infantis). To inoculate feed, cultures were suspended in PBS, and survival was evaluated by plating samples onto XLT4 agar plates at specific time points (0 h, 4 h, 8 h, 24 h, 4 d, and 7 d). To evaluate gene expression, RNA was extracted from the samples at specific time points (0, 4, 8, and 24 h) and gene expression was measured with real-time PCR. The largest reductions in Salmonella occurred at the first and third sampling time points (4 h and 4 d), with average reductions of 1.9 and 1.6 log cfu per g, respectively. For the remaining time points (8 h, 24 h, and 7 d), the average reduction was less than 1 log cfu per g (0.6, 0.4, and 0.6, respectively). Most strains upregulated cfa (cyclopropane fatty acid synthesis) within 8 h, which would modify the fluidity of the cell wall to aid in survival. There was a weak negative correlation between survival and virulence gene expression, suggesting downregulation of virulence genes to focus energy on survival-related gene expression. These data indicate that the ability of strains to survive over time in poultry feed was strain dependent and that upregulation of cyclopropane fatty acid synthesis and downregulation of virulence genes were associated with a response to desiccation stress. PMID:24570467
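
    The reductions quoted above in "log cfu per g" are simply differences of base-10 logarithms of plate counts; a minimal sketch (the counts are hypothetical, chosen to reproduce the reported 1.9-log drop):

    ```python
    import math

    def log_reduction(cfu_before, cfu_after):
        """Reduction in log10 cfu between two sampling time points."""
        return math.log10(cfu_before) - math.log10(cfu_after)

    # e.g. a drop from 1e6 cfu/g to about 1.26e4 cfu/g corresponds to the
    # ~1.9-log average reduction reported for the first sampling interval.
    ```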

  5. Survival of Salmonella enterica in poultry feed is strain dependent.

    PubMed

    Andino, Ana; Pendleton, Sean; Zhang, Nan; Chen, Wei; Critzer, Faith; Hanning, Irene

    2014-02-01

Feed components have low water activity, making bacterial survival difficult. The mechanisms of Salmonella survival in feed and subsequent colonization of poultry are unknown. The purpose of this research was to compare the ability of Salmonella serovars and strains to survive in broiler feed and to evaluate molecular mechanisms associated with survival and colonization by measuring the expression of genes associated with colonization (hilA, invA) and survival via fatty acid synthesis (cfa, fabA, fabB, fabD). Feed was inoculated with 1 of 15 strains of Salmonella enterica comprising 11 serovars (Typhimurium, Enteritidis, Kentucky, Senftenberg, Heidelberg, Mbandaka, Newport, Bareilly, Javiana, Montevideo, and Infantis). To inoculate feed, cultures were suspended in PBS, and survival was evaluated by plating samples onto XLT4 agar plates at specific time points (0 h, 4 h, 8 h, 24 h, 4 d, and 7 d). To evaluate gene expression, RNA was extracted from the samples at specific time points (0, 4, 8, and 24 h) and gene expression was measured with real-time PCR. The largest reductions in Salmonella occurred at the first and third sampling time points (4 h and 4 d), with average reductions of 1.9 and 1.6 log cfu per g, respectively. For the remaining time points (8 h, 24 h, and 7 d), the average reduction was less than 1 log cfu per g (0.6, 0.4, and 0.6, respectively). Most strains upregulated cfa (cyclopropane fatty acid synthesis) within 8 h, which would modify the fluidity of the cell wall to aid in survival. There was a weak negative correlation between survival and virulence gene expression, suggesting downregulation of virulence genes to focus energy on survival-related gene expression. These data indicate that the ability of strains to survive over time in poultry feed was strain dependent and that upregulation of cyclopropane fatty acid synthesis and downregulation of virulence genes were associated with a response to desiccation stress.

  6. Effect of agitation time on nutrient distribution in full-scale CSTR biogas digesters.

    PubMed

    Kress, Philipp; Nägele, Hans-Joachim; Oechsner, Hans; Ruile, Stephan

    2018-01-01

The aim of this work was to study the impact of reducing the mixing time in a full-scale CSTR biogas reactor from 10 min to 5 min and then to 2 min per half hour on the distribution of DM, acetic acid, and FOS/TAC, as a measure to cut electricity consumption. The parameters in the digestate were unevenly distributed, with the highest concentrations measured at the point of feeding. By reducing the mixing time, the FOS/TAC value increased by 16.6%. A reduced mixing time of 2 min led to an accumulation of 15% biogas in the digestate. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. A new method to compare statistical tree growth curves: the PL-GMANOVA model and its application with dendrochronological data.

    PubMed

    Ricker, Martin; Peña Ramírez, Víctor M; von Rosen, Dietrich

    2014-01-01

Growth curves are monotonically increasing functions obtained by measuring the same subjects repeatedly over time. The classical growth curve model in the statistical literature is the Generalized Multivariate Analysis of Variance (GMANOVA) model. In order to model the tree trunk radius (r) over time (t) of trees on different sites, GMANOVA is combined here with the adapted PL regression model Q = A · T + E, where for b ≠ 0: Q = Ei[-b · r] - Ei[-b · r1] and for b = 0: Q = Ln[r/r1], A = initial relative growth to be estimated, T = t - t1, and E is an error term for each tree and time point. Furthermore, Ei[-b · r] = ∫(Exp[-b · r]/r)dr, b = -1/TPR, with TPR being the turning point radius in a sigmoid curve, and r1 at t1 is an estimated calibrating time-radius point. Advantages of the approach are that growth rates can be compared among growth curves with different turning point radii and different starting points, hidden outliers are easily detectable, the method is statistically robust, and heteroscedasticity of the residuals among time points is allowed. The model was implemented with dendrochronological data of 235 Pinus montezumae trees on ten Mexican volcano sites to calculate comparison intervals for the estimated initial relative growth A. One site (at the Popocatépetl volcano) stood out, with A being 3.9 times the value of the site with the slowest-growing trees. Calculating variance components for the initial relative growth, 34% of the growth variation was found among sites, 31% among trees, and 35% over time. Without the Popocatépetl site, the numbers changed to 7%, 42%, and 51%. Further explanation of differences in growth would need to focus on factors that vary within sites and over time.
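
    A minimal numerical sketch of the Q transform defined above. It uses the fact that Ei[-b · r] - Ei[-b · r1] equals the integral of Exp[-b · t]/t from r1 to r, evaluated here by trapezoidal quadrature so that no special-function library is needed; this is an illustration of the transform, not the authors' estimation code.

    ```python
    import math

    def pl_transform(r, r1, b, n=10000):
        """Q of the PL model: Ei[-b r] - Ei[-b r1] for b != 0, else Ln[r/r1].

        For b != 0 the exponential-integral difference is computed as the
        trapezoidal-rule integral of exp(-b*t)/t over [r1, r].
        """
        if b == 0:
            return math.log(r / r1)
        h = (r - r1) / n
        f = lambda t: math.exp(-b * t) / t
        s = 0.5 * (f(r1) + f(r))
        s += sum(f(r1 + i * h) for i in range(1, n))
        return s * h
    ```

    As b → 0 the b ≠ 0 branch approaches Ln[r/r1], which is why the two cases join smoothly; larger b (smaller turning-point radius TPR in magnitude) shrinks Q for the same radii.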

  8. Influence of taekwondo as security martial arts training on anaerobic threshold, cardiorespiratory fitness, and blood lactate recovery.

    PubMed

    Kim, Dae-Young; Seo, Byoung-Do; Choi, Pan-Am

    2014-04-01

[Purpose] This study was conducted to determine the influence of Taekwondo training as a security martial art on anaerobic threshold, cardiorespiratory fitness, and blood lactate recovery. [Subjects and Methods] Fourteen healthy university students were recruited and divided into an exercise group and a control group (n = 7 in each group). The subjects who participated in the experiment underwent an exercise loading test in which anaerobic threshold, value of ventilation, oxygen uptake, maximal oxygen uptake, heart rate, and maximal values of ventilation/heart rate were measured during the exercise, immediately after maximum exercise loading, and at 1, 3, 5, 10, and 15 min of recovery. [Results] At the anaerobic threshold time point, the exercise group showed a significantly longer time to reach the anaerobic threshold. The exercise group also showed significantly higher values for the time to reach VO2max, maximal values of ventilation, maximal oxygen uptake, and maximal values of ventilation/heart rate. Significant changes were observed in the value of ventilation at the 1- and 5-min recovery time points within the exercise group; oxygen uptake and maximal oxygen uptake were significantly different at the 5- and 10-min time points; heart rate was significantly different at the 1- and 3-min time points; and the maximal value of ventilation/heart rate was significantly different at the 5-min time point. The exercise group showed significant decreases in blood lactate levels at the 15- and 30-min recovery time points. [Conclusion] The study results revealed that Taekwondo training as a security martial art increases the maximal oxygen uptake and anaerobic threshold and accelerates an individual's recovery to the normal state of cardiorespiratory fitness and blood lactate level. These results are expected to contribute to the execution of more effective security services in emergencies in which violence can occur.

  9. Kinematic Validation of a Multi-Kinect v2 Instrumented 10-Meter Walkway for Quantitative Gait Assessments.

    PubMed

    Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn

    2015-01-01

Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect's 3D body points' time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body points' time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body points' time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman's bias and limits of agreement. Body points' time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body points' time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard for 3D measurement accuracy. Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner.
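
    The Bland-Altman bias and 95% limits of agreement used above are straightforward to compute from paired measurements; a minimal sketch with hypothetical paired gait values (not data from the study):

    ```python
    import math

    def bland_altman(a, b):
        """Bias and 95% limits of agreement between two measurement systems.

        a, b: paired measurements of the same gait parameter from the two
        systems (e.g. a Kinect v2 set-up vs a gold-standard system).
        Returns (bias, (lower_limit, upper_limit)).
        """
        diffs = [x - y for x, y in zip(a, b)]
        n = len(diffs)
        bias = sum(diffs) / n                      # mean difference
        sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
    ```

    "Negligible bias and narrow limits of agreement" then means a bias near zero and limits tightly bracketing it.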

  10. Nucleation of actin polymerization by gelsolin.

    PubMed

    Ditsch, A; Wegner, A

    1994-08-15

    The time-course of assembly of actin with gelsolin was measured by the fluorescence increase of a fluorescent label covalently linked to actin. The actin concentrations ranged from values far below the critical concentration to values above the critical concentration of the pointed ends of actin filaments. If the concentration of actin was in the range of the critical monomer concentration (0.64 microM), the time-course of the concentration of actin assembled with gelsolin revealed a sigmoidal shape. At higher actin concentrations the time-course of association of actin with gelsolin approximated an exponential curve. The measured time-courses of assembly were quantitatively interpreted by kinetic rate equations. A poor fit was obtained if two actin molecules were assumed to bind to gelsolin to form a 1:2 gelsolin-actin complex and subsequently further actin molecules were assumed to polymerize onto the 1:2 gelsolin-actin complex toward the pointed end. A considerably better agreement between calculated and measured time-courses was achieved if additional creation of actin filaments by fast fragmentation of newly formed actin filaments by not yet consumed gelsolin was assumed to occur. This suggests that both polymerization of actin onto gelsolin and fragmentation of actin filaments contribute to formation of new actin filaments by gelsolin. Furthermore it could be demonstrated that below the critical monomer concentration appreciable amounts of actin are incorporated into gelsolin-actin oligomers.

  11. Site-Dependent Fluorescence Decay of Malachite Green Doped in Onion Cell

    NASA Astrophysics Data System (ADS)

    Nakatsuka, Hiroki; Sekine, Masaya; Suzuki, Yuji; Hattori, Toshiaki

    1999-03-01

Time-resolved fluorescence measurements of malachite green dye molecules doped in onion cells were carried out. The fluorescence decay time depended on the individual cell and on the position of the dye within a cell, reflecting the microscopic dynamics of each bound site. Upon cooling, the decay time increased, and this increase accelerated around the freezing point of the onion cell.

  12. Numerical Study of a High Head Francis Turbine with Measurements from the Francis-99 Project

    NASA Astrophysics Data System (ADS)

    Wallimann, H.; Neubauer, R.

    2015-01-01

For the Francis-99 project initiated by the Norwegian University of Science and Technology (NTNU, Norway) and the Luleå University of Technology (LTU, Sweden), numerical flow simulations have been performed and the results compared to experimentally obtained data. The full machine, including spiral casing, stay vanes, guide vanes, runner and draft tube, was simulated transiently for three operating points defined by the Francis-99 organisers. Two sets of results were created with differing time steps. Additionally, a reduced domain was simulated in a stationary manner to create a complete cut along constant prototype head and constant prototype discharge. The efficiency values and shapes of the curves have been investigated and compared to the experimental data. Special attention has been given to rotor-stator interaction (RSI). Signals from several probes and their counterparts in the simulation have been processed to evaluate the pressure fluctuations occurring due to the RSI. The direct comparison of the hydraulic efficiency obtained by the full machine simulation with the experimental data showed no improvement when using a 1° time step compared to a coarser 2° time step. At the BEP the 2° time step even showed a slightly better result, with an absolute deviation of 1.08% compared with 1.24% for the 1° time step. At the other two operating points the simulation results were practically identical but fell short of predicting the measured values. The RSI evaluation was done using the results of the 2° time-step simulation, which proved to be an adequate setting to reproduce pressure signals with peaks at the correct frequencies. The simulation results showed the highest amplitudes in the vaneless space at the BEP operating point at a location different from the probe measurements available. This implies that not only the radial distance but also the shape of the vaneless space influences the RSI.
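
    RSI pressure pulsations appear in a stationary probe's signal at the blade-passing frequency, i.e. runner blade count times rotation rate. A sketch of how such a peak is located in a pressure signal, using a synthetic signal, hypothetical machine numbers, and a naive DFT (not the project's processing chain):

    ```python
    import cmath
    import math

    def dominant_frequency(signal, dt):
        """Frequency (Hz) of the largest non-DC bin of a naive DFT."""
        n = len(signal)
        best_k, best_mag = 1, 0.0
        for k in range(1, n // 2):
            coeff = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                        for i, s in enumerate(signal))
            if abs(coeff) > best_mag:
                best_k, best_mag = k, abs(coeff)
        return best_k / (n * dt)

    # Hypothetical machine: 30 runner blades at 5.55 rev/s -> 166.5 Hz pulsation.
    blade_passing = 30 * 5.55
    dt = 1.0 / 2000.0  # 2 kHz sampling, well above the pulsation frequency
    sig = [math.sin(2 * math.pi * blade_passing * i * dt) for i in range(500)]
    ```

    Recovering the peak at the expected blade-passing frequency (to within the DFT's bin resolution) is the "peaks at the correct frequencies" check mentioned above.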

  13. Real-time color measurement using active illuminant

    NASA Astrophysics Data System (ADS)

    Tominaga, Shoji; Horiuchi, Takahiko; Yoshimura, Akihiko

    2010-01-01

This paper proposes a method for real-time color measurement using an active illuminant. A synchronous measurement system is constructed by combining a high-speed active spectral light source and a high-speed monochrome camera. The light source is a programmable spectral source capable of emitting an arbitrary spectrum at high speed. The essential advantage of this system is that it captures spectral images without using filters at high frame rates. The new method of real-time colorimetry differs from the traditional methods based on colorimeters or spectrometers. We project the color-matching functions onto an object surface as spectral illuminants. Then we can obtain the CIE-XYZ tristimulus values directly from the camera outputs at every point on the surface. We describe the principle of our colorimetric technique based on projection of the color-matching functions and the procedure for realizing a real-time measurement system for a moving object. In an experiment, we examine the performance of real-time color measurement for a static object and a moving object.
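
    The core idea above, that projecting each colour-matching function (CMF) as the illuminant makes the monochrome camera's output at a pixel directly proportional to one tristimulus value, can be checked with a discrete toy spectrum. All numbers below are hypothetical five-band stand-ins, not real CIE data.

    ```python
    def camera_output(illuminant, reflectance):
        """Monochrome camera response: sum over bands of illuminant * reflectance."""
        return sum(e * r for e, r in zip(illuminant, reflectance))

    def tristimulus(cmfs, reflectance):
        """Classical colorimetry: X, Y, Z as inner products of reflectance with CMFs."""
        return tuple(camera_output(cmf, reflectance) for cmf in cmfs)

    # Toy 5-band "spectra" (hypothetical numbers, viewing light folded into the CMFs).
    x_bar = [0.2, 0.8, 0.3, 0.1, 0.0]
    y_bar = [0.0, 0.3, 0.9, 0.4, 0.1]
    z_bar = [0.9, 0.4, 0.1, 0.0, 0.0]
    surface = [0.5, 0.6, 0.7, 0.6, 0.5]

    # Projecting x_bar, y_bar, z_bar in turn as the active illuminant yields the
    # same three numbers the colorimetric integral would: that is the method's point.
    xyz_direct = tristimulus([x_bar, y_bar, z_bar], surface)
    xyz_from_camera = tuple(camera_output(cmf, surface)
                            for cmf in (x_bar, y_bar, z_bar))
    ```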

  14. Optimized stereo matching in binocular three-dimensional measurement system using structured light.

    PubMed

    Liu, Kun; Zhou, Changhe; Wei, Shengbin; Wang, Shaoqing; Fan, Xin; Ma, Jianyong

    2014-09-10

In this paper, we develop an optimized stereo-matching method used in an active binocular three-dimensional measurement system. A traditional dense stereo-matching algorithm is time-consuming due to a long search range and the high complexity of its similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band-limited patterns. In order to prune the search range, we execute an initial matching before exhaustive matching and evaluate the similarity measure using logical comparison instead of a complicated floating-point operation. Finally, an accurate point cloud can be obtained by triangulation methods and subpixel interpolation. The experiment results verify the computational efficiency and matching accuracy of the method.
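
    The "logical comparison" similarity above can be read as a Hamming distance between per-pixel binary code words built from the N projected patterns: XOR plus popcount replaces a floating-point correlation score, and the search is confined to the range pruned by the initial match. This is a sketch of that idea, not the authors' implementation.

    ```python
    def hamming(a, b):
        """Number of differing bits between two pattern code words."""
        return bin(a ^ b).count("1")

    def match_pixel(code, candidate_codes, start, end):
        """Best match for `code` among candidate_codes[start:end] by Hamming distance.

        start/end model the pruned search range from the initial coarse match;
        ties keep the first (leftmost) candidate.
        """
        best_idx, best_d = start, hamming(code, candidate_codes[start])
        for i in range(start + 1, end):
            d = hamming(code, candidate_codes[i])
            if d < best_d:
                best_idx, best_d = i, d
        return best_idx
    ```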

  15. Mark Tracking: Position/orientation measurements using 4-circle mark and its tracking experiments

    NASA Technical Reports Server (NTRS)

    Kanda, Shinji; Okabayashi, Keijyu; Maruyama, Tsugito; Uchiyama, Takashi

    1994-01-01

    Future space robots require position and orientation tracking with visual feedback control to track and capture floating objects and satellites. We developed a four-circle mark that is useful for this purpose. With this mark, four geometric center positions as feature points can be extracted from the mark by simple image processing. We also developed a position and orientation measurement method that uses the four feature points in our mark. The mark gave good enough image measurement accuracy to let space robots approach and contact objects. A visual feedback control system using this mark enabled a robot arm to track a target object accurately. The control system was able to tolerate a time delay of 2 seconds.

  16. Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics

    NASA Astrophysics Data System (ADS)

    Kohira, K.; Masuda, H.

    2017-09-01

A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormous. A large storage capacity is required to store such point-clouds, and heavy loads are placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
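
    A dependency-free sketch of the mapping step above: samples indexed by GPS time (row) and scanner angle (column) become a 2-D raster, which is then compressed losslessly. zlib's DEFLATE, the algorithm inside PNG, stands in here for actual PNG encoding; the 16-bit millimetre quantisation and the index parameters are hypothetical choices, not the paper's.

    ```python
    import struct
    import zlib

    def pointcloud_to_raster(points, t0, dt, n_angles):
        """Map (gps_time, angle_index, range_m) samples onto a 2-D raster.

        Row = scan line derived from GPS time, column = angle index from the
        scanner's mirror position; each cell stores range in mm (16-bit).
        """
        rows = max(int((t - t0) / dt) for t, _, _ in points) + 1
        raster = [[0] * n_angles for _ in range(rows)]
        for t, angle_idx, range_m in points:
            raster[int((t - t0) / dt)][angle_idx] = int(range_m * 1000)
        return raster

    def compress(raster):
        """Lossless DEFLATE of the packed raster (the stage PNG also uses)."""
        payload = b"".join(struct.pack("<H", v) for row in raster for v in row)
        return zlib.compress(payload, 9)

    def decompress(blob, n_angles):
        raw = zlib.decompress(blob)
        vals = struct.unpack("<%dH" % (len(raw) // 2), raw)
        return [list(vals[i:i + n_angles]) for i in range(0, len(vals), n_angles)]
    ```

    Because neighbouring cells come from neighbouring scan positions, the raster is smooth and compresses far better than an unordered coordinate list would.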

  17. On the Limitations of Taylor’s Hypothesis in Parker Solar Probe’s Measurements near the Alfvén Critical Point

    NASA Astrophysics Data System (ADS)

    Bourouaine, Sofiane; Perez, Jean C.

    2018-05-01

In this Letter, we present an analysis of two-point, two-time correlation functions from high-resolution numerical simulations of reflection-driven Alfvén turbulence near the Alfvén critical point r_c. The simulations model the turbulence in a prescribed background solar wind model chosen to match observational constraints. This analysis allows us to investigate the temporal decorrelation of solar wind turbulence and the validity of Taylor's approximation near the heliocentric distance r_c, which Parker Solar Probe (PSP) is expected to explore in the coming years. The simulations show that the temporal decay of the Fourier-transformed turbulence decorrelation function is better described by a Gaussian model than by a pure exponential time decay, and that the decorrelation frequency is almost linear in the perpendicular wavenumber k⊥ (perpendicular with respect to the background magnetic field B0). Based on the simulations, we conclude that Taylor's approximation cannot be used in this instance to provide a connection between the frequency ω of the time signal (measured in the probe frame) and the wavevector k⊥ of the fluctuations, because the sweeping frequency k⊥·V_sc (where V_sc is the spacecraft speed) near r_c is comparable to the estimated decorrelation frequency. However, the use of Taylor's approximation still leads to the correct spectral indices of the power spectra measured in the spacecraft frame. In this Letter, based on a Gaussian model, we suggest a modified relationship between ω and k⊥, which might be useful in the interpretation of future PSP measurements.
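
    The failure condition above reduces to a ratio test: Taylor's approximation requires the intrinsic decorrelation frequency of the turbulence to be small compared with the sweeping frequency k⊥·V_sc. A sketch assuming the near-linear scaling reported for the simulations (decorrelation frequency = gamma·k⊥, with gamma a hypothetical coefficient); all numbers are illustrative, not PSP values.

    ```python
    def taylor_ratio(k_perp, v_sc, gamma):
        """Ratio of decorrelation frequency (gamma * k_perp, the near-linear
        scaling reported for the simulations) to the sweeping frequency
        k_perp * v_sc. Taylor's hypothesis needs this ratio << 1."""
        return (gamma * k_perp) / (k_perp * v_sc)

    def taylor_holds(k_perp, v_sc, gamma, threshold=0.1):
        """Crude validity check with an arbitrary smallness threshold."""
        return taylor_ratio(k_perp, v_sc, gamma) < threshold
    ```

    Note that with both frequencies linear in k⊥ the ratio is scale-independent, gamma/V_sc: near r_c, where the effective sweeping speed is small, the whole spectrum fails the test at once rather than only some scales.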

  18. Automated Algorithm for J-Tpeak and Tpeak-Tend Assessment of Drug-Induced Proarrhythmia Risk

    DOE PAGES

    Johannesen, Lars; Vicente, Jose; Hosseini, Meisam; ...

    2016-12-30

Prolongation of the heart rate corrected QT (QTc) interval is a sensitive marker of torsade de pointes risk; however, it is not specific, as QTc-prolonging drugs that block inward currents are often not associated with torsade. Recent work demonstrated that separate analysis of the heart rate corrected J-Tpeak (J-Tpeakc) and Tpeak-Tend intervals can identify QTc-prolonging drugs with inward current block, and this analysis is being proposed as part of a new cardiac safety paradigm for new drugs (the “CiPA” initiative). In this work, we describe an automated measurement methodology for assessment of the J-Tpeakc and Tpeak-Tend intervals using the vector magnitude lead. The automated measurement methodology was developed using data from one clinical trial and was evaluated using independent data from a second clinical trial. Comparison between the automated and the prior semi-automated measurements shows that the automated algorithm reproduces the semi-automated measurements with a mean difference of single-deltas <1 ms and no difference in intra-time-point variability (p for all > 0.39). In addition, the time-profiles of the baseline- and placebo-adjusted changes are within 1 ms for 63% of the time points (86% within 2 ms). Importantly, the automated results lead to the same conclusions about the electrophysiological mechanisms of the studied drugs. We have developed an automated algorithm for assessment of the J-Tpeakc and Tpeak-Tend intervals that can be applied in clinical drug trials. Under the CiPA initiative this ECG assessment would determine whether there are unexpected ion channel effects in humans compared to preclinical studies. In conclusion, the algorithm is being released as open-source software.

  19. Automated Algorithm for J-Tpeak and Tpeak-Tend Assessment of Drug-Induced Proarrhythmia Risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johannesen, Lars; Vicente, Jose; Hosseini, Meisam

Prolongation of the heart rate corrected QT (QTc) interval is a sensitive marker of torsade de pointes risk; however, it is not specific, as QTc-prolonging drugs that block inward currents are often not associated with torsade. Recent work demonstrated that separate analysis of the heart rate corrected J-Tpeak (J-Tpeakc) and Tpeak-Tend intervals can identify QTc-prolonging drugs with inward current block, and this analysis is being proposed as part of a new cardiac safety paradigm for new drugs (the “CiPA” initiative). In this work, we describe an automated measurement methodology for assessment of the J-Tpeakc and Tpeak-Tend intervals using the vector magnitude lead. The automated measurement methodology was developed using data from one clinical trial and was evaluated using independent data from a second clinical trial. Comparison between the automated and the prior semi-automated measurements shows that the automated algorithm reproduces the semi-automated measurements with a mean difference of single-deltas <1 ms and no difference in intra-time-point variability (p for all > 0.39). In addition, the time-profiles of the baseline- and placebo-adjusted changes are within 1 ms for 63% of the time points (86% within 2 ms). Importantly, the automated results lead to the same conclusions about the electrophysiological mechanisms of the studied drugs. We have developed an automated algorithm for assessment of the J-Tpeakc and Tpeak-Tend intervals that can be applied in clinical drug trials. Under the CiPA initiative this ECG assessment would determine whether there are unexpected ion channel effects in humans compared to preclinical studies. In conclusion, the algorithm is being released as open-source software.

  20. Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples

    NASA Astrophysics Data System (ADS)

    Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.

    2014-12-01

Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine whether traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis shows no statistically significant peaks in energy density, suggesting the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies, which scale with channel width/mean velocity, and hydraulic phenomena related to channel dynamics. This suggests that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability at scales larger than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle-sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.
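
    The "time to reach a stable mean" statistic above can be operationalised in several ways; one simple sketch (the study's exact criterion may differ) declares the mean stable at the earliest sample from which the cumulative mean stays within a tolerance of the full-record mean:

    ```python
    def time_to_stable_mean(series, dt, tol):
        """Earliest time at which the running mean has converged.

        Convergence here means the cumulative mean stays within `tol` of the
        full-record mean from that sample onward; returns None if it never
        converges within the record.
        """
        n = len(series)
        full_mean = sum(series) / n
        running, cum = [], 0.0
        for i, v in enumerate(series):
            cum += v
            running.append(cum / (i + 1))
        for i in range(n):
            if all(abs(m - full_mean) <= tol for m in running[i:]):
                return i * dt
        return None
    ```

    Applied to concentration records, a result within ~30 s corresponds to the bottle-sample-friendly cases above, while None over a 5.5-minute window corresponds to the ~12% of records that never stabilised.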

  1. Magnetic field gradients inferred from multi-point measurements of Cluster FGM and EDI

    NASA Astrophysics Data System (ADS)

    Teubenbacher, Robert; Nakamura, Rumi; Giner, Lukas; Plaschke, Ferdinand; Baumjohann, Wolfgang; Magnes, Werner; Eichelberger, Hans; Steller, Manfred; Torbert, Roy

    2013-04-01

    We use Cluster data from the fluxgate magnetometer (FGM) and electron drift instrument (EDI) to determine magnetic field gradients in the near-Earth magnetotail. Here we use the magnetic field data from FGM measurements as well as the electron gyro-times determined from EDI time-of-flight measurements. The results are compared with values estimated from empirical magnetic field models for different magnetospheric conditions. We also estimate the spin-axis offset of FGM based on a comparison between EDI and FGM data and discuss its possible effect on determining current sheet characteristics.
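    As a hedged illustration of the general idea (not the Cluster FGM/EDI processing chain), a linear magnetic field gradient can be recovered from four simultaneous point measurements by a least-squares fit; all names and values below are assumed for the example:

```python
import numpy as np

# Sketch (illustrative, not the Cluster pipeline): estimate the magnetic
# field gradient tensor G (dB_j/dx_i) from simultaneous four-point
# measurements, assuming the field varies linearly across the tetrahedron.
def gradient_from_four_points(positions, fields):
    """positions: (4, 3) spacecraft positions; fields: (4, 3) B vectors.
    Returns (B0, G) with B ~ B0 + dr @ G, dr relative to the barycentre."""
    center = positions.mean(axis=0)
    dr = positions - center                      # (4, 3)
    design = np.hstack([np.ones((4, 1)), dr])    # (4, 4): [1, dx, dy, dz]
    coeffs, *_ = np.linalg.lstsq(design, fields, rcond=None)
    return coeffs[0], coeffs[1:]                 # B0 (3,), G (3, 3)

# Synthetic linear field B = B0 + dr @ G_true, sampled on a tetrahedron
G_true = np.array([[0.0, 2.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 0.0, -1.0]])
B0_true = np.array([10.0, 0.0, 5.0])
pos = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
B = B0_true + (pos - pos.mean(axis=0)) @ G_true

B0_est, G_est = gradient_from_four_points(pos, B)
print("recovered gradient:\n", G_est.round(6))
```

    With exactly four spacecraft and a linear field the fit is exact; with noise or curvature it becomes a least-squares estimate, which is why comparisons against empirical field models are useful.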

  2. Siblings, Theory of Mind, and Executive Functioning in Children Aged 3-6 Years: New Longitudinal Evidence

    ERIC Educational Resources Information Center

    McAlister, Anna R.; Peterson, Candida C.

    2013-01-01

    Longitudinal data were obtained from 157 children aged 3 years 3 months to 5 years 6 months at Time 1. At Time 2 these children had aged an average of 12 months. Theory of mind (ToM) and executive functioning (EF) were measured at both time points. Results suggest that Time 1 ToM scores predict Time 2 EF scores. Detailed examination of sibling…

  3. Sonic-boom ground-pressure measurements from Apollo 15

    NASA Technical Reports Server (NTRS)

    Hilton, D. A.; Henderson, H. R.; Mckinney, R.

    1972-01-01

    Sonic boom pressure signatures recorded during the launch and reentry phases of the Apollo 15 mission are presented. The measurements were obtained along the vehicle ground track at 87 km and 970 km downrange from the launch site during ascent; and at 500 km, 55.6 km, and 12.9 km from the splashdown point during reentry. Tracings of the measured signatures are included along with values of the overpressure, impulse, time duration, and rise times. Also included are brief descriptions of the launch and recovery test areas in which the measurements were obtained, the sonic boom instrumentation deployment, flight profiles and operating conditions for the launch vehicle and spacecraft, surface weather information at the measuring sites, and high altitude weather information for the general measurement areas.

  4. Electroencephalogram power changes as a correlate of chemotherapy-associated fatigue and cognitive dysfunction.

    PubMed

    Moore, Halle C F; Parsons, Michael W; Yue, Guang H; Rybicki, Lisa A; Siemionow, Wlodzimierz

    2014-08-01

    Persistent fatigue and cognitive dysfunction are poorly understood potential long-term effects of adjuvant chemotherapy. In this pilot study, we assessed the value of electroencephalogram (EEG) power measurements as a means to evaluate physical and mental fatigue associated with chemotherapy. Women planning to undergo adjuvant chemotherapy for breast cancer, along with healthy controls, underwent neurophysiologic assessments at baseline, during chemotherapy treatment, and at 1 year. Repeated-measures analysis of variance was used to analyze the data. Compared with controls, patients reported more subjective fatigue at baseline, which increased during chemotherapy and did not entirely resolve by 1 year. Performance on endurance testing was similar in patients and controls at all time points; however, EEG power increased after a physical task in patients during chemotherapy but not in controls. Compared with controls, subjective mental fatigue was similar for patients at baseline and at 1 year but worsened during chemotherapy. Patients performed similarly to controls on formal cognitive testing at all time points, but EEG activity after the cognitive task was increased in patients only during chemotherapy. EEG power measurement has the potential to provide a sensitive neurophysiologic correlate of cancer treatment-related fatigue and cognitive dysfunction.

  5. Ergodic Theory, Interpretations of Probability and the Foundations of Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    van Lith, Janneke

    The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First of all, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time averages (albeit for a special class of systems, and up to a measure zero set of exceptions). Secondly, one argues that actual measurements of thermodynamic quantities yield time averaged quantities, since measurements take a long time. The combination of these two points is held to be an explanation why calculating microcanonical phase averages is a successful algorithm for predicting the values of thermodynamic observables. It is also well known that this account is problematic. This survey intends to show that ergodic theory nevertheless may have important roles to play, and it explores three other uses of ergodic theory. Particular attention is paid, firstly, to the relevance of specific interpretations of probability, and secondly, to the way in which the concern with systems in thermal equilibrium is translated into probabilistic language. With respect to the latter point, it is argued that equilibrium should not be represented as a stationary probability distribution as is standardly done; instead, a weaker definition is presented.
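    The equality invoked in the first point above has a precise statement; a standard formulation (notation assumed here, not taken from the survey) is Birkhoff's ergodic theorem for an ergodic, measure-preserving flow:

```latex
% Birkhoff's ergodic theorem: for an ergodic, measure-preserving flow T_t on
% the energy surface (\Gamma_E, \mu_{mc}), with \mu_{mc} the microcanonical
% measure and f integrable, for \mu_{mc}-almost every phase point x:
\lim_{T \to \infty} \frac{1}{T} \int_0^T f\left(T_t x\right)\, dt
  \;=\; \int_{\Gamma_E} f \, d\mu_{mc}
```

    The "measure zero set of exceptions" mentioned in the abstract is exactly the set of initial conditions x for which the limit fails.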

  6. q-Space Deep Learning: Twelve-Fold Shorter and Model-Free Diffusion MRI Scans.

    PubMed

    Golkov, Vladimir; Dosovitskiy, Alexey; Sperl, Jonathan I; Menzel, Marion I; Czisch, Michael; Samann, Philipp; Brox, Thomas; Cremers, Daniel

    2016-05-01

    Numerous scientific fields rely on elaborate but partly suboptimal data processing pipelines. An example is diffusion magnetic resonance imaging (diffusion MRI), a non-invasive microstructure assessment method with a prominent application in neuroimaging. Advanced diffusion models providing accurate microstructural characterization so far have required long acquisition times and thus have been inapplicable for children and adults who are uncooperative, uncomfortable, or unwell. We show that the long scan time requirements are mainly due to disadvantages of classical data processing. We demonstrate how deep learning, a group of algorithms based on recent advances in the field of artificial neural networks, can be applied to reduce diffusion MRI data processing to a single optimized step. This modification allows obtaining scalar measures from advanced models at twelve-fold reduced scan time and detecting abnormalities without using diffusion models. We set a new state of the art by estimating diffusion kurtosis measures from only 12 data points and neurite orientation dispersion and density measures from only 8 data points. This allows unprecedentedly fast and robust protocols facilitating clinical routine and demonstrates how classical data processing can be streamlined by means of deep learning.

  7. Prevalence and co-occurrence of addictive behaviors among former alternative high school youth: A longitudinal follow-up study

    PubMed Central

    Sussman, Steve; Pokhrel, Pallav; Sun, Ping; Rohrbach, Louise A.; Spruijt-Metz, Donna

    2015-01-01

    Background and Aims Recent work has studied addictions using a matrix measure, which taps multiple addictions through single responses for each type. This is the first longitudinal study using a matrix measure. Methods We investigated the use of this approach among former alternative high school youth (average age = 19.8 years at baseline; longitudinal n = 538) at risk for addictions. Lifetime and last 30-day prevalence of one or more of 11 addictions reviewed in other work was the primary focus (i.e., cigarettes, alcohol, hard drugs, shopping, gambling, Internet, love, sex, eating, work, and exercise). These were examined at two time-points one year apart. Latent class and latent transition analyses (LCA and LTA) were conducted in Mplus. Results Prevalence rates were stable across the two time-points. As in the cross-sectional baseline analysis, the 2-class model (addiction class, non-addiction class) fit the data better at follow-up than models with more classes. Item-response or conditional probabilities for each addiction type did not differ between time-points. As a result, the LTA model constrained the conditional probabilities to be equal across the two time-points. In the addiction class, larger conditional probabilities (i.e., 0.40–0.49) were found for love, sex, exercise, and work addictions; medium conditional probabilities (i.e., 0.17–0.27) were found for cigarette, alcohol, other drug, eating, Internet, and shopping addictions; and a small conditional probability (0.06) was found for gambling. Discussion and Conclusions Persons in an addiction class tend to remain in this addiction class over a one-year period. PMID:26551909

  8. Factors That Influence the Rating of Perceived Exertion After Endurance Training.

    PubMed

    Roos, Lilian; Taube, Wolfgang; Tuch, Carolin; Frei, Klaus Michael; Wyss, Thomas

    2018-03-15

    Session rating of perceived exertion (sRPE) is an often-used measure of athletes' training load. However, little is known about which factors could optimize the quality of its data collection. The aim of the present study was to investigate the effects of (i) the survey method and (ii) the time point at which sRPE was assessed on the correlation between subjective (sRPE) and objective (heart rate training impulse; TRIMP) assessments of training load. In the first part, 45 well-trained subjects (30 men, 15 women) performed 20 running sessions with a heart rate monitor and reported sRPE 30 minutes after training cessation. For reporting, the subjects were grouped into three survey method groups (paper-pencil, online questionnaire, and mobile device). In the second part of the study, another 40 athletes (28 men, 12 women) performed 4 × 5 running sessions with four randomly assigned sRPE reporting time points (directly after training cessation, 30 minutes post-exercise, in the evening of the same day, and the next morning directly after waking up). The assessment of sRPE is influenced by time point, survey method, TRIMP, sex, and training type. It is recommended to assess sRPE values via a mobile device or online tool, as the survey method "paper" displayed lower correlations between sRPE and TRIMP. Subjective training load measures are highly individual. When compared at the same relative intensity, lower sRPE values were reported by women, for training types representing slow runs, and for time points with longer delays between training cessation and sRPE assessment. The assessment method for sRPE should be kept constant for each athlete, and comparisons between athletes or sexes are not recommended.
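    The abstract does not specify which TRIMP variant was used; one common choice is Banister's exponentially weighted heart-rate TRIMP, sketched below with illustrative session values (the sex-specific coefficients are the standard ones from the exercise-physiology literature):

```python
import math

# Sketch: Banister's exponentially weighted heart-rate training impulse.
# Which TRIMP variant the study used is not stated in the abstract, so
# treat this formula and the example heart rates as assumptions.
def banister_trimp(duration_min, hr_ex, hr_rest, hr_max, male=True):
    """TRIMP = duration * HRr * a * exp(b * HRr),
    where HRr is the fractional heart-rate reserve during exercise."""
    hr_ratio = (hr_ex - hr_rest) / (hr_max - hr_rest)
    a, b = (0.64, 1.92) if male else (0.86, 1.67)
    return duration_min * hr_ratio * a * math.exp(b * hr_ratio)

# A 60-minute run at mean HR 150 bpm (resting 60, maximum 190), male athlete:
trimp = banister_trimp(60, hr_ex=150, hr_rest=60, hr_max=190)
print(f"TRIMP = {trimp:.1f}")
```

    Correlating such TRIMP values with the reported sRPE scores is the objective-versus-subjective comparison the study describes.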

  9. The effect of physical activity on adult obesity: evidence from the Canadian NPHS panel.

    PubMed

    Sarma, Sisira; Zaric, Gregory S; Campbell, M Karen; Gilliland, Jason

    2014-07-01

    Although physical activity has been considered an important modifiable risk factor for obesity, the empirical evidence on the relationship between physical activity and obesity is mixed. Observational studies in the public health literature fail to account for time-invariant unobserved heterogeneity and the dynamics of weight, leading to biased estimates of the effect of physical activity on obesity. To overcome this limitation, we propose dynamic fixed-effects models to account for unobserved heterogeneity bias and the dynamics of obesity. We use nationally representative longitudinal data on a cohort of adults aged 18-50 years in 1994/95 from Canada's National Population Health Survey, followed over 16 years. Obesity is measured by BMI (body mass index). After controlling for a wide range of socio-economic factors, the impacts of four alternative measures of leisure-time physical activity (LTPA) and work-related physical activity (WRPA) are analyzed. The results show that each measure of LTPA exerts a negative effect on BMI and that the effects are larger for females. Our key results show that participation in LTPA exceeding 1.5 kcal/kg per day (i.e., at least 30 min of walking) reduces BMI by about 0.11-0.14 points in males and 0.20 points in females relative to physically inactive counterparts. Compared to those who are inactive at the workplace, being able to stand or walk at work is associated with a reduction in BMI in the range of 0.16-0.19 points in males and 0.24-0.28 points in females. Lifting loads at the workplace is associated with a reduction in BMI of 0.2-0.3 points in males and 0.3-0.4 points in females relative to those reported as sedentary. Policies aimed at the promotion of LTPA combined with WRPA, such as walking or climbing stairs daily, would help reduce adult obesity risks. Copyright © 2014 Elsevier B.V. All rights reserved.
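    The within (fixed-effects) transformation this abstract relies on can be sketched minimally: demeaning each person's observations removes the time-invariant individual effect, so the activity coefficient is identified from within-person variation only. The data below are synthetic and the static specification is a simplification of the authors' dynamic model:

```python
import numpy as np

# Sketch (not the authors' estimator): within transformation removes
# time-invariant individual heterogeneity. Synthetic, illustrative data.
rng = np.random.default_rng(0)
n_people, n_waves, beta_true = 200, 8, -0.15   # BMI change per activity unit

alpha = rng.normal(27, 3, n_people)            # unobserved individual effects
activity = rng.uniform(0, 4, (n_people, n_waves))
bmi = (alpha[:, None] + beta_true * activity
       + rng.normal(0, 0.1, (n_people, n_waves)))

# Within transformation: subtract each person's own mean from x and y.
x = activity - activity.mean(axis=1, keepdims=True)
y = bmi - bmi.mean(axis=1, keepdims=True)
beta_fe = (x * y).sum() / (x * x).sum()        # pooled within-person OLS slope
print(f"true beta = {beta_true}, fixed-effects estimate = {beta_fe:.3f}")
```

    A pooled regression on the raw data would be biased whenever the individual effects correlate with activity; the demeaned regression recovers the true slope.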

  10. The 1983 tail-era series. Volume 1: ISEE 3 plasma

    NASA Technical Reports Server (NTRS)

    Fairfield, D. H.; Phillips, J. L.

    1991-01-01

    Observations from the ISEE 3 electron analyzer are presented in plots. Electrons were measured in 15 continuous energy levels between 8.5 and 1140 eV during individual 3-sec spacecraft spins. Times associated with each data point are the beginning time of the 3 sec data collection interval. Moments calculated from the measured distribution function are shown as density, temperature, velocity, and velocity azimuthal angle. Spacecraft ephemeris is shown at the bottom in GSE and GSM coordinates in units of Earth radii, with vertical ticks on the time axis corresponding to the printed positions.

  11. Defining toxicological tipping points in neuronal network development.

    PubMed

    Frank, Christopher L; Brown, Jasmine P; Wallace, Kathleen; Wambaugh, John F; Shah, Imran; Shafer, Timothy J

    2018-02-02

    Measuring electrical activity of neural networks by microelectrode array (MEA) has recently shown promise for screening level assessments of chemical toxicity on network development and function. Important aspects of interneuronal communication can be quantified from a single MEA recording, including individual firing rates, coordinated bursting, and measures of network synchrony, providing rich datasets to evaluate chemical effects. Further, multiple recordings can be made from the same network, including during the formation of these networks in vitro. The ability to perform multiple recording sessions over the in vitro development of network activity may provide further insight into developmental effects of neurotoxicants. In the current study, a recently described MEA-based screen of 86 compounds in primary rat cortical cultures over 12 days in vitro was revisited to establish a framework that integrates all available primary measures of electrical activity from MEA recordings into a composite metric for deviation from normal activity (total scalar perturbation). Examining scalar perturbations over time and increasing concentration of compound allowed for definition of critical concentrations or "tipping points" at which the neural networks switched from recovery to non-recovery trajectories for 42 compounds. These tipping point concentrations occurred at predominantly lower concentrations than those causing overt cell viability loss or disrupting individual network parameters, suggesting tipping points may be a more sensitive measure of network functional loss. Comparing tipping points for six compounds with plasma concentrations known to cause developmental neurotoxicity in vivo demonstrated strong concordance and suggests there is potential for using tipping points for chemical prioritization. Published by Elsevier Inc.
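    One simple way to form a composite deviation metric of the kind described (hedged: the published "total scalar perturbation" may be defined differently) is a root-sum-square of z-scored network parameters; the parameter names, control statistics, and 3.0 threshold below are illustrative assumptions:

```python
import math

# Sketch: combine per-parameter deviations from control activity into one
# scalar. Parameter names, control values, and the tipping threshold are
# assumptions for illustration, not the published pipeline.
def scalar_perturbation(treated, control_mean, control_sd):
    """Root-sum-square of per-parameter z-scores (Euclidean deviation)."""
    z = [(treated[k] - control_mean[k]) / control_sd[k] for k in treated]
    return math.sqrt(sum(v * v for v in z))

control_mean = {"firing_rate": 5.0, "burst_rate": 1.2, "synchrony": 0.6}
control_sd = {"firing_rate": 0.5, "burst_rate": 0.2, "synchrony": 0.05}
treated = {"firing_rate": 3.5, "burst_rate": 0.8, "synchrony": 0.45}

s = scalar_perturbation(treated, control_mean, control_sd)
print(f"total scalar perturbation = {s:.2f} (tipping if > 3.0)")
```

    Tracking this scalar over days in vitro and over concentration is what lets one ask whether a network returns toward zero (recovery) or diverges (a tipping point).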

  12. A simple procedure for γ-γ lifetime measurements using multi-element fast-timing arrays

    NASA Astrophysics Data System (ADS)

    Régis, J.-M.; Dannhoff, M.; Jolie, J.

    2018-07-01

    The lifetimes of nuclear excited states are important observables in nuclear physics. Their precise measurement is of key importance for developing and testing nuclear models, as they are directly linked with the quantum nature of the nuclear system. The γ-γ timing technique provides a direct lifetime determination by means of time-difference measurements between the γ rays that directly feed and decay from a nuclear excited state. Using arrays of very fast scintillator detectors, picosecond-sensitive time-difference measurements can be performed. We propose to construct a symmetric energy-energy-time cube, as is usually done to perform γ-γ coincidence analyses and lifetime determination with high-resolution germanium detectors. By construction, a symmetric mean time-walk characteristic is obtained that can be precisely determined and used as a single time correction for all the data, independently of the detectors. We present the results of timing-characteristics measurements of an array with six LaBr3(Ce) detectors, obtained using a 152Eu point γ-ray source. Compared with a single detector pair, the time resolution of the symmetrised time-difference spectra of the array is nearly unaffected.
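    The centroid relation underlying such symmetric time-difference spectra can be sketched as follows: after time-walk correction, the "delayed" spectrum centroid is shifted by +τ and the mirrored "anti-delayed" centroid by -τ, so τ is half their difference. The toy response shape and the 50 ps lifetime below are assumptions for illustration:

```python
# Sketch: extract a level lifetime from symmetric time-difference spectra.
# The triangular detector response and the 50 ps lifetime are toy assumptions.
def centroid(bins, counts):
    total = sum(counts)
    return sum(b * c for b, c in zip(bins, counts)) / total

bins = [t * 0.01 for t in range(-100, 101)]          # ns, 10 ps binning
tau_true = 0.05                                      # 50 ps level lifetime

def toy_spectrum(shift):
    """Triangular prompt response of half-width 0.3 ns, shifted by `shift`."""
    return [max(0.0, 1.0 - abs(b - shift) / 0.3) for b in bins]

delayed = toy_spectrum(+tau_true)
anti_delayed = toy_spectrum(-tau_true)
tau = (centroid(bins, delayed) - centroid(bins, anti_delayed)) / 2
print(f"extracted lifetime: {tau * 1000:.1f} ps")
```

    Taking half the centroid difference cancels any common offset of the two spectra, which is why a single symmetric time-walk correction suffices for the whole array.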

  13. Real-time and accurate rail wear measurement method and experimental analysis.

    PubMed

    Liu, Zhen; Li, Fengjiao; Huang, Bangkui; Zhang, Guangjun

    2014-08-01

    When a train is running on uneven or curved rails, it generates violent vibrations in the rails. As a result, the light plane of a single-line structured light vision sensor is not vertical, causing errors in rail wear measurements (referred to as vibration errors in this paper). To avoid vibration errors, a novel rail wear measurement method is introduced in this paper, which involves four main steps. First, a multi-line structured light vision sensor (which has at least two linear laser projectors) projects stripe-shaped light onto the inside of the rail. Second, the central points of the light stripes in the image are extracted quickly, and the three-dimensional profile of the rail is obtained based on the mathematical model of the structured light vision sensor. Third, the obtained rail profile is transformed from the measurement coordinate frame (MCF) to the standard rail coordinate frame (RCF) by taking the three-dimensional profile of the measured rail waist as the datum. Finally, rail wear constraint points are adopted to simplify the location of the rail wear points, and the profile composed of the rail wear points is compared with the standard rail profile in the RCF to determine the rail wear. Both real data experiments and simulation experiments show that the vibration errors can be eliminated when the proposed method is used.
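    The MCF-to-RCF step is, in cross-section, a rigid 2-D transform. A minimal sketch, with an assumed vibration tilt angle, offsets, and profile points (not values from the paper):

```python
import math

# Sketch: re-express a measured rail profile in the rail coordinate frame.
# The 2-degree tilt, translation, and profile coordinates are illustrative
# assumptions, not data from the paper.
def to_rcf(profile, theta, tx, ty):
    """Rotate each (x, y) profile point by theta, then translate by (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in profile]

# A light plane tilted 2 degrees by vibration, corrected back into the RCF
measured = [(10.0, 50.0), (12.0, 48.5), (14.0, 47.2)]   # mm, in the MCF
corrected = to_rcf(measured, theta=math.radians(-2.0), tx=0.3, ty=-0.1)

# Wear at a constraint point = deviation from the standard profile ordinate
standard_y = 49.9
wear = standard_y - corrected[0][1]
print(f"corrected point: {corrected[0]}, wear estimate: {wear:.3f} mm")
```

    In the paper's method the transform parameters come from fitting the measured rail-waist profile to its standard shape, which is what removes the vibration-induced tilt before wear is computed.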

  14. Scanning Quantum Cryogenic Atom Microscope

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Kollár, Alicia J.; Taylor, Stephen F.; Turner, Richard W.; Lev, Benjamin L.

    2017-03-01

    Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed-matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity, high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room to cryogenic temperatures with unprecedented dc-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (approximately 2 μm), or 6 nT/√Hz per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly 100 points with an effective field sensitivity of 600 pT/√Hz for each point during the same time as a point-by-point scanner measures these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly 2 orders of magnitude improvement in magnetic flux sensitivity (down to 10⁻⁶ Φ₀/√Hz) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are carefully benchmarked by imaging magnetic fields arising from microfabricated wire patterns in a system where samples may be scanned, cryogenically cooled, and easily exchanged. We anticipate that the SQCRAMscope will provide charge-transport images at temperatures from room temperature to 4 K in unconventional superconductors and topologically nontrivial materials.
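    The quoted parallel-measurement advantage follows from simple √N scaling: a sequential scanner splits the total integration time N ways, degrading per-point sensitivity by √N, while the parallel sensor keeps the full time per point. A one-line check using the abstract's own numbers:

```python
import math

# Scaling check using the numbers quoted in the abstract: measuring N points
# in parallel preserves full integration time per point, so the effective
# per-point sensitivity improves by sqrt(N) relative to sequential scanning.
per_point_sensitivity = 6e-9   # T/sqrt(Hz), single-point figure at duty cycle
n_points = 100                 # points resolved along the BEC simultaneously

effective = per_point_sensitivity / math.sqrt(n_points)
print(f"effective per-point sensitivity: {effective * 1e12:.0f} pT/sqrt(Hz)")
```

    This reproduces the 600 pT/√Hz effective figure quoted for the 100-point parallel measurement.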

  15. Energy, time, and channel evolution in catastrophically disturbed fluvial systems

    USGS Publications Warehouse

    Simon, A.

    1992-01-01

    Specific energy is shown to decrease nonlinearly with time during channel evolution and provides a measure of reductions in available energy at the channel bed. Data from two sites show convergence towards a minimum specific energy with time. Time-dependent reductions in specific energy at a point act in concert with minimization of the rate of energy dissipation over a reach during channel evolution as the fluvial systems adjust to a new equilibrium.

  16. On the reliability of the holographic method for measurement of soft tissue modifications during periodontal therapy

    NASA Astrophysics Data System (ADS)

    Stratul, Stefan-Ioan; Sinescu, Cosmin; Negrutiu, Meda; de Sabata, Aldo; Rominu, Mihai; Ogodescu, Alexandru; Rusu, Darian

    2014-01-01

    Holographic evaluations count among recent measurement tools in orthodontics and prosthodontics. This research introduces holography as an assessment method for 3D variations of gingival retractions. The retraction of the gingiva in the frontal regions of 5 patients with periodontitis was measured at six points and evaluated by holographic methods using a He-Ne laser device (1 mW, Superlum, Carrigtwohill, Ireland) on a 200 × 100 cm holographic bench. Impressions were taken during the first visit and cast models were manufactured. Six months after the end of periodontal treatment, the clinical measurements were repeated and the hologram of the first model was superimposed on the final model cast using reference points, while maintaining the optical geometric parameters. The retractions were evaluated in 3D at every point using dedicated software (SigmaScan Pro, Systat Software, San Jose, CA, USA). The Wilcoxon test was used to compare the mean recession changes between baseline and six months after treatment, and between the values in vivo and the values on the hologram. No statistically significant differences between the values in vivo and on the hologram were found. In conclusion, holography provides a valuable tool for assessing gingival retractions on virtual models. The data can be stored, reproduced, transmitted, and compared at a later time point with accuracy.

  17. Protocol: Testing the Relevance of Acupuncture Theory in the Treatment of Myofascial Pain in the Upper Trapezius Muscle.

    PubMed

    Elsdon, Dale S; Spanswick, Selina; Zaslawski, Chris; Meier, Peter C

    2017-01-01

    A protocol for a prospective single-blind parallel four-arm randomized placebo-controlled trial with repeated measures was designed to test the effects of various acupuncture methods compared with sham. Eighty self-selected participants with myofascial pain in the upper trapezius muscle were randomized into four groups. Group 1 received acupuncture to a myofascial trigger point (MTrP) in the upper trapezius. Group 2 received acupuncture to the MTrP in addition to relevant distal points. Group 3 received acupuncture to the relevant distal points only. Group 4 received a sham treatment to both the MTrP and distal points using a deactivated acupuncture laser device. Treatment was applied four times within 2 weeks, with outcomes measured throughout the trial and at 2 weeks and 4 weeks posttreatment. Outcome measurements were a 100-mm visual analog pain scale, SF-36, pressure pain threshold, Neck Disability Index, the Upper Extremity Functional Index, lateral flexion in the neck, McGill Pain Questionnaire, Massachusetts General Hospital Acupuncture Sensation Scale, Working Alliance Inventory (short form), and the Credibility Expectancy Questionnaire. Two-way analysis of variance (ANOVA) with repeated measures was used to assess the differences between groups. Copyright © 2017 Medical Association of Pharmacopuncture Institute. Published by Elsevier B.V. All rights reserved.

  18. Estimation of surface heat and moisture fluxes over a prairie grassland. I - In situ energy budget measurements incorporating a cooled mirror dew point hygrometer

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Crosson, William L.; Tanner, Bertrand D.

    1992-01-01

    Attention is focused on in situ measurements taken during FIFE required to support the development and validation of a biosphere model. Seasonal time series of surface flux measurements obtained from two surface radiation and energy budget stations utilized to support the FIFE surface flux measurement subprogram are examined. Data collection and processing procedures are discussed along with the measurement analysis for the complete 1987 test period.

  19. Do Activity Level Outcome Measures Commonly Used in Neurological Practice Assess Upper-Limb Movement Quality?

    PubMed

    Demers, Marika; Levin, Mindy F

    2017-07-01

    Movement is described in terms of task-related end point characteristics in external space and movement quality (joint rotations in body space). Assessment of upper-limb (UL) movement quality can assist therapists in designing effective treatment approaches for retraining lost motor elements and provide more detailed measurements of UL motor improvements over time. To determine the extent to which current activity level outcome measures used in neurological practice assess UL movement quality. Outcome measures assessing arm/hand function at the International Classification of Function activity level recommended by neurological clinical practice guidelines were reviewed. Measures assessing the UL as part of a general mobility assessment, those strictly evaluating body function/structure or participation, and paediatric measures were excluded. In all, 15 activity level outcome measures were identified; 9 measures assess how movement is performed by measuring either end point characteristics or movement quality. However, except for the Reaching Performance Scale for Stroke and the Motor Evaluation Scale for Upper Extremity in Stroke Patients, these measures only account for deficits indirectly by giving a partial score if movements are slower or if the person experiences difficulties. Six outcome measures neither assess any parameters related to movement quality, nor distinguish between improvements resulting from motor compensation or recovery of desired movement strategies. Current activity measures may not distinguish recovery from compensation and adequately track changes in movement quality over time. Movement quality may be incorporated into clinical assessment using observational kinematics with or without low-cost motion tracking technology.

  20. Evaluation of Nintendo Wii Balance Board as a Tool for Measuring Postural Stability After Sport-Related Concussion

    PubMed Central

    Merchant-Borna, Kian; Jones, Courtney Marie Cora; Janigro, Mattia; Wasserman, Erin B.; Clark, Ross A.; Bazarian, Jeffrey J.

    2017-01-01

    Context: Recent changes to postconcussion guidelines indicate that postural-stability assessment may augment traditional neurocognitive testing when making return-to-participation decisions. The Balance Error Scoring System (BESS) has been proposed as 1 measure of balance assessment. A new, freely available software program to accompany the Nintendo Wii Balance Board (WBB) system has recently been developed but has not been tested in concussed patients. Objective: To evaluate the feasibility of using the WBB to assess postural stability across 3 time points (baseline and postconcussion days 3 and 7) and to assess concurrent and convergent validity of the WBB with other traditional measures (BESS and Immediate Post-Concussion Assessment and Cognitive Test [ImPACT] battery) of assessing concussion recovery. Design: Cohort study. Setting: Athletic training room and collegiate sports arena. Patients or Other Participants: We collected preseason baseline data from 403 National Collegiate Athletic Association Division I and III student-athletes participating in contact sports and studied 19 participants (age = 19.2 ± 1.2 years, height = 177.7 ± 8.0 cm, mass = 75.3 ± 16.6 kg, time from baseline to day 3 postconcussion = 27.1 ± 36.6 weeks) who sustained concussions. Main Outcome Measure(s): We assessed balance using single-legged and double-legged stances for both the BESS and WBB, focusing on the double-legged, eyes-closed stance for the WBB, and used ImPACT to assess neurocognition at 3 time points. Descriptive statistics were used to characterize the sample. Mean differences and Spearman rank correlation coefficients were used to determine differences within and between metrics over the 3 time points. Individual-level changes over time were also assessed graphically. Results: The WBB demonstrated mean changes between baseline and day 3 postconcussion and between days 3 and 7 postconcussion. It was correlated with the BESS and ImPACT for several measures and identified 2 cases of abnormal balance postconcussion that would not have been identified via the BESS. Conclusions: When accompanied by the appropriate analytic software, the WBB may be an alternative for assessing postural stability in concussed student-athletes and may provide additional information to that obtained via the BESS and ImPACT. However, verification among independent samples is required. PMID:28387551

  1. Wave directional spreading from point field measurements.

    PubMed

    McAllister, M L; Venugopal, V; Borthwick, A G L

    2017-04-01

    Ocean waves have multidirectional components. Most wave measurements are taken at a single point, and so fail to capture information about the relative directions of the wave components directly. Conventional means of directional estimation require a minimum of three concurrent time series of measurements at different spatial locations in order to derive information on local directional wave spreading. Here, the relationship between wave nonlinearity and directionality is utilized to estimate local spreading without the need for multiple concurrent measurements, following Adcock & Taylor (Adcock & Taylor 2009 Proc. R. Soc. A 465 , 3361-3381. (doi:10.1098/rspa.2009.0031)), with the assumption that directional spreading is frequency independent. The method is applied to measurements recorded at the North Alwyn platform in the northern North Sea, and the results compared against estimates of wave spreading by conventional measurement methods and hindcast data. Records containing freak waves were excluded. It is found that the method provides accurate estimates of wave spreading over a range of conditions experienced at North Alwyn, despite the noisy chaotic signals that characterize such ocean wave data. The results provide further confirmation that Adcock and Taylor's method is applicable to metocean data and has considerable future promise as a technique to recover estimates of wave spreading from single point wave measurement devices.

  2. Wave directional spreading from point field measurements

    PubMed Central

    Venugopal, V.; Borthwick, A. G. L.

    2017-01-01

    Ocean waves have multidirectional components. Most wave measurements are taken at a single point, and so fail to capture information about the relative directions of the wave components directly. Conventional means of directional estimation require a minimum of three concurrent time series of measurements at different spatial locations in order to derive information on local directional wave spreading. Here, the relationship between wave nonlinearity and directionality is utilized to estimate local spreading without the need for multiple concurrent measurements, following Adcock & Taylor (Adcock & Taylor 2009 Proc. R. Soc. A 465, 3361–3381. (doi:10.1098/rspa.2009.0031)), with the assumption that directional spreading is frequency independent. The method is applied to measurements recorded at the North Alwyn platform in the northern North Sea, and the results compared against estimates of wave spreading by conventional measurement methods and hindcast data. Records containing freak waves were excluded. It is found that the method provides accurate estimates of wave spreading over a range of conditions experienced at North Alwyn, despite the noisy chaotic signals that characterize such ocean wave data. The results provide further confirmation that Adcock and Taylor's method is applicable to metocean data and has considerable future promise as a technique to recover estimates of wave spreading from single point wave measurement devices. PMID:28484326

  3. An Investigation of the Relation Between Contact Thermometry and Dew-Point Temperature Realization

    NASA Astrophysics Data System (ADS)

    Benyon, R.; Böse, N.; Mitter, H.; Mutter, D.; Vicente, T.

    2012-09-01

    Precision optical dew-point hygrometers are the most commonly used transfer standards for the comparison of dew-point temperature realizations at National Metrology Institutes (NMIs) and for disseminating traceability to calibration laboratories. These instruments have been shown to be highly reproducible when properly used. In order to obtain the best performance, the resistance of the platinum resistance thermometer (PRT) embedded in the mirror is usually measured with an external, traceable resistance bridge or digital multimeter. The relation between the conventional calibration of miniature PRTs, prior to their assembly in the mirrors of state-of-the-art optical dew-point hygrometers, and their subsequent calibration as dew-point temperature measurement devices has been investigated. Standard humidity generators of three NMIs were used to calibrate hygrometers of different designs, covering the dew-point temperature range from -75 °C to +95 °C. The results span more than a decade, during which time successive improvements and modifications were implemented by the manufacturer. The findings are presented and discussed in the context of enabling the optimum use of these transfer standards and as a basis for determining contributions to the uncertainty in their calibration.

  4. Gradient-free determination of isoelectric points of proteins on chip.

    PubMed

    Łapińska, Urszula; Saar, Kadi L; Yates, Emma V; Herling, Therese W; Müller, Thomas; Challa, Pavan K; Dobson, Christopher M; Knowles, Tuomas P J

    2017-08-30

    The isoelectric point (pI) of a protein is a key characteristic that influences its overall electrostatic behaviour. The majority of conventional methods for determining the isoelectric point of a molecule rely on the use of spatial gradients in pH, although significant practical challenges are associated with such techniques, notably the difficulty of generating a stable and well-controlled pH gradient. Here, we introduce a gradient-free approach, exploiting a microfluidic platform that allows us to perform rapid pH changes on chip and probe the electrophoretic mobility of species in a controlled field. In particular, in this approach the pH of the electrolyte solution is modulated in time rather than in space, as is the case for conventional determinations of the isoelectric point. To demonstrate the general applicability of this platform, we have measured the isoelectric points of a representative set of seven proteins: bovine serum albumin, β-lactoglobulin, ribonuclease A, ovalbumin, human transferrin, ubiquitin and myoglobin, in microlitre sample volumes. The ability to conduct measurements in free solution thus provides the basis for the rapid determination of isoelectric points of proteins under a wide variety of solution conditions and in small volumes.
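The principle underlying the record above is that a protein is positively charged below its isoelectric point and negatively charged above it, so the pI is the pH at which electrophoretic mobility crosses zero. That idea can be illustrated with a simple root-finding sketch; the mobility function here is a hypothetical toy model, not the paper's microfluidic measurement:

```python
def isoelectric_point(mobility, lo=2.0, hi=12.0, tol=1e-4):
    """Bisection for the pH at which electrophoretic mobility crosses zero.

    `mobility` is a callable returning the signed mobility at a given pH;
    proteins are positively charged below their pI and negative above it.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mobility(mid) > 0:   # still positively charged: pI lies above mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy linear mobility model with a pI at pH 5.3 (illustrative only):
print(round(isoelectric_point(lambda ph: 5.3 - ph), 2))  # -> 5.3
```

Any monotonically decreasing mobility-versus-pH curve works the same way; the experimental novelty in the record is obtaining that curve by stepping the pH in time rather than in space.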

  5. Looking for Off-Fault Deformation and Measuring Strain Accumulation During the Past 70 years on a Portion of the Locked San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Vadman, M.; Bemis, S. P.

    2017-12-01

    Even at high tectonic rates, detection of possible off-fault plastic/aseismic deformation and variability in far-field strain accumulation requires high spatial resolution data and likely decades of measurements. Due to the influence that variability in interseismic deformation could have on the timing, size, and location of future earthquakes and the calculation of modern geodetic estimates of strain, we attempt to use historical aerial photographs to constrain deformation through time across a locked fault. Modern photo-based 3D reconstruction techniques facilitate the creation of dense point clouds from historical aerial photograph collections. We use these tools to generate a time series of high-resolution point clouds that span 10-20 km across the Carrizo Plain segment of the San Andreas fault. We chose this location due to the high tectonic rates along the San Andreas fault and the lack of vegetation that might otherwise obscure tectonic signals. We use ground control points collected with differential GPS to establish scale and georeference the aerial photograph-derived point clouds. With a locked fault assumption, point clouds can be co-registered (to one another and/or the 1.7 km wide B4 airborne lidar dataset) along the fault trace to calculate relative displacements away from the fault. We use CloudCompare to compute 3D surface displacements, which reflect the interseismic strain accumulation that occurred in the time interval between photo collections. As expected, we do not observe clear surface displacements along the primary fault trace in our comparisons of the B4 lidar data against the aerial photograph-derived point clouds. However, there may be small-scale variations within the lidar swath area that represent near-fault plastic deformation. 
With large-scale historical photographs available for the Carrizo Plain extending back to at least the 1940s, we can potentially sample nearly half the interseismic period since the last major earthquake on this portion of this fault (1857). Where sufficient aerial photograph coverage is available, this approach has the potential to illuminate complex fault zone processes for this and other major strike-slip faults.
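The epoch-to-epoch comparison step described in the record above can be illustrated with the basic cloud-to-cloud (C2C) metric underlying tools such as CloudCompare: the nearest-neighbour distance from each point of one epoch to the cloud of the other. This is a minimal brute-force sketch on synthetic data, not the authors' processing chain:

```python
import numpy as np

def cloud_to_cloud_distance(reference, compared):
    """Nearest-neighbour distance from each `compared` point to `reference`.

    Brute force is fine for small clouds; real tools use an octree or KD-tree.
    """
    diff = compared[:, None, :] - reference[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)

# Synthetic test: the second epoch is the first shifted 5 cm vertically, so
# no point can end up farther than 0.05 from its nearest reference point.
rng = np.random.default_rng(0)
epoch1 = rng.random((500, 3))
epoch2 = epoch1 + np.array([0.0, 0.0, 0.05])
distances = cloud_to_cloud_distance(epoch1, epoch2)
```

Note that plain C2C distance underestimates true surface displacement when the motion is along-surface rather than surface-normal, which is one reason co-registration along the fault trace matters in the workflow above.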

  6. Precision Measurement for Grasping the Time Series of Welding Deformation in Large-Scale Industrial Products

    NASA Astrophysics Data System (ADS)

    Abe, R.; Hamada, K.; Hirata, N.; Tamura, R.; Nishi, N.

    2015-05-01

    As with BIM-based quality management in the construction industry, there is strong demand in the shipbuilding field for quality management of the processes by which components are manufactured: the time series of three-dimensional deformation at each process stage must be grasped accurately. In this study, focusing on the shipbuilding field, we examine a three-dimensional measurement method. In a shipyard, large equipment and components are intricately arranged in a limited space, so the placement of measuring equipment and targets is restricted. Moreover, the elements to be measured move between processes, so establishing reference points for time-series comparison requires care. This paper discusses a method for measuring welding deformation in time series using a total station. In particular, the amount of deformation at each process stage is evaluated using multiple measurement datasets obtained with this approach.

  7. DL-sQUAL: A Multiple-Item Scale for Measuring Service Quality of Online Distance Learning Programs

    ERIC Educational Resources Information Center

    Shaik, Naj; Lowe, Sue; Pinegar, Kem

    2006-01-01

    Education is a service with multiplicity of student interactions over time and across multiple touch points. Quality teaching needs to be supplemented by consistent quality supporting services for programs to succeed under the competitive distance learning landscape. ServQual and e-SQ scales have been proposed for measuring quality of traditional…

  8. The procedures manual of the Environmental Measurements Laboratory. Volume 2, 28th edition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chieco, N.A.

    1997-02-01

    This report contains environmental sampling and analytical chemistry procedures that are performed by the Environmental Measurements Laboratory. The purpose of environmental sampling and analysis is to obtain data that describe a particular site at a specific point in time from which an evaluation can be made as a basis for possible action.

  9. Stability of Language Performance at 4 and 5 Years: Measurement and Participant Variability

    ERIC Educational Resources Information Center

    Eadie, Patricia; Nguyen, Cattram; Carlin, John; Bavin, Edith; Bretherton, Lesley; Reilly, Sheena

    2014-01-01

    Background: Language impairment (LI) in the preschool years is known to vary over time. Stability in the diagnosis of LI may be influenced by children's individual variability, the measurement error of commonly used assessment instruments and the cut-points used to define impairment. Aims: To investigate the agreement between two different…

  10. The Relationship of Life Events to Academic Performance in College Students.

    ERIC Educational Resources Information Center

    Knapp, Samuel

    Numerous studies have shown a correlation between life events and physical health, mental health, and behavioral measures such as impaired grade point average. Most of these studies have measured stressfulness by summing the total number of events experienced in a given time period. However, Vinokur and Selzer have shown that the amount of…

  11. GOES-R active vibration damping controller design, implementation, and on-orbit performance

    NASA Astrophysics Data System (ADS)

    Clapp, Brian R.; Weigl, Harald J.; Goodzeit, Neil E.; Carter, Delano R.; Rood, Timothy J.

    2018-01-01

    GOES-R series spacecraft feature a number of flexible appendages with modal frequencies below 3.0 Hz which, if excited by spacecraft disturbances, can be sources of undesirable jitter perturbing spacecraft pointing. To meet GOES-R pointing stability requirements, the spacecraft flight software implements an Active Vibration Damping (AVD) rate control law which acts in parallel with the nadir point attitude control law. The AVD controller commands spacecraft reaction wheel actuators based upon Inertial Measurement Unit (IMU) inputs to provide additional damping for spacecraft structural modes below 3.0 Hz which vary with solar wing angle. A GOES-R spacecraft dynamics and attitude control system identified model is constructed from pseudo-random reaction wheel torque commands and IMU angular rate response measurements occurring over a single orbit during spacecraft post-deployment activities. The identified Fourier model is computed on the ground, uplinked to the spacecraft flight computer, and the AVD controller filter coefficients are periodically computed on-board from the Fourier model. Consequently, the AVD controller formulation is based not upon pre-launch simulation model estimates but upon on-orbit nadir point attitude control and time-varying spacecraft dynamics. GOES-R high-fidelity time domain simulation results herein demonstrate the accuracy of the AVD identified Fourier model relative to the pre-launch spacecraft dynamics and control truth model. The AVD controller on-board the GOES-16 spacecraft achieves more than a ten-fold increase in structural mode damping for the fundamental solar wing mode while maintaining controller stability margins and ensuring that the nadir point attitude control bandwidth does not fall below 0.02 Hz. 
On-orbit GOES-16 spacecraft appendage modal frequencies and damping ratios are quantified based upon the AVD system identification, and the increase in modal damping provided by the AVD controller for each structural mode is presented. The GOES-16 spacecraft AVD controller frequency domain stability margins and nadir point attitude control bandwidth are presented along with on-orbit time domain disturbance response performance.

  12. GOES-R Active Vibration Damping Controller Design, Implementation, and On-Orbit Performance

    NASA Technical Reports Server (NTRS)

    Clapp, Brian R.; Weigl, Harald J.; Goodzeit, Neil E.; Carter, Delano R.; Rood, Timothy J.

    2017-01-01

    GOES-R series spacecraft feature a number of flexible appendages with modal frequencies below 3.0 Hz which, if excited by spacecraft disturbances, can be sources of undesirable jitter perturbing spacecraft pointing. In order to meet GOES-R pointing stability requirements, the spacecraft flight software implements an Active Vibration Damping (AVD) rate control law which acts in parallel with the nadir point attitude control law. The AVD controller commands spacecraft reaction wheel actuators based upon Inertial Measurement Unit (IMU) inputs to provide additional damping for spacecraft structural modes below 3.0 Hz which vary with solar wing angle. A GOES-R spacecraft dynamics and attitude control system identified model is constructed from pseudo-random reaction wheel torque commands and IMU angular rate response measurements occurring over a single orbit during spacecraft post-deployment activities. The identified Fourier model is computed on the ground, uplinked to the spacecraft flight computer, and the AVD controller filter coefficients are periodically computed on-board from the Fourier model. Consequently, the AVD controller formulation is based not upon pre-launch simulation model estimates but upon on-orbit nadir point attitude control and time-varying spacecraft dynamics. GOES-R high-fidelity time domain simulation results herein demonstrate the accuracy of the AVD identified Fourier model relative to the pre-launch spacecraft dynamics and control truth model. The AVD controller on-board the GOES-16 spacecraft achieves more than a ten-fold increase in structural mode damping of the fundamental solar wing mode while maintaining controller stability margins and ensuring that the nadir point attitude control bandwidth does not fall below 0.02 Hz. 
On-orbit GOES-16 spacecraft appendage modal frequencies and damping ratios are quantified based upon the AVD system identification, and the increase in modal damping provided by the AVD controller for each structural mode is presented. The GOES-16 spacecraft AVD controller frequency domain stability margins and nadir point attitude control bandwidth are presented along with on-orbit time domain disturbance response performance.

  13. Fast rockfall hazard assessment along a road section using the new LYNX Mobile Mapper Lidar

    NASA Astrophysics Data System (ADS)

    Dario, Carrea; Celine, Longchamp; Michel, Jaboyedoff; Marc, Choffet; Marc-Henri, Derron; Clement, Michoud; Andrea, Pedrazzini; Dario, Conforti; Michael, Leslar; William, Tompkinson

    2010-05-01

    Terrestrial laser scanning (TLS) is an active remote sensing technique providing high-resolution point clouds of the topography. The high-resolution digital elevation models (HRDEM) derived from these point clouds are an important tool for slope stability analysis. The LYNX Mobile Mapper is a new-generation TLS developed by Optech. Its particularity is that it is mounted on a vehicle and provides a 360° high-density point cloud at a 200 kHz measurement rate in a very short acquisition time. It is composed of two sensors, improving the resolution and reducing laser shadowing. The spatial resolution is better than 10 cm at 10 m range at a velocity of 50 km/h, and the reflectivity of the signal is around 20% at a distance of 200 m. The Lidar is also equipped with a DGPS and an inertial measurement unit (IMU), which provide the real-time position and directly georeference the point cloud. Thanks to its ability to provide a continuous dataset over an extended area along a road, this TLS system is useful for rockfall hazard assessment. In addition, this new scanner considerably decreases the time spent in the field, and postprocessing is reduced thanks to the directly georeferenced data. Nevertheless, its application is limited to areas close to the road. The LYNX has been tested near Pontarlier (France) along road sections affected by rockfall. In terms of tectonic context, the study area is located in the Folded Jura, mainly composed of limestone. The result is a very detailed point cloud with a point spacing of 4 cm. The LYNX provides detailed topography on which a structural analysis has been carried out using COLTOP-3D, allowing a full structural description along the road. In addition, kinematic tests coupled with probabilistic analysis yield a susceptibility map of the road cuts and natural cliffs above the road. Comparisons with field surveys confirm the Lidar approach.

  14. Non-invasive cortisol measurements as indicators of physiological stress responses in guinea pigs

    PubMed Central

    Pschernig, Elisabeth; Wallner, Bernard; Millesi, Eva

    2016-01-01

    Non-invasive measurements of glucocorticoid (GC) concentrations, including cortisol and corticosterone, serve as reliable indicators of adrenocortical activities and physiological stress loads in a variety of species. As an alternative to invasive analyses based on plasma, GC concentrations in saliva still represent single-point-of-time measurements, suitable for studying short-term or acute stress responses, whereas fecal GC metabolites (FGMs) reflect overall stress loads and stress responses after a species-specific time frame in the long term. In our study species, the domestic guinea pig, GC measurements are commonly used to indicate stress responses to different environmental conditions, but the biological relevance of non-invasive measurements is widely unknown. We therefore established an experimental protocol based on the animals’ natural stress responses to different environmental conditions and compared GC levels in plasma, saliva, and fecal samples during non-stressful social isolations and stressful two-hour social confrontations with unfamiliar individuals. Plasma and saliva cortisol concentrations were significantly increased directly after the social confrontations, and plasma and saliva cortisol levels were strongly correlated. This demonstrates a high biological relevance of GC measurements in saliva. FGM levels measured 20 h afterwards, representing the reported mean gut passage time based on physiological validations, revealed that the overall stress load was not affected by the confrontations, and no relation to plasma cortisol levels was detected. We therefore measured FGMs in two-hour intervals for 24 h after another social confrontation and detected significantly increased levels after four to twelve hours, with peak concentrations reached after only six hours. 
Our findings confirm that non-invasive GC measurements in guinea pigs are highly biologically relevant in indicating physiological stress responses compared to circulating levels in plasma in the short- and long-term. Our approach also underlines the importance of detailed investigations on how to use and interpret non-invasive measurements, including the determination of appropriate time points for sample collections. PMID:26839750

  15. Gravity and the geoid in the Nepal Himalaya

    NASA Technical Reports Server (NTRS)

    Bilham, Roger

    1992-01-01

    Materials within the Himalaya are rising due to convergence between India and Asia. If the rate of erosion is comparable to the rate of uplift the mean surface elevation will remain constant. Any slight imbalance in these two processes will lead to growth or attrition of the Himalaya. The process of uplift of materials within the Himalaya coupled with surface erosion is similar to the advance of a glacier into a region of melting. If the melting rate exceeds the rate of downhill motion of the glacier then the terminus of the glacier will recede up-valley despite the downhill motion of the bulk of the glacier. Thus although buried rocks, minerals and surface control points in the Himalaya are undoubtedly rising, the growth or collapse of the Himalaya depends on the erosion rate, which is invisible to geodetic measurements. Erosion rates are currently estimated from suspended sediment loads in rivers in the Himalaya. These typically underestimate the real erosion rate since bed-load is not measured during times of heavy flood, and it is difficult to integrate widely varying suspended load measurements over many years. An alternative way to measure erosion rate is to measure the rate of change of gravity in a region of uplift. If a control point moves vertically it should be accompanied by a reduction in gravity as the point moves away from the Earth's center of mass. There is a difference in the change of gravity between uplift with and without erosion, corresponding to the difference between the free-air gradient and the gradient in the acceleration due to gravity caused by a corresponding thickness of rock. Essentially, gravity should change precisely in accord with a change in elevation of the point along the free-air gradient if erosion equals the uplift rate. We were funded by NASA to undertake a measurement of absolute gravity simultaneously with measurements of GPS height within the Himalaya. 
Since both absolute gravity and time are known in an absolute sense to 1 part in 10^10, it is possible to estimate gravity with a precision of 0.1 μGal. Known systematic errors reduce the measurement to an absolute uncertainty of 6 μGal. The free-air gradient at the point of measurement is typically about 3 μGal/cm. At Simikot, where our experiment was conducted, we determined a vertical gravity gradient of 4.4 μGal/cm.
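The distinction drawn in the record above, gravity change during uplift with versus without erosion, comes down to the difference between the free-air gradient and that gradient reduced by the attraction of the newly interposed rock (a Bouguer slab). A back-of-the-envelope sketch, assuming a nominal crustal density of 2670 kg/m³:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_CRUST = 2670.0   # assumed crustal density, kg/m^3 (a common nominal value)

# Free-air gradient: gravity decrease per cm of elevation gain, in microgal.
FREE_AIR_UGAL_PER_CM = 3.086

def bouguer_slab_ugal_per_cm(rho=RHO_CRUST):
    """Attraction of an infinite rock slab, in microgal per cm of thickness."""
    # 2*pi*G*rho is in s^-2, i.e. m/s^2 per metre of slab thickness;
    # convert m/s^2 -> microgal (x 1e8) and per metre -> per cm (x 1e-2).
    return 2.0 * math.pi * G * rho * 1e8 * 1e-2

# Uplift balanced by erosion: the point simply rises through air, so gravity
# falls at the full free-air rate, close to the "about 3 microgal/cm" quoted.
# Uplift WITHOUT erosion: rock accumulates beneath the point, and its
# attraction partly offsets the free-air decrease.
gradient_without_erosion = FREE_AIR_UGAL_PER_CM - bouguer_slab_ugal_per_cm()
print(round(bouguer_slab_ugal_per_cm(), 2), round(gradient_without_erosion, 2))
# -> 1.12 1.97
```

The roughly 1 μGal/cm gap between the two regimes is what makes joint absolute-gravity and GPS-height measurements sensitive to the erosion rate itself.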

  16. Does self-efficacy mediate the relationship between transformational leadership behaviours and healthcare workers' sleep quality? A longitudinal study.

    PubMed

    Munir, Fehmidah; Nielsen, Karina

    2009-09-01

    This paper is a report of a study conducted to investigate the longitudinal relationship between transformational leadership behaviours and employees' sleep quality, and the mediating effects of self-efficacy. Although there is evidence for the influential role of transformational leadership on health outcomes, researchers have used either attitude outcomes (e.g. job satisfaction) or softer health measures, such as general well-being. Specific measures of well-being such as sleep quality have not been used, despite its association with working conditions. A longitudinal design was used to collect data from Danish healthcare workers at time 1 in 2005 (n = 447) and 18 months later at time 2 in 2007 (n = 274). Structural equation modelling was used to investigate the relationships between transformational leadership, self-efficacy and sleep quality at both time points independently (cross-sectionally) and longitudinally. For all constructs, time 2 measures were influenced by the baseline level. Direct relationships between transformational leadership and sleep quality were found. This relationship was negative cross-sectionally at both time points, but positive between baseline and follow-up. The relationship between leadership and employees' sleep quality was not mediated by employees' self-efficacy. Our results indicate that training managers in transformational leadership behaviours may have a positive impact on healthcare workers' health over time. However, more research is needed to examine the mechanisms by which transformational leadership brings about improved sleep quality; self-efficacy was not found to be the explanation.

  17. Commissioning Varian enhanced dynamic wedge in the PINNACLE treatment planning system using Gafchromic EBT film.

    PubMed

    Fontanarosa, Davide; Orlandini, Lucia Clara; Andriani, Italo; Bernardi, Luca

    2009-10-01

    In external photon beam therapy, the dynamic wedge technique is a well-established method for dose inhomogeneity compensation. The introduction of the enhanced dynamic wedge (EDW) on Varian LINACs considerably improved the pre-existing techniques. When commissioning a Varian LINAC in a PINNACLE3 treatment planning system (TPS), the user is required to import quite a few measurements of EDW profiles and percentage depth doses (PDDs). Standard measurement devices such as ionization chambers in a water phantom are not well suited to this situation, where each measurement point is obtained by integrating over the entire exposure: measurements would be very laborious and time-consuming, and most of the time not practically feasible. The goal of the present work is to introduce an alternative, hands-on procedure to perform the measurements using a combination of Gafchromic™ EBT films, irradiated sideways in a single shot for both profiles and PDDs, and a single standard ionization chamber. The scanned profiles obtained at different depths were easily imported into the TPS; for the PDD measurements, a correction proved necessary to account for a "self-shielding" effect introduced by the presence of the films themselves when irradiated sideways, which results in an underestimation of the dose at greater depths. A correction curve was derived by comparing TPS open-field validated measurements with the curves extracted from Gafchromic™ EBT films. Finally, the curve was applied to all the wedged-field PDD measurements and minimized the errors. The comparison for the 15 MV photon beam between the 48 measured and calculated profiles and 12 PDDs (field sizes from 5 × 5 to 20 × 20 cm², wedge angles ranging from 15° to 60°) was acceptable. The confidence limit (CL) was used as the fit indicator, as suggested by ESTRO Booklet No. 7. 
For the investigated PDDs, the maximum value was 6.40 in the build-up region and 2.83 beyond the maximum dose point. Regarding cross-beam profiles, the CLs were below 3 for 85% of the points within the field and for 96% of the points outside the field. In the penumbra region, a more appropriate parameter for evaluating the agreement between calculated and measured doses is the distance to agreement; only 73% of the profiles had CLs below 15, but for the remaining ones, distance-to-agreement values were within 3 mm. One hundred ionization chamber point dose measurements (for square and elongated fields at different depths, and for points in-field and out-of-field) were performed in a water phantom; 98 of them confirmed the TPS calculations with differences lower than 3%, and considerably lower in most cases. This article gives valuable guidance and insight to other physicists attempting EDW commissioning in the PINNACLE TPS software using Gafchromic EBT films.
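The confidence limit used as the fit indicator in the record above is, in the formulation commonly attributed to ESTRO Booklet No. 7, the absolute mean deviation plus 1.5 times the standard deviation of the deviations; this sketch assumes that definition and uses hypothetical deviation values:

```python
import statistics

def confidence_limit(deviations_percent):
    """Confidence limit as commonly used in TPS dose verification:
    CL = |mean deviation| + 1.5 * SD of the deviations."""
    mean_dev = statistics.fmean(deviations_percent)
    sd = statistics.stdev(deviations_percent)
    return abs(mean_dev) + 1.5 * sd

# Hypothetical calculation-vs-measurement dose deviations, in percent:
print(round(confidence_limit([1.0, 2.0, 3.0]), 2))  # -> 3.5
```

A single CL value thus penalizes both systematic offset (the mean term) and scatter (the SD term), which is why it works as a pass/fail indicator for whole profile sets.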

  18. Measurement of flow separation in a human vocal folds model

    NASA Astrophysics Data System (ADS)

    Šidlof, Petr; Doaré, Olivier; Cadot, Olivier; Chaigne, Antoine

    2011-07-01

    The paper provides experimental data on flow separation from a model of the human vocal folds. Data were measured on a four times scaled physical model, where one vocal fold was fixed and the other oscillated due to fluid-structure interaction. The vocal folds were fabricated from silicone rubber and placed on elastic support in the wall of a transparent wind tunnel. A PIV system was used to visualize the flow fields immediately downstream of the glottis and to measure the velocity fields. From the visualizations, the position of the flow separation point was evaluated using a semiautomatic procedure and plotted for different airflow velocities. The separation point position was quantified relative to the orifice width separately for the left and right vocal folds to account for flow asymmetry. The results indicate that the flow separation point remains close to the narrowest cross-section during most of the vocal fold vibration cycle, but moves significantly further downstream shortly prior to and after glottal closure.

  19. Test-retest reliability and minimal detectable change of two simplified 3-point balance measures in patients with stroke.

    PubMed

    Chen, Yi-Miau; Huang, Yi-Jing; Huang, Chien-Yu; Lin, Gong-Hong; Liaw, Lih-Jiun; Lee, Shih-Chieh; Hsieh, Ching-Lin

    2017-10-01

    The 3-point Berg Balance Scale (BBS-3P) and 3-point Postural Assessment Scale for Stroke Patients (PASS-3P) were simplified from the BBS and PASS to overcome the complex scoring systems. The BBS-3P and PASS-3P were more feasible in busy clinical practice and showed similarly sound validity and responsiveness to the original measures. However, the reliability of the BBS-3P and PASS-3P is unknown, limiting their utility and the interpretability of scores. We aimed to examine the test-retest reliability and minimal detectable change (MDC) of the BBS-3P and PASS-3P in patients with stroke. Cross-sectional study. The rehabilitation departments of a medical center and a community hospital. A total of 51 chronic stroke patients (64.7% male). Both balance measures were administered twice, 7 days apart. The test-retest reliability of both the BBS-3P and PASS-3P was examined by intraclass correlation coefficients (ICC). The MDC and its percentage over the total score (MDC%) of each measure were calculated for examining the random measurement errors. The ICC values of the BBS-3P and PASS-3P were 0.99 and 0.97, respectively. The MDC% (MDC) of the BBS-3P and PASS-3P were 9.1% (5.1 points) and 8.4% (3.0 points), respectively, indicating that both measures had small and acceptable random measurement errors. Our results showed that both the BBS-3P and the PASS-3P had good test-retest reliability, with small and acceptable random measurement error. These two simplified 3-level balance measures can provide reliable results over time. Our findings support the repeated administration of the BBS-3P and PASS-3P to monitor the balance of patients with stroke. The MDC values can help clinicians and researchers interpret the change scores more precisely.
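MDC values like those reported above are conventionally derived from the test-retest ICC: the standard error of measurement is SEM = SD·√(1 − ICC), and MDC95 = 1.96·√2·SEM. A minimal sketch of that standard calculation (the SD value below is illustrative, not taken from the study):

```python
import math

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence from test-retest data."""
    sem = sd * math.sqrt(1.0 - icc)       # standard error of measurement
    return 1.96 * math.sqrt(2.0) * sem    # sqrt(2): difference of two measurements

def mdc_percent(mdc, scale_range):
    """MDC expressed as a percentage of the total score range."""
    return 100.0 * mdc / scale_range

# Illustrative numbers only (not from the study):
print(round(mdc95(10.0, 0.99), 2))  # -> 2.77
```

The formula makes the trade-off explicit: a higher ICC or a smaller sample SD both shrink the change score needed to exceed random measurement error.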

  20. Using digital image correlation and three dimensional point tracking in conjunction with real time operating data expansion techniques to predict full-field dynamic strain

    NASA Astrophysics Data System (ADS)

    Avitabile, Peter; Baqersad, Javad; Niezrecki, Christopher

    2014-05-01

    Large structures pose unique difficulties in the acquisition of measured dynamic data with conventional techniques, difficulties that are further complicated when the structure also has rotating members such as wind turbine blades and helicopter blades. Optical techniques (digital image correlation and dynamic point tracking) are used to measure line-of-sight data without the need to contact the structure, eliminating cumbersome cabling issues. The data acquired from these optical approaches are used in conjunction with a unique real-time operating data expansion process to obtain full-field dynamic displacement and dynamic strain. The measurement approaches are described in this paper along with the expansion procedures. Data are collected for a single blade from a wind turbine and also for a three-bladed assembled wind turbine configuration. Measured strains are compared to results from a limited set of optical measurements used to perform the expansion, yielding full-field strain results including locations that are not available from the line-of-sight measurements acquired. The success of the approach clearly demonstrates the potential to provide much-needed full-field displacement and strain information that can be used to help assess the structural health of structures.
