A Well-Clear Volume Based on Time to Entry Point
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony J.; Munoz, Cesar A.; Upchurch, Jason M.; Chamberlain, James P.; Consiglio, Maria C.
2014-01-01
A well-clear volume is a key component of NASA's Separation Assurance concept for the integration of UAS in the NAS. This paper proposes a mathematical definition of the well-clear volume that uses, in addition to distance thresholds, a time threshold based on time to entry point (TEP). The mathematical model that results from this definition is more conservative than other candidate definitions of the well-clear volume that are based on range over closure rate and time to closest point of approach.
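The time-to-CPA quantity that the TEP-based volume is compared against can be computed from relative position and velocity alone. A minimal sketch of that standard kinematic calculation (this is not the paper's TEP definition, and the function names are illustrative):

```python
import numpy as np

def time_to_cpa(s, v):
    """Time to horizontal closest point of approach.

    s: relative position (ownship minus intruder), v: relative velocity.
    Returns 0 when the aircraft are diverging (the CPA is in the past).
    Standard kinematic formula, not the paper's TEP-based threshold.
    """
    s, v = np.asarray(s, float), np.asarray(v, float)
    vv = v @ v
    if vv == 0.0:
        return 0.0
    return max(0.0, -(s @ v) / vv)

def distance_at_cpa(s, v):
    """Separation at the closest point of approach."""
    t = time_to_cpa(s, v)
    return float(np.linalg.norm(s + t * np.asarray(v, float)))
```

For example, a head-on encounter at 10 units of separation closing at 1 unit/s gives a time to CPA of 10 s and zero separation at CPA.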
A travel time forecasting model based on change-point detection method
NASA Astrophysics Data System (ADS)
LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei
2017-06-01
Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. This paper proposes a travel time forecasting model for urban road traffic sensor data based on change-point detection. First-order differencing is used to preprocess the raw loop data; a change-point detection algorithm classifies the long sequence of travel time observations into several patterns; a forecasting model is then established on the basis of the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search, dividing the travel time series into several sections of similar state. A linear weight function is then used to fit the travel time sequence and forecast travel time. The results show that the model achieves high accuracy in travel time forecasting.
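The differencing and detection steps can be sketched as follows. This is a deliberately simple median-based jump detector on the first differences; the paper's actual change-point algorithm, and the per-segment ARIMA fitting (e.g. via statsmodels), are more elaborate:

```python
import numpy as np

def change_points(times, k=3.0):
    """Flag candidate change points in a travel-time series.

    First-order differencing (as in the paper's preprocessing), then
    mark points whose jump exceeds k times the robust spread (MAD) of
    the differences. A simple stand-in for the paper's detector.
    """
    x = np.asarray(times, float)
    d = np.diff(x)
    med = np.median(d)
    mad = np.median(np.abs(d - med)) + 1e-12  # avoid division-free zero spread
    return [i + 1 for i, v in enumerate(d) if abs(v - med) > k * 1.4826 * mad]
```

On a series that jumps from a 10-minute to a 20-minute travel-time regime, the single transition index is flagged and the flat stretches are not.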
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches depends on multiple factors, such as the signal-to-noise ratio and the number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was found to have minimal impact on parameter estimation accuracy (≈5%), as validated by in silico and in vivo experiments. This nearly order-of-magnitude reduction in the number of needed time points enables the use of FLIM-FRET for certain high-throughput applications that would be infeasible if the full set of time sampling points were used.
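The D-optimality criterion in the abstract amounts to choosing time points whose sensitivity vectors jointly maximize det(SᵀS). A greedy sketch under that reading (the paper's exact search strategy may differ; the small ridge term only breaks degenerate ties while fewer rows than parameters are selected):

```python
import numpy as np

def d_optimal_subset(S, m, eps=1e-9):
    """Greedy D-optimal selection of m time points.

    S: (n_timepoints, n_params) sensitivity matrix, one row per candidate
    time point. Greedily adds the row maximizing det(M.T @ M + eps*I),
    where M stacks the chosen rows. Illustrative sketch only.
    """
    S = np.asarray(S, float)
    p = S.shape[1]
    chosen = []
    for _ in range(m):
        best, best_det = None, -np.inf
        for i in range(len(S)):
            if i in chosen:
                continue
            M = S[chosen + [i]]
            d = np.linalg.det(M.T @ M + eps * np.eye(p))
            if d > best_det:
                best, best_det = i, d
        chosen.append(best)
    return sorted(chosen)
```

With two parameters, the greedy search prefers informative, mutually complementary time points over near-duplicate or low-sensitivity ones.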
Koppelmans, Vincent; Erdeniz, Burak; De Dios, Yiri E; Wood, Scott J; Reuter-Lorenz, Patricia A; Kofman, Igor; Bloomberg, Jacob J; Mulavara, Ajitkumar P; Seidler, Rachael D
2013-12-18
Long duration spaceflight (i.e., 22 days or longer) has been associated with changes in sensorimotor systems, resulting in difficulties that astronauts experience with posture control, locomotion, and manual control. The microgravity environment is an important causal factor for spaceflight-induced sensorimotor changes. Whether spaceflight also affects other central nervous system functions such as cognition is as yet largely unknown, but it is important for the health and performance of crewmembers both in- and post-flight. We are therefore conducting a controlled prospective longitudinal study to investigate the effects of spaceflight on the extent, longevity, and neural bases of sensorimotor and cognitive performance changes. Here we present the protocol of our study. This study includes three groups (astronauts, bed rest subjects, ground-based control subjects), for each of which the design is a single group with repeated measures. The effects of spaceflight on the brain will be investigated in astronauts who will be assessed at two time points pre-, three time points during, and four time points following a spaceflight mission of six months. To parse out the effect of microgravity from the overall effects of spaceflight, we investigate the effects of seventy days of head-down-tilt bed rest. Bed rest subjects will be assessed at two time points before, two time points during, and three time points post-bed rest. A third group of ground-based controls will be measured at four time points to assess the reliability of our measures over time. For all participants and at all time points, except in flight, measures of neurocognitive performance, fine motor control, gait, balance, structural MRI (T1, DTI), task fMRI, and functional connectivity MRI will be obtained. In flight, astronauts will complete some of the tasks that they complete pre- and post-flight, including tasks measuring spatial working memory, sensorimotor adaptation, and fine motor performance.
Potential changes over time and associations between cognition, motor behavior, and brain structure and function will be analyzed. This study explores how spaceflight-induced brain changes impact functional performance. This understanding could aid in the design of targeted countermeasures to mitigate the negative effects of long-duration spaceflight.
12 CFR 327.11 - Special assessments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... basis points based on the institution's total assets less Tier 1 capital as reported on the report of... exceed 10 basis points times the institution's assessment base for the second quarter 2009 risk-based... or below zero at the end of a calendar quarter, a special assessment of up to 5 basis points on total...
Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia
2017-07-28
Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for selecting the best covariate to split on from the search for the best split point of the selected covariate. In this study, we compare the random survival forest model to the conditional inference forest (CIF) model using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under five years of age in Uganda and consists of categorical covariates, most of which have more than two levels (many split-points). The second dataset is based on the survival of patients with extensively drug-resistant tuberculosis (XDR TB) and consists mainly of categorical covariates with two levels (few split-points). The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data whose covariates have many split-points, based on the bootstrap cross-validated estimates of the integrated Brier score. However, conditional inference forests perform comparably to random survival forest models in analysing time-to-event data whose covariates have fewer split-points. Although survival forests are promising methods for analysing time-to-event data, it is important to identify the best forest model for the analysis based on the nature of the covariates of the dataset in question.
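The split-point bias mentioned above is easy to quantify: an unordered categorical covariate with k levels admits 2^(k-1) - 1 distinct binary splits, versus n - 1 for a numeric covariate with n distinct values, so exhaustive split search gives many-level covariates more chances to look good by luck. A one-line illustration:

```python
def candidate_splits_categorical(levels: int) -> int:
    """Distinct binary splits of an unordered categorical covariate."""
    return 2 ** (levels - 1) - 1

def candidate_splits_numeric(distinct_values: int) -> int:
    """Distinct binary splits of a numeric covariate."""
    return distinct_values - 1
```

A five-level factor offers 15 candidate splits while a binary factor (as in the XDR TB data) offers just one, which is why CIF's separation of variable selection from split search matters most for the first dataset.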
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Shi, Chaoyang; Chen, Bi Yu; Lam, William H. K.; Li, Qingquan
2017-01-01
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978
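The Dempster-Shafer fusion step can be illustrated with the classic rule of combination over a small frame of discernment (say, coarse travel-time bins). A toy sketch of the rule only, not the paper's full evidence model:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset focal elements to mass. Returns the
    combined mass function after renormalizing away the conflict mass
    assigned to empty intersections.
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully contradict")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

For instance, combining one source that puts 0.6 on bin A and 0.4 on "A or B" with another that splits 0.5/0.5 between A and B concentrates the fused belief on A.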
New fast DCT algorithms based on Loeffler's factorization
NASA Astrophysics Data System (ADS)
Hong, Yoon Mi; Kim, Il-Koo; Lee, Tammy; Cheon, Min-Su; Alshina, Elena; Han, Woo-Jin; Park, Jeong-Hoon
2012-10-01
This paper proposes a new 32-point fast discrete cosine transform (DCT) algorithm based on Loeffler's 16-point transform. Fast integer realizations of 16-point and 32-point transforms are also provided based on the proposed transform. For the recent development of High Efficiency Video Coding (HEVC), simplified quantization and de-quantization processes are proposed. Three different forms of implementation with essentially the same performance, namely matrix multiplication, partial butterfly, and full factorization, can be chosen according to the given platform. In terms of the number of multiplications required, the proposed full factorization is 3-4 times faster than a partial butterfly, and about 10 times faster than direct matrix multiplication.
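For reference, the direct matrix-multiplication baseline that the factorizations are measured against is just the N x N DCT-II matrix applied to the input, costing N² multiplications per transform. A hedged sketch using the unnormalized convention:

```python
import numpy as np

def dct2_matrix(N):
    """Unnormalized DCT-II matrix: C[k, n] = cos(pi * (2n + 1) * k / (2N)).

    Applying this matrix directly needs N*N multiplies per transform;
    Loeffler-style factorizations cut the multiply count dramatically,
    hence the ~10x figure quoted for the full factorization.
    """
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.cos(np.pi * (2 * n + 1) * k / (2 * N))

def dct2(x):
    """Direct (O(N^2)) DCT-II of a 1-D signal, reference only."""
    return dct2_matrix(len(x)) @ np.asarray(x, float)
```

A constant input is a quick sanity check: all energy lands in the DC coefficient (N for an all-ones vector of length N) and every other coefficient is zero.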
NASA Astrophysics Data System (ADS)
Kim, Byung Chan; Park, Seong-Ook
In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated based on the measured values obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels for the general public are averaged over any 6 min period (from 100 kHz to 10 GHz). Therefore, the more points we use, the longer the measurement time becomes. For practical application, it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference between average values over 6 min and over shorter periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min averaging value, the proposed minimum averaging time is 1 min.
Confidence intervals for the first crossing point of two hazard functions.
Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng
2009-12-01
The phenomenon of crossing hazard rates is common in clinical trials with time-to-event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing-hazards alternative. However, relatively few approaches are available in the literature for point or interval estimation of the crossing time point. This paper considers the problem of constructing confidence intervals for the first crossing time point of two hazard functions. After reviewing a recent procedure based on Cox proportional hazards modeling with a Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial datasets.
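Once smoothed hazard estimates are in hand, locating the first crossing reduces to finding the first sign change of their difference on the time grid. A minimal numerical sketch with linear interpolation (the confidence interval construction, which is the paper's actual contribution, is omitted):

```python
import numpy as np

def first_crossing(t, h1, h2):
    """First time at which hazard estimate h1 crosses h2.

    t, h1, h2: arrays of grid times and (smoothed) hazard values.
    Returns the linearly interpolated crossing time, or None if the
    curves never cross on the grid.
    """
    d = np.asarray(h1, float) - np.asarray(h2, float)
    for i in range(1, len(d)):
        if d[i - 1] == 0.0:
            return float(t[i - 1])
        if d[i - 1] * d[i] < 0.0:
            w = d[i - 1] / (d[i - 1] - d[i])  # fraction of the interval
            return float(t[i - 1] + w * (t[i] - t[i - 1]))
    return None
```

For two curves that trade places between grid times 1 and 2 with symmetric slopes, the interpolated crossing falls at 1.5.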
NASA Astrophysics Data System (ADS)
Huynh, Benjamin Q.; Antropova, Natasha; Giger, Maryellen L.
2017-03-01
DCE-MRI datasets have a temporal aspect to them, resulting in multiple regions of interest (ROIs) per subject, based on contrast time points. It is unclear how the different contrast time points vary in terms of usefulness for computer-aided diagnosis tasks in conjunction with deep learning methods. We thus sought to compare the different DCE-MRI contrast time points with regard to how well their extracted features predict response to neoadjuvant chemotherapy within a deep convolutional neural network. Our dataset consisted of 561 ROIs from 64 subjects. Each subject was categorized as a non-responder or responder, determined by recurrence-free survival. First, features were extracted from each ROI using a convolutional neural network (CNN) pre-trained on non-medical images. Linear discriminant analysis classifiers were then trained on varying subsets of these features, based on their contrast time points of origin. Leave-one-out cross validation (by subject) was used to assess performance in the task of estimating probability of response to therapy, with area under the ROC curve (AUC) as the metric. The classifier trained on features from strictly the pre-contrast time point performed the best, with an AUC of 0.85 (SD = 0.033). The remaining classifiers resulted in AUCs ranging from 0.71 (SD = 0.028) to 0.82 (SD = 0.027). Overall, we found the pre-contrast time point to be the most effective at predicting response to therapy and that including additional contrast time points moderately reduces variance.
Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease
Jie, Biao; Liu, Mingxia; Liu, Jun
2016-01-01
Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. In practice, multiple time-points of data are often available in brain imaging applications, which can be used by longitudinal analysis methods to better uncover disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model using the imaging data from multiple time-points, where a group regularization term is first employed to group together the weights for the same brain region across different time-points. Furthermore, to reflect the smooth changes between data from adjacent time-points, we incorporate two smoothness regularization terms into the objective function: a fused smoothness term requiring that the differences between two successive weight vectors from adjacent time-points be small, and an output smoothness term requiring that the differences between the outputs of two successive models from adjacent time-points also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on the ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method achieves improved regression performance and also helps in discovering disease-related biomarkers. PMID:27093313
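The four-ingredient objective described in the abstract (squared loss, an L2,1 group term tying each feature's weights across time, fused weight smoothness, and output smoothness) can be written down directly. An evaluation-only sketch with illustrative names; the paper's optimization algorithm is not reproduced:

```python
import numpy as np

def objective(Xs, ys, W, lam_group, lam_fused, lam_out):
    """Evaluate a temporally-constrained group sparse objective.

    Xs, ys: per-time-point design matrices and targets.
    W: (n_features, n_timepoints) weight matrix, one column per model.
    """
    T = W.shape[1]
    loss = sum(np.sum((ys[t] - Xs[t] @ W[:, t]) ** 2) for t in range(T))
    # L2,1 group penalty: one group per feature, across all time-points
    group = lam_group * np.sum(np.sqrt(np.sum(W ** 2, axis=1)))
    # fused smoothness on successive weight vectors
    fused = lam_fused * sum(np.sum((W[:, t + 1] - W[:, t]) ** 2)
                            for t in range(T - 1))
    # output smoothness on successive model predictions
    out = lam_out * sum(np.sum((Xs[t + 1] @ W[:, t + 1] - Xs[t] @ W[:, t]) ** 2)
                        for t in range(T - 1))
    return loss + group + fused + out
```

With identical weights at every time-point the two smoothness terms vanish, leaving only the loss and the group penalty, which is a handy sanity check.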
76 FR 41454 - Caribbean Fishery Management Council; Scoping Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-14
... based on alternative selected in Action 3(a) and time series of landings data as defined in Action 1(a...., Puerto Rico, St. Thomas/St. John, St. Croix) based on the preferred management reference point time series selected by the Council in Actions 1(a) and 2(a). Alternative 2A. Use a mid-point or equidistant...
Real-time EEG-based detection of fatigue driving danger for accident prediction.
Wang, Hong; Zhang, Chi; Shi, Tianwei; Wang, Fuwang; Ma, Shujun
2015-03-01
This paper proposes a real-time electroencephalogram (EEG)-based method for detecting potential danger during fatigue driving. To determine driver fatigue in real time, wavelet entropy with a sliding window and a pulse coupled neural network (PCNN) were used to process the EEG signals in the visual area (the main information input route). To detect the fatigue danger, the neural mechanism of driver fatigue was analyzed. Functional brain networks were employed to track the impact of fatigue on the brain's processing capacity. The results show that the overall functional connectivity of the subjects is weakened after prolonged driving tasks; this regularity is summarized as the fatigue convergence phenomenon. Based on the fatigue convergence phenomenon, we combined the input and global synchronizations of the brain to calculate the residual information-processing capacity of the brain and obtain the dangerous points in real time. Finally, the danger detection system based on this neural mechanism was validated using accident EEG. The time distributions of the danger points output by the system agree well with those of the real accident points.
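A sliding-window entropy feature of the kind used for the fatigue indicator can be sketched with an FFT-based spectral entropy. The paper uses wavelet entropy feeding a PCNN; this is a simplified stand-in:

```python
import numpy as np

def sliding_entropy(signal, win, step):
    """Sliding-window spectral entropy of one EEG channel.

    Shannon entropy of the normalized FFT power spectrum in each window.
    A concentrated (ordered) spectrum gives low entropy; broadband
    activity gives high entropy.
    """
    x = np.asarray(signal, float)
    out = []
    for s in range(0, len(x) - win + 1, step):
        p = np.abs(np.fft.rfft(x[s:s + win])) ** 2
        p = p / p.sum()
        p = p[p > 0]
        out.append(float(-(p * np.log(p)).sum()))
    return out
```

A pure sinusoid, whose power sits in one frequency bin, yields far lower windowed entropy than broadband noise of the same length.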
Liu, Wanli
2017-03-08
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for its applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between LiDAR-IMU; and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First of all, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate the time delay error can be accurately calibrated.
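The ICP half of the pipeline is standard enough to sketch: alternate nearest-neighbour correspondence with a closed-form SVD (Kabsch) rigid update. A minimal sketch only; the iterated sigma point Kalman filtering of the delay parameter is not shown:

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP aligning src to dst (2D or 3D).

    Returns the accumulated rotation R and translation t such that
    src @ R.T + t approximates dst. Assumes reasonable overlap and a
    good initial guess, as basic ICP does.
    """
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    dim = src.shape[1]
    R_total, t_total = np.eye(dim), np.zeros(dim)
    for _ in range(iters):
        # nearest neighbour in dst for each src point
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        # closed-form rigid update (Kabsch)
        mu_s, mu_d = src.mean(0), nn.mean(0)
        H = (src - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

On a point set offset by a pure translation with well-separated points, the first iteration already finds the exact correspondences and recovers the shift.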
A novel method for vaginal cylinder treatment planning: a seamless transition to 3D brachytherapy
Wu, Vincent; Wang, Zhou; Patil, Sachin
2012-01-01
Purpose: Standard treatment plan libraries are often used to ensure a quick turn-around time for vaginal cylinder treatments. Recently there has been increasing interest in transitioning from conventional 2D radiograph-based brachytherapy to 3D image-based brachytherapy, which has resulted in a substantial increase in treatment planning time and a decrease in patient throughput. We describe a novel technique that significantly reduces the treatment planning time for CT-based vaginal cylinder brachytherapy. Material and methods: The Oncentra MasterPlan TPS allows multiple sets of data points to be classified as applicator points, which is harnessed in this method. The method relies on two hard anchor points, namely the first dwell position in a catheter and an applicator-configuration-specific dwell position as the plan origin, plus a soft anchor point beyond the last active dwell position to define the axis of the catheter. The spatial locations of various data points on the applicator's surface and at 5 mm depth are stored in an Excel file that can easily be transferred into a patient CT data set using window operations and then used for treatment planning. The remainder of the treatment planning process remains unaffected. Results: The treatment plans generated on the Oncentra MasterPlan TPS using this novel method yielded results comparable to those generated on the Plato TPS using a standard treatment plan library in terms of treatment times, dwell weights and dwell times for a given optimization method and normalization points. Less than 2% difference was noticed between the treatment times generated by the two systems. Using the above method, the entire planning process, including CT import, catheter reconstruction, multiple data point definition, optimization and dose prescription, can be completed in ~5-10 minutes. Conclusion: The proposed method allows a smooth and efficient transition to 3D CT-based vaginal cylinder brachytherapy planning. PMID:23349650
Lang, Paul Z; Thulasi, Praneetha; Khandelwal, Sumitra S; Hafezi, Farhad; Randleman, J Bradley
2018-05-02
To evaluate the correlation between anterior axial curvature difference maps following corneal cross-linking (CXL) for progressive keratoconus obtained from Scheimpflug-based tomography and Placido-based topography. DESIGN: Between-device reliability analysis of randomized clinical trial data. METHODS: Corneal imaging was collected at a single-center institution pre-operatively and at 3, 6, and 12 months post-operatively using Scheimpflug-based tomography (Pentacam, Oculus Inc., Lynnwood, WA) and scanning-slit, Placido-based topography (Orbscan II, Bausch & Lomb, Rochester, NY) in patients with progressive keratoconus receiving standard protocol CXL (3 mW/cm2 for 30 minutes). Regularization index (RI), absolute maximum keratometry (K Max), and change in K Max (ΔK Max) were compared between the two devices at each time point. RESULTS: 51 eyes from 36 patients were evaluated at all time points. K Max values were significantly different at all time points [56.01±5.3 D Scheimpflug vs. 55.04±5.1 D scanning-slit pre-operatively (p=0.003); 54.58±5.3 D Scheimpflug vs. 53.12±4.9 D scanning-slit at 12 months (p<0.0001)] but strongly correlated between devices (r=0.90-0.93) at all time points. The devices were not significantly different at any time point for either ΔK Max or RI but were poorly correlated at all time points (r=0.41-0.53 for ΔK Max, r=0.29-0.48 for RI). At 12 months, the 95% limits of agreement were 7.51 D for absolute K Max, 8.61 D for ΔK Max, and 19.86 D for RI. CONCLUSIONS: Measurements using Scheimpflug and scanning-slit Placido-based technology are correlated but not interchangeable. Both devices appear reasonable for separately monitoring the cornea's response to CXL; however, caution should be used when comparing results obtained with one measuring technology to the other. Copyright © 2018 Elsevier Inc. All rights reserved.
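The 95% limits of agreement quoted above are the usual Bland-Altman quantities: mean paired difference plus or minus 1.96 times its standard deviation. A small sketch (function name illustrative):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired readings.

    a, b: paired measurements of the same eyes on two devices (e.g.
    K Max in dioptres). Returns (bias, lower, upper); the interval
    width is the figure reported per metric in the abstract.
    """
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return float(bias), float(bias - 1.96 * sd), float(bias + 1.96 * sd)
```

When every paired difference is identical the limits collapse onto the bias, which makes a convenient degenerate-case check.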
A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform
Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.
2013-01-01
Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real time or near real time performance if applied to critical clinical applications like image assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduced a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, which optimize the memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speed up over its sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by its design. Therefore the parallel algorithm can be extended to other computing platforms, as well as other point matching related applications. PMID:24308014
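The non-regular partition step is essentially K-means with K set to the core count, producing spatially coherent landmark groups for per-core matching. A plain Lloyd-iteration sketch; the Cell/B.E. dispatch and memory tuning are out of scope here:

```python
import numpy as np

def partition_landmarks(points, n_cores, iters=10, seed=0):
    """Partition landmarks into n_cores spatially coherent work groups.

    Lloyd's K-means with K = number of available cores; each returned
    index group would be matched on its own core. Sketch only, no
    multicore dispatch.
    """
    pts = np.asarray(points, float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), n_cores, replace=False)]
    for _ in range(iters):
        d2 = ((pts[:, None] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for k in range(n_cores):
            if np.any(labels == k):
                centers[k] = pts[labels == k].mean(0)
    return [np.where(labels == k)[0] for k in range(n_cores)]
```

On two well-separated landmark clusters and two cores, the partition recovers the clusters regardless of which two points seed the centers.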
Chhabra, Anmol; Quinn, Andrea; Ries, Amanda
2018-01-01
Accurate history collection is integral to medication reconciliation. Studies support pharmacy involvement in the process, but assessment of global time spent is limited. The authors hypothesized the location of a medication-focused interview would impact time spent. The objective was to compare time spent by pharmacists and nurses based on the location of a medication-focused interview. Time spent by the interviewing pharmacist, admitting nurse, and centralized pharmacist verifying admission orders was collected. Patient groups were based on whether the interview was conducted in the emergency department (ED) or medical floor. The primary end point was a composite of the 3 time points. Secondary end points were individual time components and number and types of transcription discrepancies identified during medical floor interviews. Pharmacists and nurses spent an average of ten fewer minutes per ED patient versus a medical floor patient ( P = .028). Secondary end points were not statistically significant. Transcription discrepancies were identified at a rate of 1 in 4 medications. Post hoc analysis revealed the time spent by pharmacists and nurses was 2.4 minutes shorter per medication when interviewed in the ED ( P < .001). The primary outcome was statistically and clinically significant. Limitations included inability to blind and lack of cost-saving analysis. Pharmacist involvement in ED medication reconciliation leads to time savings during the admission process.
Feature-based attention to unconscious shapes and colors.
Schmidt, Filipp; Schmidt, Thomas
2010-08-01
Two experiments employed feature-based attention to modulate the impact of completely masked primes on subsequent pointing responses. Participants processed a color cue to select a pair of possible pointing targets out of multiple targets on the basis of their color, and then pointed to the one of those two targets with a prespecified shape. All target pairs were preceded by prime pairs triggering either the correct or the opposite response. The time interval between cue and primes was varied to modulate the time course of feature-based attentional selection. In a second experiment, the roles of color and shape were switched. Pointing trajectories showed large priming effects that were amplified by feature-based attention, indicating that attention modulated the earliest phases of motor output. Priming effects as well as their attentional modulation occurred even though participants remained unable to identify the primes, indicating distinct processes underlying visual awareness, attention, and response control.
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2017-05-09
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
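The coherent-state machinery starts from the Gaussian overlap matrix between states centred on the data points; the dynamics then evolve in this non-orthogonal basis. A sketch of that setup only (the Hamiltonian time evolution itself is beyond a few lines):

```python
import numpy as np

def overlap_matrix(X, sigma):
    """Overlap of Gaussian coherent states centred on the data points.

    N[i, j] = exp(-||x_i - x_j||^2 / (4 * sigma^2)), the standard
    overlap of two Gaussians of width sigma. Points closer than a few
    sigma have appreciable overlap and hence interact strongly under
    the time evolution.
    """
    X = np.asarray(X, float)
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    return np.exp(-d2 / (4.0 * sigma ** 2))
```

Two 1-D points a distance 2 apart with sigma = 1 overlap by exp(-1), while each state overlaps itself with value 1.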
Autoregressive-model-based missing value estimation for DNA microarray time series data.
Choong, Miew Keen; Charbit, Maurice; Yan, Hong
2009-01-01
Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that the proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
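A minimal sketch of the autoregressive idea behind such imputation (not the actual ARLSimpute algorithm, which also exploits local similarity structure): fit an AR(1) coefficient from consecutive observed pairs, then forecast missing entries from the previous value.

```python
import numpy as np

def ar1_impute(series):
    """Fill NaNs in a 1-D series with an AR(1) prediction from the
    previous value (toy stand-in for an AR-model-based imputer)."""
    x = np.asarray(series, dtype=float)
    obs = ~np.isnan(x)
    # least-squares AR(1) coefficient from consecutive observed pairs
    pairs = [(x[i - 1], x[i]) for i in range(1, len(x))
             if obs[i - 1] and obs[i]]
    prev = np.array([p for p, _ in pairs])
    curr = np.array([c for _, c in pairs])
    phi = float(prev @ curr / (prev @ prev))
    filled = x.copy()
    for i in range(1, len(x)):
        if np.isnan(filled[i]):
            filled[i] = phi * filled[i - 1]   # one-step AR(1) forecast
    return filled, phi

series = [1.0, 0.5, np.nan, 0.125, 0.0625]   # roughly x[t] = 0.5 * x[t-1]
filled, phi = ar1_impute(series)
```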
Liu, Wanli
2017-01-01
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their combined applications. However, the correspondences between LiDAR and IMU measurements are usually unknown and thus cannot be used directly for time delay calibration. To solve this problem, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and the iterated sigma point Kalman filter (ISPKF), which combines the advantages of both: ICP precisely determines the unknown transformation between the LiDAR and IMU frames, and the ISPKF optimally estimates the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and the time delay error model of the LiDAR and IMU are established. Third, the ICP and ISPKF procedure for LiDAR-IMU time delay calibration is presented. Experimental results validate the proposed method and demonstrate that the time delay error can be accurately calibrated. PMID:28282897
Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.
Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman
2017-10-18
Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, and dangerous, and the quality and quantity of the data are sometimes unable to meet the requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle (UAV) image-based point clouds, although proven, remains relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology has been developed for automatically deriving change displacement rates in the horizontal direction based on comparisons between landslide scarps extracted from multiple time periods. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.
NASA Astrophysics Data System (ADS)
Nakatsuji, Noriaki; Matsushima, Kyoji
2017-03-01
Full-parallax high-definition CGHs composed of more than a billion pixels have so far been created only by the polygon-based method because of its high performance. However, GPUs now allow CGHs to be generated much faster from point clouds. In this paper, we measure the computation time of object fields for full-parallax high-definition CGHs, each composed of 4 billion pixels and reconstructing the same scene, using the point-cloud method on a GPU and the polygon-based method on a CPU. In addition, we compare the optical and simulated reconstructions of the CGHs created by these techniques to verify image quality.
Garcia-Vicente, Ana María; Molina, David; Pérez-Beteta, Julián; Amo-Salas, Mariano; Martínez-González, Alicia; Bueno, Gloria; Tello-Galán, María Jesús; Soriano-Castrejón, Ángel
2017-12-01
To study the influence of dual time point 18F-FDG PET/CT on textural features and SUV-based variables, and the relations among them. Fifty-six patients with locally advanced breast cancer (LABC) were prospectively included. All of them underwent a standard 18F-FDG PET/CT (PET-1) and a delayed acquisition (PET-2). After segmentation, SUV variables (SUVmax, SUVmean, and SUVpeak), metabolic tumor volume (MTV), and total lesion glycolysis (TLG) were obtained. Eighteen three-dimensional (3D) textural measures were computed, including run-length matrix (RLM) features, co-occurrence matrix (CM) features, and energies. Differences between all PET-derived variables obtained in PET-1 and PET-2 were studied. Significant differences were found between the SUV-based parameters and MTV obtained at the two time points, with higher values of the SUV-based variables and lower MTV in PET-2 with respect to PET-1. Among the textural parameters obtained in the dual time point acquisition, significant differences were found for the short run emphasis, low gray-level run emphasis, short run high gray-level emphasis, run percentage, long run emphasis, gray-level non-uniformity, homogeneity, and dissimilarity. Textural variables showed relations with MTV and TLG. Significant differences in textural features were found in dual time point 18F-FDG PET/CT. Thus, a dynamic behavior of metabolic characteristics should be expected, with higher heterogeneity in the delayed PET acquisition compared with the standard PET. A greater heterogeneity was found in bigger tumors.
An automated model-based aim point distribution system for solar towers
NASA Astrophysics Data System (ADS)
Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen
2016-05-01
Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
Floating-to-Fixed-Point Conversion for Digital Signal Processors
NASA Astrophysics Data System (ADS)
Menard, Daniel; Chillet, Daniel; Sentieys, Olivier
2006-12-01
Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
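At the heart of any floating-to-fixed-point conversion is quantization to a Qm.n format with saturation on overflow; a minimal sketch follows (the word length and format here are chosen for illustration, not taken from the paper's methodology):

```python
def to_fixed(x, frac_bits, word_bits=16):
    """Quantize a float to a signed Qm.n fixed-point integer,
    saturating at the representable range."""
    scaled = round(x * (1 << frac_bits))
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, scaled))   # saturate on overflow

def to_float(q, frac_bits):
    """Convert a fixed-point integer back to a float."""
    return q / (1 << frac_bits)

# 16-bit Q1.14: range about [-2, 2), resolution 2**-14
q = to_fixed(0.70710678, 14)              # sqrt(2)/2
err = abs(to_float(q, 14) - 0.70710678)   # bounded by half an LSB
```

The accuracy constraint in the paper corresponds to bounding quantization errors like `err` across the whole data path.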
An adaptive clustering algorithm for image matching based on corner feature
NASA Astrophysics Data System (ADS)
Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song
2018-04-01
Traditional image matching algorithms cannot balance real-time performance and accuracy well. To solve this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method adaptively clusters matching point pairs based on the similarity of their displacement vectors. Harris corner detection is carried out first to extract the feature points of the reference image and the perceived image, and the feature points of the two images are initially matched with a Normalized Cross Correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to eliminate ineffective operations and improve matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is applied to the matching points after clustering. The experimental results show that the proposed algorithm effectively eliminates most wrong matching points while retaining the correct ones, improves the accuracy of RANSAC matching, and at the same time reduces the computational load of the whole matching process.
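The NCC score used for the initial matching step can be sketched as follows (a generic formulation, not the authors' exact implementation); NCC is invariant to affine changes in intensity, which is why it is a robust first-pass matcher:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-size patches;
    returns a value in [-1, 1]."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

a = np.array([[1., 2.], [3., 4.]])
same = ncc(a, a)            # identical patches -> 1.0
gain = ncc(a, 2 * a + 5)    # invariant to gain and offset -> 1.0
flip = ncc(a, -a)           # contrast-inverted patch -> -1.0
```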
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
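The segmentation machinery described above can be sketched with one possible measure function (the union of the item sets in a segment) and a small dynamic program; the function names and the example series are illustrative, and the paper's algorithms compute the segment differences far more efficiently:

```python
def seg_diff(sets, i, j):
    """Segment difference when the segment's item set is the union of
    the item sets at time points i..j (one possible measure function)."""
    seg = set().union(*sets[i:j + 1])
    return sum(len(seg ^ s) for s in sets[i:j + 1])

def optimal_segmentation(sets, k):
    """Dynamic program: split the series into k segments minimizing
    the total segment difference; returns (cost, segment bounds)."""
    n = len(sets)
    INF = float("inf")
    cost = [[seg_diff(sets, i, j) for j in range(n)] for i in range(n)]
    best = [[INF] * (k + 1) for _ in range(n + 1)]
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0
    for t in range(1, n + 1):          # prefix length
        for m in range(1, k + 1):      # number of segments used
            for s in range(m - 1, t):  # previous cut position
                c = best[s][m - 1] + cost[s][t - 1]
                if c < best[t][m]:
                    best[t][m], cut[t][m] = c, s
    bounds, t, m = [], n, k            # recover segment boundaries
    while m:
        s = cut[t][m]
        bounds.append((s, t - 1))
        t, m = s, m - 1
    return best[n][k], bounds[::-1]

series = [{"a"}, {"a"}, {"a", "b"}, {"c"}, {"c", "d"}]
total, segments = optimal_segmentation(series, 2)
```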
TeleHealth networks: Instant messaging and point-to-point communication over the internet
NASA Astrophysics Data System (ADS)
Sachpazidis, Ilias; Ohl, Roland; Kontaxakis, George; Sakas, Georgios
2006-12-01
This paper explores the advantages and disadvantages of a medical network based on point-to-point communication and a medical network based on the Jabber instant messaging protocol. Instant messaging might be, for many people, a convenient way of chatting over the Internet. We attempt to illustrate how an instant messaging protocol could best serve medical services and provide great flexibility to the parties involved. Additionally, the directory services and presence status offered by the Jabber protocol make it very attractive for medical applications that need both real-time and store-and-forward communication. Furthermore, doctors connected to the Internet via high-speed networks could benefit by saving time thanks to accelerated data transmission over Jabber.
Marked point process for modelling seismic activity (case study in Sumatra and Java)
NASA Astrophysics Data System (ADS)
Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.
2018-05-01
Earthquakes are natural phenomena that are random and irregular in space and time. The occurrence of an earthquake at a given location is still difficult to forecast, so earthquake forecasting methodology continues to be developed from both the seismological and the stochastic points of view. To explain such random natural phenomena in space and time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a time sequence, whereas a spatial point process describes the locations of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m), where x is the location of a point and m is the mark attached to it. This study aims to model a marked point process indexed by time on earthquake data from Sumatra Island and Java Island. This model can be used to analyse seismic activity through its intensity function by conditioning on the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with a magnitude threshold of 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The parameter estimates show that seismic activity on Sumatra Island is greater than on Java Island.
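A conditional intensity of the kind used to analyse such seismicity can be sketched with a Hawkes-type self-exciting form in which marks (magnitudes) scale the excitation; the exponential kernel and all parameter values below are illustrative assumptions, not the paper's fitted model:

```python
import math

def intensity(t, history, mu=0.1, alpha=0.5, beta=1.0, m0=5.0):
    """Hawkes-type conditional intensity with exponential decay; each
    past event (t_i, m_i) excites future activity, larger marks more."""
    return mu + sum(alpha * math.exp(m - m0) * math.exp(-beta * (t - ti))
                    for ti, m in history if ti < t)

# history of (occurrence time, magnitude) pairs
history = [(0.0, 5.0), (1.0, 6.0)]
lam = intensity(2.0, history)   # intensity shortly after both events
```

Maximum likelihood estimation, as in the paper, would maximize the log-likelihood built from this intensity over the observed catalogue.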
NASA Astrophysics Data System (ADS)
Pahlavani, P.; Gholami, A.; Azimi, S.
2017-09-01
This paper presents an indoor positioning technique based on a multi-layer feed-forward (MLFF) artificial neural network (ANN). Most indoor received signal strength (RSS)-based WLAN positioning systems use the fingerprinting technique, which can be divided into two phases: the offline (calibration) phase and the online (estimation) phase. In this paper, RSSs were collected at all reference points in four directions and at two times of day (morning and evening). Hence, RSS readings were sampled at a regular time interval and a specific orientation at each reference point. The proposed ANN-based model used the Levenberg-Marquardt algorithm for learning and fitting the network to the training data. The RSS readings at all reference points, together with the known positions of those reference points, were used in the training phase of the proposed MLFF neural network. Eventually, the average positioning error for this network, using 30% of the data for checking and validation, was approximately 2.20 meters.
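For context, a common fingerprinting baseline against which such MLFF networks are compared is weighted k-nearest-neighbour matching of RSS vectors; this sketch shows only the online estimation phase, with made-up fingerprints and positions:

```python
import numpy as np

def knn_locate(rss, fingerprints, positions, k=2):
    """Weighted k-NN position estimate from an RSS fingerprint database
    (a common baseline; the paper itself trains an MLFF network)."""
    d = np.linalg.norm(fingerprints - rss, axis=1)   # distance in RSS space
    idx = np.argsort(d)[:k]                          # k closest fingerprints
    w = 1.0 / (d[idx] + 1e-9)                        # inverse-distance weights
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

# toy database: RSS from two access points at three reference points
fps = np.array([[-40., -70.], [-70., -40.], [-55., -55.]])
pos = np.array([[0., 0.], [10., 0.], [5., 5.]])
est = knn_locate(np.array([-42., -68.]), fps, pos, k=2)
```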
Analyzing survival curves at a fixed point in time for paired and clustered right-censored data
Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De
2018-01-01
In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have a clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach would need to be modified. In this paper, we extend the statistics to accommodate the possible within-paired correlation and within-clustered correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
Riemannian multi-manifold modeling and clustering in brain networks
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Salsabilian, Shiva; Wack, David S.; Muldoon, Sarah F.; Baidoo-Williams, Henry E.; Vettel, Jean M.; Cieslak, Matthew; Grafton, Scott T.
2017-08-01
This paper introduces Riemannian multi-manifold modeling in the context of brain-network analytics: Brain-network time-series yield features which are modeled as points lying in or close to a union of a finite number of submanifolds within a known Riemannian manifold. Distinguishing disparate time series thus amounts to clustering multiple Riemannian submanifolds. To this end, two feature-generation schemes for brain-network time series are put forth. The first one is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of appropriately defined observability matrices, to points in the Grassmann manifold. The second one utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Based on recently developed research on clustering Riemannian submanifolds, an algorithm is provided for distinguishing time series based on their Riemannian-geometry properties. Numerical tests on time series, synthetically generated from real brain-network structural connectivity matrices, reveal that the proposed scheme outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when the a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem of the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
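The notion of a knee point on a sorted curve can be illustrated with the standard maximum-distance-to-chord definition, which is a simpler criterion than the paper's probabilistic cascading search:

```python
import numpy as np

def knee_point(values):
    """Index of the knee of a descending sorted curve: the point with
    maximum perpendicular distance to the chord joining the endpoints."""
    y = np.sort(np.asarray(values, dtype=float))[::-1]
    n = len(y)
    x = np.arange(n, dtype=float)
    p0, p1 = np.array([0.0, y[0]]), np.array([n - 1.0, y[-1]])
    chord = p1 - p0
    chord /= np.linalg.norm(chord)
    rel = np.stack([x, y], axis=1) - p0
    # 2-D cross product gives the perpendicular distance to the chord
    dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])
    return int(np.argmax(dist))

# an L-shaped curve: sharp drop then flat tail -> knee near the bend
curve = [100, 50, 25, 12, 6, 5, 4, 3, 2, 1]
k = knee_point(curve)
```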
Stein, Marjorie W; Frank, Susan J; Roberts, Jeffrey H; Finkelstein, Malka; Heo, Moonseong
2016-05-01
The aim of this study was to determine whether group-based or didactic teaching is more effective to teach ACR Appropriateness Criteria to medical students. An identical pretest, posttest, and delayed multiple-choice test was used to evaluate the efficacy of the two teaching methods. Descriptive statistics comparing test scores were obtained. On the posttest, the didactic group gained 12.5 points (P < .0001), and the group-based learning students gained 16.3 points (P < .0001). On the delayed test, the didactic group gained 14.4 points (P < .0001), and the group-based learning students gained 11.8 points (P < .001). The gains in scores on both tests were statistically significant for both groups. However, the differences in scores were not statistically significant comparing the two educational methods. Compared with didactic lectures, group-based learning is more enjoyable, time efficient, and equally efficacious. The choice of educational method can be individualized for each institution on the basis of group size, time constraints, and faculty availability. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
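The non-dominated sorting at the heart of SOP's center selection can be sketched as a plain Pareto-front computation over the two objectives (function value to minimize, distance to previously evaluated points to maximize); the candidate values below are illustrative:

```python
def pareto_front(points):
    """Indices of non-dominated points when minimizing the first
    objective and maximizing the second."""
    front = []
    for i, (fi, di) in enumerate(points):
        dominated = any(fj <= fi and dj >= di and (fj, dj) != (fi, di)
                        for j, (fj, dj) in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

# (function value to minimize, distance to evaluated points to maximize)
cands = [(1.0, 0.2), (2.0, 0.9), (1.5, 0.5), (3.0, 0.1)]
front = pareto_front(cands)
```

SOP would then pick its P centers from successive fronts of this ordering before perturbing them in parallel.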
Selecting the most appropriate time points to profile in high-throughput studies
Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv
2017-01-01
Biological systems are increasingly being studied by high-throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation of the expression values at the non-selected points. Further, even though the selection is based only on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high-throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
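The flavor of the selection problem can be sketched with a greedy heuristic that grows the kept set of time points so as to minimize the linear-interpolation reconstruction error of densely sampled profiles (TPS itself solves the combinatorial problem differently; the data here are synthetic):

```python
import numpy as np

def reconstruction_error(times, values, keep):
    """Mean squared error when non-kept points are linearly interpolated
    from the kept ones."""
    kept = sorted(keep)
    err = 0.0
    for row in values:
        interp = np.interp(times, [times[i] for i in kept], row[kept])
        err += float(np.mean((interp - row) ** 2))
    return err / len(values)

def greedy_select(times, values, k):
    """Greedily grow the kept set (endpoints always kept), each round
    adding the point that most reduces reconstruction error."""
    keep = {0, len(times) - 1}
    while len(keep) < k:
        best = min((i for i in range(len(times)) if i not in keep),
                   key=lambda i: reconstruction_error(times, values,
                                                      keep | {i}))
        keep.add(best)
    return sorted(keep)

t = np.linspace(0, 1, 9)                      # dense pilot sampling
profiles = np.vstack([t ** 2, np.sin(3 * t)])  # two synthetic "genes"
chosen = greedy_select(t, profiles, 4)
```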
Research of real-time communication software
NASA Astrophysics Data System (ADS)
Li, Maotang; Guo, Jingbo; Liu, Yuzhong; Li, Jiahong
2003-11-01
Real-time communication plays an increasingly important role in our work, in our lives, and in ocean monitoring. With the rapid progress of computer and communication technology, and the miniaturization of communication systems, adaptable and reliable real-time communication software is needed in ocean monitoring systems. This paper describes research on real-time communication software based on a point-to-point satellite intercommunication system. An object-oriented design method is adopted, and the software can transmit and receive video, audio, and engineering data over a satellite channel. Several software modules were developed to realize point-to-point satellite intercommunication in the ocean monitoring system. The real-time communication software has three advantages. First, it increases the reliability of the point-to-point satellite intercommunication system. Second, several optional parameters are built in, which greatly increases the flexibility of the system. Third, some hardware is replaced by software, which not only decreases the cost of the system and promotes the miniaturization of the communication system, but also increases the system's agility.
NASA Astrophysics Data System (ADS)
Javadi, Maryam; Shahrabi, Jamal
2014-03-01
The problems of facility location and the allocation of demand points to facilities are crucial research issues in spatial data analysis and urban planning. It is very important for organizations and governments to locate their resources and facilities optimally and to manage resources efficiently, ensuring that all demand points are covered and all needs are met. Most recent studies that solve facility location problems by spatial clustering have used the Euclidean distance between two points as the dissimilarity function. Natural obstacles, such as mountains and rivers, can have drastic impacts on the distance that must be traveled between two geographical locations. When calculating the distance between supply chain entities (including facilities and demand points), it is necessary to take such obstacles into account to obtain better and more realistic location-allocation results. In this article, new models are presented for locating urban facilities while taking geographical obstacles into account. In these models, three new distance functions are proposed. The first function is based on shortest-path analysis in a linear network and is called the SPD function. The other two functions, PD and P2D, are based on algorithms for robot geometry and route-based robot navigation in the presence of obstacles. The models were implemented in ArcGIS Desktop 9.2 using the Visual Basic programming language and were evaluated using synthetic and real data sets. Overall performance was evaluated based on the sum of distances from demand points to their corresponding facilities. Because the distances between demand points and facilities become more realistic with the proposed functions, the results indicate the desired quality of the proposed models in terms of allocating points to centers and logistics cost.
The results show promising improvements in allocation, logistics costs, and response time. It can also be inferred from this study that the P2D-based model and the SPD-based model yield similar results in terms of facility location and demand allocation, while the P2D-based model shows better execution time than the SPD-based model. Considering logistics costs, facility location, and response time, the P2D-based model is an appropriate choice for the urban facility location problem in the presence of geographical obstacles.
A wavefront orientation method for precise numerical determination of tsunami travel time
NASA Astrophysics Data System (ADS)
Fine, I. V.; Thomson, R. E.
2013-04-01
We present a highly accurate and computationally efficient method (herein, the "wavefront orientation method") for determining the travel time of oceanic tsunamis. Based on Huygens principle, the method uses an eight-point grid-point pattern and the most recent information on the orientation of the advancing wave front to determine the time for a tsunami to travel to a specific oceanic location. The method is shown to provide improved accuracy and reduced anisotropy compared with the conventional multiple grid-point method presently in widespread use.
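A simple stand-in for grid-based travel-time computation is Dijkstra's algorithm over the eight neighbouring grid points; this illustrates the eight-point neighbourhood, not the wavefront-orientation correction itself, and the speed field below is an assumption:

```python
import heapq
import math

def travel_time(speed, src):
    """Shortest travel time from src to every grid cell, stepping to the
    eight neighbouring points (a simple stand-in for wavefront schemes)."""
    n, m = len(speed), len(speed[0])
    t = [[math.inf] * m for _ in range(n)]
    t[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > t[i][j]:
            continue  # stale queue entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < n and 0 <= nj < m:
                    step = math.hypot(di, dj)
                    # local travel time = distance / average wave speed
                    nd = d + step * 2.0 / (speed[i][j] + speed[ni][nj])
                    if nd < t[ni][nj]:
                        t[ni][nj] = nd
                        heapq.heappush(pq, (nd, (ni, nj)))
    return t

# uniform speed 1: travel time equals the 8-neighbour chamfer distance
t = travel_time([[1.0] * 4 for _ in range(4)], (0, 0))
```

The anisotropy the paper corrects for is visible here: diagonal propagation is restricted to multiples of 45 degrees, which the wavefront-orientation method mitigates.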
A Time-Domain CMOS Oscillator-Based Thermostat with Digital Set-Point Programming
Chen, Chun-Chi; Lin, Shih-Hao
2013-01-01
This paper presents a time-domain CMOS oscillator-based thermostat with digital set-point programming [without a digital-to-analog converter (DAC) or external resistor] to achieve on-chip thermal management of modern VLSI systems. A time-domain delay-line-based thermostat with multiplexers (MUXs) was used to substantially reduce the power consumption and chip size, and can benefit from the performance enhancement due to the scaling down of fabrication processes. For further cost reduction and accuracy enhancement, this paper proposes a thermostat using two oscillators that are suitable for time-domain curvature compensation instead of longer linear delay lines. The final time comparison was achieved using a time comparator with a built-in custom hysteresis to generate the corresponding temperature alarm and control. The chip size of the circuit was reduced to 0.12 mm² in a 0.35-μm TSMC CMOS process. The thermostat operates from 0 to 90 °C, and achieved a fine resolution better than 0.05 °C and an improved inaccuracy of ± 0.6 °C after two-point calibration for eight packaged chips. The power consumption was 30 μW at a sample rate of 10 samples/s. PMID:23385403
A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...
Local Stability of AIDS Epidemic Model Through Treatment and Vertical Transmission with Time Delay
NASA Astrophysics Data System (ADS)
Novi W, Cascarilla; Lestari, Dwi
2016-02-01
This study examines the stability of an AIDS epidemic model that incorporates treatment and vertical transmission. Because a person infected with HIV takes time to develop AIDS, the progression from infection to AIDS is represented by a delay, so the resulting model is one with time delay. The model is a system of nonlinear differential equations with time delay, SIPTA (susceptible-infected-pre-AIDS-treatment-AIDS). Analysis of the SIPTA model yields the disease-free equilibrium point and the endemic equilibrium point. The disease-free equilibrium point, with and without time delay, is locally asymptotically stable if the basic reproduction number is less than one. The endemic equilibrium point is locally asymptotically stable if the time delay is less than its critical value, unstable if the delay exceeds the critical value, and a bifurcation occurs when the delay equals the critical value.
Turning Points: Priorities for Teacher Education in a Democracy
ERIC Educational Resources Information Center
Romano, Rosalie M.
2009-01-01
Every generation has its moment, some turning point that will mark its place in the historical record. Such points provide the direction of our history and our future. Turning points are, characteristically, times of turmoil based on a fundamental change in models or events--what Thomas Kuhn called a "paradigm shift." In terms of a democratic…
Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp
2011-08-18
Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
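The DTW-S extends classic dynamic time warping with significance estimation. The core DTW recursion it builds on can be sketched as follows; this is a textbook implementation, not the DTW-S significance machinery, and `dtw` and its cost function are illustrative names.

```python
def dtw(x, y, dist=lambda a, b: abs(a - b)):
    """Dynamic time warping cost between two series (classic O(n*m) recursion).
    Returns the total alignment cost; smaller means more similar timing/shape."""
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(x[i - 1], y[j - 1])
            D[i][j] = c + min(D[i - 1][j],      # stretch x
                              D[i][j - 1],      # stretch y
                              D[i - 1][j - 1])  # match both
    return D[n][m]
```

Because one point of `x` may align to several points of `y` (and vice versa), a series and a locally time-stretched copy of it align with zero cost, which is what makes DTW suitable for estimating time shifts between expression series.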
ERIC Educational Resources Information Center
Klotsche, Jens; Gloster, Andrew T.
2012-01-01
Longitudinal studies are increasingly common in psychological research. Characterized by repeated measurements, longitudinal designs aim to observe phenomena that change over time. One important question involves identification of the exact point in time when the observed phenomena begin to meaningfully change above and beyond baseline…
Neural network-based nonlinear model predictive control vs. linear quadratic Gaussian control
Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.
1997-01-01
One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state-space-based linear quadratic Gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and that they can be more reliable and easier to implement in complex, multivariable plants.
Ballari, Rajashekhar V; Martin, Asha; Gowda, Lalitha R
2013-01-01
Brinjal is an important vegetable crop. Major crop loss of brinjal is due to insect attack. Insect-resistant EE-1 brinjal has been developed and is awaiting approval for commercial release. Consumer health concerns and implementation of international labelling legislation demand reliable analytical detection methods for genetically modified (GM) varieties. End-point and real-time polymerase chain reaction (PCR) methods were used to detect EE-1 brinjal. In end-point PCR, primer pairs specific to 35S CaMV promoter, NOS terminator and nptII gene common to other GM crops were used. Based on the revealed 3' transgene integration sequence, primers specific for the event EE-1 brinjal were designed. These primers were used for end-point single, multiplex and SYBR-based real-time PCR. End-point single PCR showed that the designed primers were highly specific to event EE-1 with a sensitivity of 20 pg of genomic DNA, corresponding to 20 copies of haploid EE-1 brinjal genomic DNA. The limits of detection and quantification for SYBR-based real-time PCR assay were 10 and 100 copies respectively. The prior development of detection methods for this important vegetable crop will facilitate compliance with any forthcoming labelling regulations. Copyright © 2012 Society of Chemical Industry.
Truccolo, Wilson
2017-01-01
This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete-time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics (“order parameters”) inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. PMID:28336305
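A discrete-time Hawkes process of the kind estimated in the PP-GLM framework can be sketched in a few lines. This is a hedged, univariate illustration (the review treats the multivariate case): the log-intensity is a baseline plus an exponentially decaying sum over past spikes, and each time bin spikes with probability 1 − exp(−λ). `simulate_hawkes` and its parameters are illustrative names, not from the review.

```python
import math
import random

def simulate_hawkes(T, base, weight, decay, seed=0):
    """Discrete-time univariate self-exciting (Hawkes) process, PP-GLM style:
    log-intensity = base + weight * trace, where trace is an exponentially
    decaying memory of past spikes; one spike at most per bin."""
    rng = random.Random(seed)
    spikes, trace = [], 0.0
    for t in range(T):
        lam = math.exp(base + weight * trace)  # conditional intensity
        p = 1.0 - math.exp(-lam)               # spike probability for one bin
        s = 1 if rng.random() < p else 0
        spikes.append(s)
        trace = trace * math.exp(-decay) + s   # update self-excitation trace
    return spikes
```

Fitting such a model reduces to a Poisson/Bernoulli GLM in the spike history covariates, which is why the PP-GLM framework makes estimation of the (multivariate) version tractable.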
NASA Astrophysics Data System (ADS)
Klaas, Dua K. S. Y.; Imteaz, Monzur Alam
2017-09-01
A robust configuration of pilot points in the parameterisation step of a model is crucial to accurately obtain a satisfactory model performance. However, the recommendations provided by the majority of recent researchers on pilot-point use are considered somewhat impractical. In this study, a practical approach is proposed for using pilot-point properties (i.e. number, distance and distribution method) in the calibration step of a groundwater model. For the first time, the relative distance-area ratio (d/A) and head-zonation-based (HZB) method are introduced, to assign pilot points into the model domain by incorporating a user-friendly zone ratio. This study provides some insights into the trade-off between maximising and restricting the number of pilot points, and offers a relative basis for selecting the pilot-point properties and distribution method in the development of a physically based groundwater model. The grid-based (GB) method is found to perform comparably better than the HZB method in terms of model performance and computational time. When using the GB method, this study recommends a distance-area ratio of 0.05, a distance-x-grid length ratio (d/Xgrid) of 0.10, and a distance-y-grid length ratio (d/Ygrid) of 0.20.
Estimating the number of people in crowded scenes
NASA Astrophysics Data System (ADS)
Kim, Minjin; Kim, Wonjun; Kim, Changick
2011-01-01
This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps as follows: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on the multiple regression. In experimental results, the efficiency and robustness of our proposed method are demonstrated by using PETS 2009 dataset.
2011-01-01
Background The Prospective Space-Time scan statistic (PST) is widely used for the evaluation of space-time clusters of point event data. Usually a window of cylindrical shape is employed, with a circular or elliptical base in the space domain. Recently, the concept of Minimum Spanning Tree (MST) was applied to specify the set of potential clusters, through the Density-Equalizing Euclidean MST (DEEMST) method, for the detection of arbitrarily shaped clusters. The original map is cartogram transformed, such that the control points are spread uniformly. That method is quite effective, but the cartogram construction is computationally expensive and complicated. Results A fast method for the detection and inference of point data set space-time disease clusters is presented, the Voronoi Based Scan (VBScan). A Voronoi diagram is built for points representing population individuals (cases and controls). The number of Voronoi cells boundaries intercepted by the line segment joining two cases points defines the Voronoi distance between those points. That distance is used to approximate the density of the heterogeneous population and build the Voronoi distance MST linking the cases. The successive removal of edges from the Voronoi distance MST generates sub-trees which are the potential space-time clusters. Finally, those clusters are evaluated through the scan statistic. Monte Carlo replications of the original data are used to evaluate the significance of the clusters. An application for dengue fever in a small Brazilian city is presented. Conclusions The ability to promptly detect space-time clusters of disease outbreaks, when the number of individuals is large, was shown to be feasible, due to the reduced computational load of VBScan. Instead of changing the map, VBScan modifies the metric used to define the distance between cases, without requiring the cartogram construction. 
Numerical simulations showed that VBScan has higher power of detection, sensitivity and positive predictive value than the Elliptic PST. Furthermore, as VBScan also incorporates topological information from the point neighborhood structure, in addition to the usual geometric information, it is more robust than purely geometric methods such as the elliptic scan. Those advantages were illustrated in a real setting for dengue fever space-time clusters. PMID:21513556
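The Voronoi distance at the heart of VBScan counts the Voronoi cell boundaries crossed by the segment joining two case points. A hedged sketch of that metric, approximating boundary crossings by counting nearest-control changes along a dense sampling of the segment (`voronoi_distance` and its arguments are illustrative names, and the sampling approximation is mine, not the paper's exact construction):

```python
def voronoi_distance(p, q, controls, samples=200):
    """Approximate VBScan's Voronoi distance between case points p and q:
    the number of Voronoi cell boundaries crossed by segment pq, estimated
    by tracking the nearest control point along the segment."""
    def nearest(x, y):
        return min(range(len(controls)),
                   key=lambda k: (controls[k][0] - x) ** 2 + (controls[k][1] - y) ** 2)
    crossings = 0
    prev = nearest(*p)
    for i in range(1, samples + 1):
        t = i / samples
        cur = nearest(p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
        if cur != prev:        # entered a different Voronoi cell
            crossings += 1
            prev = cur
    return crossings
```

Because control points are dense where the population is dense, this count grows with the population traversed, which is how VBScan approximates population density without the cartogram transformation.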
Determining postural stability
NASA Technical Reports Server (NTRS)
Forth, Katharine E. (Inventor); Paloski, William H. (Inventor); Lieberman, Erez (Inventor)
2011-01-01
A method for determining postural stability of a person can include acquiring a plurality of pressure data points over a period of time from at least one pressure sensor. The method can also include the step of identifying a postural state for each pressure data point to generate a plurality of postural states. The method can include the step of determining a postural state of the person at a point in time based on at least the plurality of postural states.
Smoke-Point Properties of Non-Buoyant Round Laminar Jet Diffusion Flames. Appendix J
NASA Technical Reports Server (NTRS)
Urban, D. L.; Yuan, Z.-G.; Sunderland, P. B.; Lin, K.-C.; Dai, Z.; Faeth, G. M.
2000-01-01
The laminar smoke-point properties of non-buoyant round laminar jet diffusion flames were studied emphasizing results from long-duration (100-230 s) experiments at microgravity carried out in orbit aboard the space shuttle Columbia. Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K, pressures of 35-130 kPa, jet exit diameters of 1.6 and 2.7 mm, jet exit velocities of 170-690 mm/s, jet exit Reynolds numbers of 46-172, characteristic flame residence times of 40-302 ms, and luminous flame lengths of 15-63 mm. Contrary to the normal-gravity laminar smoke point, in microgravity, the onset of laminar smoke-point conditions involved two flame configurations: closed-tip flames with soot emissions along the flame axis and open-tip flames with soot emissions from an annular ring about the flame axis. Open-tip flames were observed at large characteristic flame residence times with the onset of soot emissions associated with radiative quenching near the flame tip: nevertheless, unified correlations of laminar smoke-point properties were obtained that included both flame configurations. Flame lengths at laminar smoke-point conditions were well correlated in terms of a corrected fuel flow rate suggested by a simplified analysis of flame shape. The present steady and non-buoyant flames emitted soot more readily than non-buoyant flames in earlier tests using ground-based microgravity facilities and than buoyant flames at normal gravity, as a result of reduced effects of unsteadiness, flame disturbances, and buoyant motion. For example, present measurements of laminar smoke-point flame lengths at comparable conditions were up to 2.3 times shorter than ground-based microgravity measurements and up to 6.4 times shorter than buoyant flame measurements. 
Finally, present laminar smoke-point flame lengths were roughly inversely proportional to pressure, to a degree that is somewhat smaller than observed during earlier tests both at microgravity (using ground-based facilities) and at normal gravity.
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
Current point-cloud registration software has high hardware requirements, a heavy workload and extensive interactive definition, and the source code of the better-performing packages is not open. This paper therefore proposes a two-step registration method that combines a normal-vector distribution feature with the iterative closest point (ICP) algorithm. The method uses the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and a model for the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to complete coarse registration; the coarse registration results of two stations are then accurately registered with the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the proposed method has clear speed and precision advantages for large point clouds.
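The fine-registration stage above is standard point-to-point ICP: alternate between nearest-neighbour correspondence and a closed-form (Kabsch/SVD) rigid transform. A minimal sketch of that baseline stage, assuming the coarse alignment has already been done; `icp` is an illustrative name and the FPFH coarse step is not reproduced here.

```python
import numpy as np

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP: match each source point to its nearest
    target point, solve the best rigid transform in closed form (Kabsch),
    repeat. Returns (R, t) such that src @ R.T + t aligns onto dst."""
    dim = src.shape[1]
    R, t = np.eye(dim), np.zeros(dim)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]
        # closed-form rigid transform between matched centroids
        mc, mm = cur.mean(0), match.mean(0)
        H = (cur - mc).T @ (match - mm)
        U, _, Vt = np.linalg.svd(H)
        S = np.eye(dim)
        S[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
        Ri = Vt.T @ S @ U.T
        ti = mm - Ri @ mc
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti  # accumulate the global transform
    return R, t
```

The brute-force correspondence search is O(N·M) per iteration, which is exactly the cost that a good coarse initialization (here, FPFH-based) keeps from wandering into wrong local minima on large clouds.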
High speed FPGA-based Phasemeter for the far-infrared laser interferometers on EAST
NASA Astrophysics Data System (ADS)
Yao, Y.; Liu, H.; Zou, Z.; Li, W.; Lian, H.; Jie, Y.
2017-12-01
The far-infrared laser-based HCN interferometer and POlarimeter/INTerferometer (POINT) system are important diagnostics for plasma density measurement on the EAST tokamak. Both HCN and POINT provide high spatial and temporal resolution of electron density measurement and are used for plasma density feedback control. The density is calculated by measuring the real-time phase difference between the reference beams and the probe beams. For long-pulse operations on EAST, the density calculation must meet real-time and high-precision requirements. In this paper, a phasemeter for far-infrared laser-based interferometers is introduced. The FPGA-based phasemeter leverages fast ADCs to acquire the three-frequency signals from VDI planar-diode mixers, and implements digital filters and an FFT algorithm in the FPGA to provide real-time, high-precision electron density output. Implementation of the phasemeter will be helpful for future plasma real-time feedback control in long-pulse discharges.
Progress on the CWU READI Analysis Center
NASA Astrophysics Data System (ADS)
Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C.
2015-12-01
Real-time GPS position streams are desirable for a variety of seismic monitoring and hazard mitigation applications. We report on progress in our development of a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone. This system is based on 1 Hz point position estimates computed in the ITRF08 reference frame. Convergence from phase and range observables to point position estimates is accelerated using a Kalman filter based, on-line stream editor that produces independent estimations of carrier phase integer biases and other parameters. Positions are then estimated using a short-arc approach and algorithms from JPL's GIPSY-OASIS software with satellite clock and orbit products from the International GNSS Service (IGS). The resulting positions show typical RMS scatter of 2.5 cm in the horizontal and 5 cm in the vertical with latencies below 2 seconds. To facilitate the use of these point position streams for applications such as seismic monitoring, we broadcast real-time positions and covariances using custom-built aggregation-distribution software based on RabbitMQ messaging platform. This software is capable of buffering 24-hour streams for hundreds of stations and providing them through a REST-ful web interface. To demonstrate the power of this approach, we have developed a Java-based front-end that provides a real-time visual display of time-series, displacement vector fields, and map-view, contoured, peak ground displacement. This Java-based front-end is available for download through the PANGA website. We are currently analyzing 80 PBO and PANGA stations along the Cascadia margin and gearing up to process all 400+ real-time stations that are operating in the Pacific Northwest, many of which are currently telemetered in real-time to CWU. These will serve as milestones towards our over-arching goal of extending our processing to include all of the available real-time streams from the Pacific rim. 
In addition, we have developed a Kalman filter to combine CWU real-time PPP solutions with those from Scripps Institution of Oceanography's PPP-AR real-time solutions as well as real-time solutions from the USGS. These combined products should improve the robustness and reliability of real-time point-position streams in the near future.
Estimation of Initial and Response Times of Laser Dew-Point Hygrometer by Measurement Simulation
NASA Astrophysics Data System (ADS)
Matsumoto, Sigeaki; Toyooka, Satoru
1995-10-01
The initial and the response times of the laser dew-point hygrometer were evaluated by measurement simulation. The simulation was based on loop computations of the surface temperature of a plate with dew deposition, the quantity of dew deposited and the intensity of scattered light from the surface at each short interval of measurement. The initial time was defined as the time necessary for the hygrometer to reach a temperature within ±0.5 °C of the measured dew point from the start time of measurement, and the response time was also defined for stepwise dew-point changes of +5 °C and −5 °C. The simulation results are in approximate agreement with the recorded temperature and intensity of scattered light of the hygrometer. The evaluated initial time ranged from 0.3 min to 5 min in the temperature range from 0 °C to 60 °C, and the response time was evaluated to be from 0.2 min to 3 min.
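For reference, the dew point such a hygrometer tracks can be computed from air temperature and relative humidity with the Magnus approximation. This is a textbook formula, not the paper's simulation model; the constants are the commonly used Magnus values.

```python
import math

def dew_point(T_c, rh, a=17.62, b=243.12):
    """Dew point (degrees C) from air temperature T_c (degrees C) and relative
    humidity rh (0-1), using the Magnus approximation with common constants."""
    gamma = math.log(rh) + a * T_c / (b + T_c)
    return b * gamma / (a - gamma)
```

At saturation (rh = 1) the dew point equals the air temperature, which is a handy sanity check; at 20 °C and 50% humidity the formula gives roughly 9.3 °C.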
Rajtmajer, Sarah M; Roy, Arnab; Albert, Reka; Molenaar, Peter C M; Hillary, Frank G
2015-01-01
Despite exciting advances in the functional imaging of the brain, it remains a challenge to define regions of interest (ROIs) that do not require investigator supervision and permit examination of change in networks over time (or plasticity). Plasticity is most readily examined by maintaining ROIs constant via seed-based and anatomical-atlas based techniques, but these approaches are not data-driven, requiring definition based on prior experience (e.g., choice of seed-region, anatomical landmarks). These approaches are limiting especially when functional connectivity may evolve over time in areas that are finer than known anatomical landmarks or in areas outside predetermined seeded regions. An ideal method would permit investigators to study network plasticity due to learning, maturation effects, or clinical recovery via multiple time point data that can be compared to one another in the same ROI while also preserving the voxel-level data in those ROIs at each time point. Data-driven approaches (e.g., whole-brain voxelwise approaches) ameliorate concerns regarding investigator bias, but the fundamental problem of comparing the results between distinct data sets remains. In this paper we propose an approach, aggregate-initialized label propagation (AILP), which allows for data at separate time points to be compared for examining developmental processes resulting in network change (plasticity). To do so, we use a whole-brain modularity approach to parcellate the brain into anatomically constrained functional modules at separate time points and then apply the AILP algorithm to form a consensus set of ROIs for examining change over time. To demonstrate its utility, we make use of a known dataset of individuals with traumatic brain injury sampled at two time points during the first year of recovery and show how the AILP procedure can be applied to select regions of interest to be used in a graph theoretical analysis of plasticity.
An approach of point cloud denoising based on improved bilateral filtering
NASA Astrophysics Data System (ADS)
Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin
2018-04-01
An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm that is employed to handle the depth image. First, the mobile platform can move flexibly and its control interface is convenient. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process depth images obtained by the Kinect sensor. The results show that the noise-removal effect is improved compared with bilateral filtering. In the off-line condition, the color images and processed images are used to build point clouds. Finally, experimental results demonstrate that our method reduces the depth-image processing time and improves the quality of the resulting point cloud.
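The baseline the LBF variant improves on is the classic bilateral filter: each depth pixel is replaced by a weighted mean of its neighbours, with weights combining spatial closeness and depth similarity, so noise is smoothed while depth edges survive. A minimal sketch of that baseline (not the paper's LBF; `bilateral_depth` and its parameters are illustrative):

```python
import numpy as np

def bilateral_depth(depth, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Classic bilateral filter on a depth image. sigma_s controls the spatial
    kernel, sigma_r the range (depth-similarity) kernel."""
    H, W = depth.shape
    out = np.zeros_like(depth, dtype=float)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            win = depth[i0:i1, j0:j1].astype(float)
            yy, xx = np.mgrid[i0:i1, j0:j1]
            ws = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            wr = np.exp(-(win - depth[i, j]) ** 2 / (2 * sigma_r ** 2))
            w = ws * wr
            out[i, j] = (w * win).sum() / w.sum()
    return out
```

The nested per-pixel loop is what makes the naive filter slow on full-resolution Kinect frames; restricting or reorganizing the neighbourhood computation, as an LBF-style variant does, is the natural way to cut that cost.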
Zaylaa, Amira; Charara, Jamal; Girault, Jean-Marc
2015-08-01
The analysis of biomedical signals demonstrating complexity through recurrence plots is challenging. Quantification of recurrences is often biased by sojourn points that hide dynamic transitions. To overcome this problem, time series have previously been embedded at high dimensions. However, the elimination of sojourn points and the rate of detection have not been quantified, nor has the enhancement of transition detection been investigated. This paper reports our on-going efforts to improve the detection of dynamic transitions from logistic maps and fetal hearts by reducing sojourn points. Three signal-based recurrence plots were developed, i.e. embedded with specific settings, derivative-based and m-time pattern. Determinism, cross-determinism and the percentage of reduced sojourn points were computed to detect transitions. For logistic maps, an increase of 50% and 34.3% in sensitivity of detection over alternatives was achieved by m-time pattern and embedded recurrence plots with specific settings, respectively, with 100% specificity. For fetal heart rates, embedded recurrence plots with specific settings provided the best performance, followed by the derivative-based recurrence plot, then the unembedded recurrence plot using the determinism parameter. The relative errors between healthy and distressed fetuses were 153%, 95% and 91%. More than 50% of sojourn points were eliminated, allowing better detection of heart transitions triggered by gaseous exchange factors. This could be significant in improving the diagnosis of fetal state. Copyright © 2014 Elsevier Ltd. All rights reserved.
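The determinism (DET) measure used above is a standard recurrence quantification: embed the series, threshold pairwise distances to get the recurrence matrix, then take the fraction of recurrence points lying on diagonal lines of at least a minimum length. A hedged sketch of that standard definition (not the paper's specific embedded/derivative/m-time-pattern variants; `determinism` and its defaults are illustrative):

```python
import numpy as np

def determinism(x, dim=2, tau=1, eps=0.2, lmin=2):
    """DET from a recurrence plot: fraction of recurrence points that lie on
    diagonal lines of length >= lmin in the thresholded distance matrix."""
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * tau
    emb = np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)
    d = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    R = (d <= eps).astype(int)
    total = R.sum()
    in_lines = 0
    for k in range(-(n - 1), n):          # scan every diagonal
        diag = np.diagonal(R, offset=k)
        run = 0
        for v in list(diag) + [0]:        # sentinel flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    in_lines += run
                run = 0
    return in_lines / total if total else 0.0
```

Sojourn points inflate isolated recurrences and short runs, which biases DET downward; reducing them, as the paper's recurrence-plot variants do, sharpens the contrast between dynamical regimes.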
Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.
2016-02-02
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
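The center-selection step can be sketched as a bi-objective ranking of the evaluated points: prefer low expensive-function value and high minimum distance to the other evaluated points (for exploration). This hedged sketch approximates front ranking by a simple domination count rather than full non-dominated sorting; `select_centers` and its arguments are illustrative names, not the paper's exact procedure.

```python
import math

def select_centers(points, fvals, P):
    """Pick P centers from evaluated points by two objectives (minimized):
    (1) the expensive function value, (2) the negative minimum distance to
    the other evaluated points (so well-separated points rank higher)."""
    n = len(points)
    def mindist(i):
        return min(math.dist(points[i], points[j]) for j in range(n) if j != i)
    obj = [(fvals[i], -mindist(i)) for i in range(n)]
    # count how many points dominate each point; fewer ~ earlier Pareto front
    dominated_by = [sum(1 for b in obj
                        if b[0] <= a[0] and b[1] <= a[1]
                        and (b[0] < a[0] or b[1] < a[1]))
                    for a in obj]
    order = sorted(range(n), key=lambda i: (dominated_by[i], fvals[i]))
    return order[:P]
```

In SOP each selected center then spawns random-perturbation candidates scored by the surrogate, with the P expensive evaluations run in parallel; centers that keep failing to improve are made tabu for a tenure.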
Imaging study on acupuncture points
NASA Astrophysics Data System (ADS)
Yan, X. H.; Zhang, X. Y.; Liu, C. L.; Dang, R. S.; Ando, M.; Sugiyama, H.; Chen, H. S.; Ding, G. H.
2009-09-01
The topographic structures of acupuncture points were investigated by using the synchrotron-radiation-based Dark Field Image (DFI) method. The following four acupuncture points were studied: Sanyinjiao, Neiguan, Zusanli and Tianshu. We found that micro-vessels accumulate at acupuncture point regions. Images taken in the surrounding tissue outside the acupuncture points do not show such structures. This is the first time the specific structure of acupuncture points has been revealed directly by X-ray imaging.
Baltierra, Nina B; Muessig, Kathryn E; Pike, Emily C; LeGrand, Sara; Bull, Sheana S; Hightow-Weidman, Lisa B
2016-02-01
There has been a rise in internet-based health interventions without a concomitant focus on new methods to measure user engagement and its effect on outcomes. We describe current user tracking methods for internet-based health interventions and offer suggestions for improvement based on the design and pilot testing of healthMpowerment.org (HMP). HMP is a multi-component online intervention for young Black men and transgender women who have sex with men (YBMSM/TW) to reduce risky sexual behaviors, promote healthy living and build social support. The intervention is non-directive, incorporates interactive features, and utilizes a point-based reward system. Fifteen YBMSM/TW (age 20-30) participated in a one-month pilot study to test the usability and efficacy of HMP. Engagement with the intervention was tracked using a customized data capture system and validated with Google Analytics. Usage was measured in time spent (total and across sections) and points earned. Average total time spent on HMP was five hours per person (range 0-13). Total time spent was correlated with total points earned and overall site satisfaction. Measuring engagement in internet-based interventions is crucial to determining efficacy. Multiple methods of tracking helped derive more comprehensive user profiles. Results highlighted the limitations of measures to capture user activity and the elusiveness of the concept of engagement. Copyright © 2016 Elsevier Inc. All rights reserved.
Thermophysical Properties of Selected Rocks.
1974-04-01
the region below the melting point. Selected values are for Dresser basalt based on the data of Navarro and DeWitt [86] and of Marovelli and Veith [51]. ... ΔT = T2 − T1, q is the rate of heat flow, A is the cross-sectional area of the specimen, and Δx is the distance between the points of temperature measurement. ... The heater provides a constant heat, q, per unit time and length, and the temperature at a point in the specimen is recorded as a function of time.
van Maanen, Leendert; de Jong, Ritske; van Rijn, Hedderik
2014-01-01
When multiple strategies can be used to solve a type of problem, the observed response time distributions are often mixtures of multiple underlying base distributions, each representing one of these strategies. For the case of two possible strategies, the observed response time distributions obey the fixed-point property. That is, there exists one reaction time that has the same probability of being observed irrespective of the actual mixture proportion of each strategy. In this paper we discuss how to compute this fixed point, and how to statistically assess the probability that the observed response times are indeed generated by two competing strategies. Accompanying this paper is a free R package that can be used to compute and test the presence or absence of the fixed-point property in response time data, allowing for easy-to-use tests of strategic behavior. PMID:25170893
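The fixed-point property can be illustrated with two known strategy distributions. The actual R package estimates densities from data; the normal densities and bisection search here are a simplified stand-in:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Two hypothetical strategy RT distributions (illustrative parameters).
f1 = lambda t: normal_pdf(t, 0.5, 0.1)   # fast strategy
f2 = lambda t: normal_pdf(t, 0.9, 0.1)   # slow strategy

# The fixed point is where f1(t) = f2(t): any mixture p*f1 + (1-p)*f2
# takes the same density value there regardless of the proportion p.
# Bisection on f1(t) - f2(t) between the two means:
lo, hi = 0.5, 0.9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f1(mid) > f2(mid):
        lo = mid
    else:
        hi = mid
t_star = 0.5 * (lo + hi)

mix = lambda p, t: p * f1(t) + (1 - p) * f2(t)
print(round(t_star, 3))                                 # -> 0.7
print(abs(mix(0.2, t_star) - mix(0.8, t_star)) < 1e-9)  # -> True
```

With equal spreads the crossing sits midway between the two means; the mixture density at that reaction time is identical for any mixing proportion, which is what a fixed-point test looks for in data.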
Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.
Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun
2014-01-01
A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
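The voxel-based flag map used to drop redundant points can be sketched with a plain dictionary standing in for the 3D grid; the voxel size and points are illustrative:

```python
def voxel_downsample(points, voxel_size):
    """Keep one representative point per occupied voxel.  The dict plays the
    role of the flag map: a key already present marks its voxel as occupied,
    so later points quantizing to the same key are redundant and dropped."""
    flag_map = {}
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)
        if key not in flag_map:
            flag_map[key] = p          # first point claims the voxel
    return list(flag_map.values())

pts = [(0.01, 0.02, 0.0), (0.02, 0.01, 0.0),  # same 0.1 m voxel -> one survives
       (0.35, 0.0, 0.0)]                       # different voxel
print(len(voxel_downsample(pts, 0.1)))         # -> 2
```

The same comparison-table idea extends to incremental registration: each newly scanned frame is quantized against the persistent flag map, so only points landing in unoccupied voxels are inserted into the terrain model.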
Real-time implementation of camera positioning algorithm based on FPGA & SOPC
NASA Astrophysics Data System (ADS)
Yang, Mingcao; Qiu, Yuehong
2014-09-01
In recent years, with the development of positioning algorithms and FPGA technology, real-time, rapid, and accurate camera positioning has become achievable. Through in-depth study of embedded hardware and a dual-camera positioning system, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor is used to extract the pixels of three target objects, implemented through an FPGA hardware driver; visible-light LEDs are used as the target points of the instrument. (2) Prior to extraction of the feature-point coordinates, the image is filtered, here with a median filter, to suppress noise introduced by the physical properties of the platform. (3) Marker-point coordinates are extracted by the FPGA hardware circuit: a new iterative threshold-selection method segments the image, the binary image is then labeled, and the coordinates of the feature points are calculated by the center-of-gravity method. (4) The direct linear transformation (DLT) with epipolar constraints is applied to three-dimensional reconstruction of space coordinates from the planar-array CMOS system. An SOPC system-on-chip is used, taking advantage of its dual-core computing so that matching and coordinate operations run separately, thereby increasing processing speed.
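The center-of-gravity computation of step (3) for a labeled binary image can be sketched in software as a stand-in for the FPGA circuit; the image and label are illustrative:

```python
def centroid_of_region(binary, label):
    """Center-of-gravity of all pixels carrying `label` in a labeled
    binary image (given as a list of rows)."""
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v == label:
                xs += x
                ys += y
                n += 1
    return (xs / n, ys / n) if n else None

img = [[0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(centroid_of_region(img, 1))  # -> (1.5, 0.5)
```

The sub-pixel result is the point the FPGA would forward to the DLT stage; in hardware the sums xs, ys, and n accumulate in registers as pixels stream in, so no frame buffer is required.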
A path-integral approach to the problem of time
NASA Astrophysics Data System (ADS)
Amaral, M. M.; Bojowald, Martin
2018-01-01
Quantum transition amplitudes are formulated for model systems with local internal time, using intuition from path integrals. The amplitudes are shown to be more regular near a turning point of internal time than could be expected based on existing canonical treatments. In particular, a successful transition through a turning point is provided in the model systems, together with a new definition of such a transition in general terms. Some of the results rely on a fruitful relation between the problem of time and general Gribov problems.
Actuator-Assisted Calibration of Freehand 3D Ultrasound System.
Koo, Terry K; Silvia, Nathaniel
2018-01-01
Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need of imaging the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and compared the results with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time with respect to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified "collinear point target" phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both actuator-assisted cross wire phantom and modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration.
Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation
NASA Astrophysics Data System (ADS)
An, Lu; Guo, Baolong
2018-03-01
In recent years, illegal constructions have appeared frequently in our surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data can be used to identify illegal buildings and thus address this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. Initially, in order to save memory space and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method, and as a result the illegal constructions can be marked. The effectiveness of the proposed algorithm is verified using a public data set collected by the International Society for Photogrammetry and Remote Sensing (ISPRS).
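The region-growing step that clusters the remaining non-ground points can be sketched as a radius-based BFS; the radius and the toy "building" points are illustrative, not from the paper:

```python
import math

def region_growing(points, radius):
    """Group points into clusters: a point joins a cluster when it lies
    within `radius` of any point already in it (BFS-style growth)."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= radius]
            for j in near:
                unvisited.discard(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(sorted(cluster))
    return clusters

# Two "buildings" (non-ground clusters) far apart.
pts = [(0, 0), (0.5, 0), (1.0, 0.2),      # building 1
       (10, 10), (10.4, 10.1)]            # building 2
print(len(region_growing(pts, 1.0)))      # -> 2
```

Each resulting cluster would then be checked against the registered building footprints to flag illegal constructions.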
Progress in using real-time GPS for seismic monitoring of the Cascadia megathrust
NASA Astrophysics Data System (ADS)
Szeliga, W. M.; Melbourne, T. I.; Santillan, V. M.; Scrivner, C.; Webb, F.
2014-12-01
We report on progress in our development of a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone. This system is based on 1 Hz point position estimates computed in the ITRF08 reference frame. Convergence from phase and range observables to point position estimates is accelerated using a Kalman-filter-based online stream editor. Positions are estimated using a short-arc approach and algorithms from JPL's GIPSY-OASIS software with satellite clock and orbit products from the International GNSS Service (IGS). The resulting positions show typical RMS scatter of 2.5 cm in the horizontal and 5 cm in the vertical, with latencies below 2 seconds. To facilitate the use of these point position streams for applications such as seismic monitoring, we broadcast real-time positions and covariances using custom-built streaming software. This software is capable of buffering 24-hour streams for hundreds of stations and providing them through a RESTful web interface. To demonstrate the power of this approach, we have developed a Java-based front-end that provides a real-time visual display of time series, vector displacement, and contoured peak ground displacement. We have also implemented continuous estimation of finite fault slip along the Cascadia megathrust using an NIF approach. The resulting continuous slip distributions are combined with pre-computed tsunami Green's functions to generate real-time tsunami run-up estimates for the entire Cascadia coastal margin. This Java-based front-end is available for download through the PANGA website. We currently analyze 80 PBO and PANGA stations along the Cascadia margin and are gearing up to process all 400+ real-time stations operating in the Pacific Northwest, many of which are currently telemetered in real time to CWU. These will serve as milestones towards our over-arching goal of extending our processing to include all of the available real-time streams from the Pacific rim.
In addition, we are developing methodologies to combine our real-time solutions with those from Scripps Institute of Oceanography's PPP-AR real-time solutions as well as real-time solutions from the USGS. These combined products should improve the robustness and reliability of real-time point-position streams in the near future.
Precise Point Positioning technique for short and long baselines time transfer
NASA Astrophysics Data System (ADS)
Lejba, Pawel; Nawrocki, Jerzy; Lemanski, Dariusz; Foks-Ryznar, Anna; Nogas, Pawel; Dunst, Piotr
2013-04-01
In this work, the determination of clock parameters for several timing receivers, TTS-4 (AOS), ASHTECH Z-XII3T (OP, ORB, PTB, USNO) and SEPTENTRIO POLARX4TR (ORB, since February 11, 2012), by means of the Precise Point Positioning (PPP) technique is presented. The clock parameters were determined for several time links based on the data delivered by the time and frequency laboratories mentioned above. The computations cover the period from January 1 to December 31, 2012 and were performed in two modes, with 7-day and one-month solutions, for all links. All RINEX data files, which include phase and code GPS data, were recorded at 30-second intervals. All calculations were performed by means of Natural Resources Canada's GPS Precise Point Positioning (GPS-PPP) software, based on the high-quality precise satellite coordinates and satellite clocks delivered by the IGS as final products. The independent PPP technique is a very powerful and simple method which allows for better control of antenna positions at AOS and for verification of other time transfer techniques such as GPS CV, GLONASS CV and TWSTFT. The PPP technique is also a very good alternative for calibration of the PL-AOS glass fiber link currently operated by AOS. At present, PPP is one of the main time transfer methods used at AOS, which considerably improves and strengthens the quality of the Polish time scales UTC(AOS), UTC(PL), and TA(PL). KEYWORDS: Precise Point Positioning, time transfer, IGS products, GNSS, time scales.
Detecting multiple moving objects in crowded environments with coherent motion regions
Cheriyadat, Anil M.; Radke, Richard J.
2013-06-11
Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The algorithm enforces the constraint that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. It operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, where each factor is a measure of the maximum distance between a pair of feature point tracks.
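A minimal sketch of a trajectory similarity factor as the maximum pointwise distance between two tracks; the tracks and this exact definition are illustrative, not the patented formulation:

```python
import math

def trajectory_similarity(track_a, track_b):
    """Maximum pointwise distance between two feature-point tracks sampled
    at the same frames; small values indicate coherent motion."""
    return max(math.dist(p, q) for p, q in zip(track_a, track_b))

a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 1), (1, 1), (2, 1)]   # moves in parallel with a
c = [(0, 0), (0, 1), (0, 2)]   # diverges from a
print(trajectory_similarity(a, b))                                # -> 1.0
print(trajectory_similarity(a, c) > trajectory_similarity(a, b))  # -> True
```

Tracks whose factor stays below a threshold over a long window would be grouped into one coherent motion region, so parallel tracks a and b land in the same object while c does not.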
Meng, Yu; Li, Gang; Gao, Yaozong; Lin, Weili; Shen, Dinggang
2016-11-01
Longitudinal neuroimaging analysis of the dynamic brain development in infants has received increasing attention recently. Many studies expect a complete longitudinal dataset in order to accurately chart the brain developmental trajectories. However, in practice, a large portion of subjects in longitudinal studies often have missing data at certain time points, due to various reasons such as the absence of scan or poor image quality. To make better use of these incomplete longitudinal data, in this paper, we propose a novel machine learning-based method to estimate the subject-specific, vertex-wise cortical morphological attributes at the missing time points in longitudinal infant studies. Specifically, we develop a customized regression forest, named dynamically assembled regression forest (DARF), as the core regression tool. DARF ensures the spatial smoothness of the estimated maps for vertex-wise cortical morphological attributes and also greatly reduces the computational cost. By employing a pairwise estimation followed by a joint refinement, our method is able to fully exploit the available information from both subjects with complete scans and subjects with missing scans for estimation of the missing cortical attribute maps. The proposed method has been applied to estimating the dynamic cortical thickness maps at missing time points in an incomplete longitudinal infant dataset, which includes 31 healthy infant subjects, each having up to five time points in the first postnatal year. The experimental results indicate that our proposed framework can accurately estimate the subject-specific vertex-wise cortical thickness maps at missing time points, with the average error less than 0.23 mm. Hum Brain Mapp 37:4129-4147, 2016. © 2016 Wiley Periodicals, Inc.
DORIS-based point mascons for the long term stability of precise orbit solutions
NASA Astrophysics Data System (ADS)
Cerri, L.; Lemoine, J. M.; Mercier, F.; Zelensky, N. P.; Lemoine, F. G.
2013-08-01
In recent years non-tidal Time Varying Gravity (TVG) has emerged as the most important contributor in the error budget of Precision Orbit Determination (POD) solutions for altimeter satellites' orbits. The Gravity Recovery And Climate Experiment (GRACE) mission has provided POD analysts with static and time-varying gravity models that are very accurate over the 2002-2012 time interval, but whose linear rates cannot be safely extrapolated before and after the GRACE lifespan. One such model, based on a combination of data from GRACE and Lageos from 2002-2010, is used in the dynamic POD solutions developed for the Geophysical Data Records (GDRs) of the Jason series of altimeter missions and the equivalent products from lower-altitude missions such as Envisat, Cryosat-2, and HY-2A. In order to accommodate long-term time-variable gravity variations not included in the background geopotential model, we assess the feasibility of using DORIS data to observe local mass variations using point mascons. In particular, we show that the point-mascon approach can stabilize the geographically correlated orbit errors which are of fundamental interest for the analysis of regional Mean Sea Level trends based on altimeter data, and can therefore provide an interim solution in the event of GRACE data loss. The time series of point-mass solutions for Greenland and Antarctica show good agreement with independent series derived from GRACE data, indicating mass losses at rates of 210 Gt/year and 110 Gt/year, respectively.
Registration of 4D time-series of cardiac images with multichannel Diffeomorphic Demons.
Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Pennec, Xavier; Xu, Chenyang; Ayache, Nicholas
2008-01-01
In this paper, we propose a generic framework for intersubject non-linear registration of 4D time-series images. In this framework, spatio-temporal registration is defined by mapping trajectories of physical points as opposed to spatial registration that solely aims at mapping homologous points. First, we determine the trajectories we want to register in each sequence using a motion tracking algorithm based on the Diffeomorphic Demons algorithm. Then, we perform simultaneously pairwise registrations of corresponding time-points with the constraint to map the same physical points over time. We show this trajectory registration can be formulated as a multichannel registration of 3D images. We solve it using the Diffeomorphic Demons algorithm extended to vector-valued 3D images. This framework is applied to the inter-subject non-linear registration of 4D cardiac CT sequences.
Comparison of tablet-based strategies for incision planning in laser microsurgery
NASA Astrophysics Data System (ADS)
Schoob, Andreas; Lekon, Stefan; Kundrat, Dennis; Kahrs, Lüder A.; Mattos, Leonardo S.; Ortmaier, Tobias
2015-03-01
Recent research has revealed that incision planning in laser surgery deploying stylus and tablet outperforms state-of-the-art micro-manipulator-based laser control. Providing more detailed quantitation regarding that approach, a comparative study of six tablet-based strategies for laser path planning is presented. Reference strategy is defined by monoscopic visualization and continuous path drawing on a graphics tablet. Further concepts deploying stereoscopic or a synthesized laser view, point-based path definition, real-time teleoperation or a pen display are compared with the reference scenario. Volunteers were asked to redraw and ablate stamped lines on a sample. Performance is assessed by measuring planning accuracy, completion time and ease of use. Results demonstrate that significant differences exist between proposed concepts. The reference strategy provides more accurate incision planning than the stereo or laser view scenario. Real-time teleoperation performs best with respect to completion time without indicating any significant deviation in accuracy and usability. Point-based planning as well as the pen display provide most accurate planning and increased ease of use compared to the reference strategy. As a result, combining the pen display approach with point-based planning has potential to become a powerful strategy because of benefiting from improved hand-eye-coordination on the one hand and from a simple but accurate technique for path definition on the other hand. These findings as well as the overall usability scale indicating high acceptance and consistence of proposed strategies motivate further advanced tablet-based planning in laser microsurgery.
1984-06-01
bifurcate-base points, Kirk points, Plano points). During this time period temperate species were expanding their ranges northward and eastward. Early...textured. A second type of ceramics is smooth-bodied or incised, fiber- or steatite-tempered, manufactured by modeling, has flat bottoms and shows
Dynamic analysis of suspension cable based on vector form intrinsic finite element method
NASA Astrophysics Data System (ADS)
Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun
2017-10-01
A vector finite element method is presented for the dynamic analysis of cable structures based on the vector form intrinsic finite element (VFIFE) and the mechanical properties of suspension cables. First, the suspension cable is discretized into elements by space points, and the mass and external forces of the suspension cable are lumped at these points. The structural form of the cable is described by the positions of the space points at different times. The equations of motion for the space points are established according to Newton's second law. Then, the internal element forces between the space points are derived from the flexible truss structure. Finally, the motion equations of the space points are solved by the central difference method with a reasonable time integration step. The tangential tension of the bearing rope in a test ropeway under moving concentrated loads is calculated and compared with experimental data. The results show that the tangential tension of the suspension cable with moving loads is consistent with the experimental data. The method has high precision and meets the requirements of engineering applications.
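The central-difference solution for lumped space points can be sketched on a toy discretized cable. This is a simplification of the paper's VFIFE formulation, with light numerical damping added so the cable settles to its static sag, and all parameters are illustrative:

```python
import math

# Toy cable: point masses joined by linear-elastic truss elements,
# advanced with a lightly damped central difference scheme.
n, m, g, k, L0, dt = 5, 1.0, 9.81, 1e4, 1.0, 1e-3
x = [[float(i), 0.0] for i in range(n)]    # space points, initially in a line
x_prev = [p[:] for p in x]

def forces(x):
    """Gravity on every point plus element tension between neighbors."""
    f = [[0.0, -m * g] for _ in x]
    for i in range(len(x) - 1):
        dx = [x[i+1][0] - x[i][0], x[i+1][1] - x[i][1]]
        d = math.hypot(dx[0], dx[1])
        t = k * (d - L0) / d               # elastic force per unit length
        for a in range(2):
            f[i][a] += t * dx[a]
            f[i+1][a] -= t * dx[a]
    f[0] = [0.0, 0.0]
    f[-1] = [0.0, 0.0]                     # both end points are anchored
    return f

for _ in range(2000):
    fo = forces(x)
    x_new = [[x[i][a] + 0.99 * (x[i][a] - x_prev[i][a]) + dt * dt * fo[i][a] / m
              for a in range(2)] for i in range(n)]
    x_prev, x = x, x_new

print(x[2][1] < 0)  # the midpoint sags below the anchored ends -> True
```

The explicit scheme needs no global stiffness matrix, which is the appeal of the vector-form approach: each space point is updated independently from the forces its neighboring elements exert on it.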
NASA Astrophysics Data System (ADS)
Qiu, Mo; Yu, Simin; Wen, Yuqiong; Lü, Jinhu; He, Jianbin; Lin, Zhuosheng
In this paper, a novel design methodology and its FPGA hardware implementation for a universal chaotic signal generator are proposed via a Verilog HDL fixed-point algorithm and state machine control. According to the continuous-time or discrete-time chaotic equations, a Verilog HDL fixed-point algorithm and its corresponding digital system are first designed. On the FPGA hardware platform, each operation step of the Verilog HDL fixed-point algorithm is then controlled by a state machine. The generality of this method is that any given chaotic equation can be decomposed into four basic operation procedures, i.e. nonlinear function calculation, iterative sequence operation, right shifting and ceiling of iterative values, and output of the chaotic iterative sequences, each of which corresponds to a single state under state machine control. Compared with a Verilog HDL floating-point algorithm, the fixed-point algorithm saves FPGA hardware resources and improves operational efficiency. FPGA-based hardware experimental results validate the feasibility and reliability of the proposed approach.
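The multiply-then-right-shift pattern of a fixed-point algorithm can be sketched in software for a logistic map. The Q8.24 format is an assumed choice for illustration; a Verilog implementation would perform the same integer operations in hardware:

```python
FRAC_BITS = 24                       # assumed Q8.24 fixed-point format
ONE = 1 << FRAC_BITS

def to_fix(x):
    return int(round(x * ONE))

def to_float(q):
    return q / ONE

def logistic_step_fixed(q, r_q):
    """One logistic-map iteration x <- r*x*(1-x) in pure integer arithmetic;
    each multiply is followed by a right shift that restores the scale,
    mirroring the right-shifting step of the fixed-point procedure."""
    t = (q * (ONE - q)) >> FRAC_BITS     # x * (1 - x)
    return (r_q * t) >> FRAC_BITS        # r * x * (1 - x)

q, r_q, xf = to_fix(0.3), to_fix(3.99), 0.3
for _ in range(5):
    q = logistic_step_fixed(q, r_q)
    xf = 3.99 * xf * (1 - xf)

print(abs(to_float(q) - xf) < 1e-2)  # fixed point tracks floating point -> True
```

Because the map is chaotic, the truncation error of each right shift is amplified at every step, so fixed- and floating-point trajectories agree only over short horizons; this is expected for any chaotic generator and does not affect the statistical properties of the output sequence.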
A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree
NASA Astrophysics Data System (ADS)
Kang, Q.; Huang, G.; Yang, S.
2018-04-01
Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps of point cloud pre-processing focus on gross error elimination and quality control. Owing to the sheer volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that constructs a Kd-tree over the cloud, searches it with the k-nearest-neighbor algorithm, and applies an appropriate threshold to judge whether each target point is an outlier. Experimental results show that the proposed algorithm deletes gross errors in point cloud data while decreasing memory consumption and improving efficiency.
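The k-nearest-neighbor threshold rule can be sketched as follows. Brute-force neighbor search stands in for the Kd-tree query here (the tree only accelerates the search, it does not change the decision), and the mean-plus-two-sigma threshold is an illustrative choice:

```python
import math
import statistics

def knn_mean_dists(points, k):
    """Mean distance from each point to its k nearest neighbors.
    Brute force for clarity; a Kd-tree would answer each query in
    roughly O(log n) instead of O(n)."""
    out = []
    for p in points:
        d = sorted(math.dist(p, q) for q in points if q is not p)
        out.append(sum(d[:k]) / k)
    return out

def remove_gross_errors(points, k=3, n_sigma=2.0):
    """Flag a point as an outlier when its mean k-NN distance exceeds
    mean + n_sigma * stdev over the whole cloud."""
    d = knn_mean_dists(points, k)
    thr = statistics.mean(d) + n_sigma * statistics.stdev(d)
    return [p for p, di in zip(points, d) if di <= thr]

cloud = [(x * 0.1, y * 0.1, 0.0) for x in range(5) for y in range(5)]
cloud.append((5.0, 5.0, 5.0))            # gross error far from the surface
clean = remove_gross_errors(cloud)
print(len(cloud), len(clean))  # -> 26 25
```

Points on the regular surface share small neighbor distances, so the lone far-away point is the only one above the threshold and is removed.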
Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty.
Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang
2015-01-01
Natural disasters have occurred frequently in recent years, causing huge casualties and property losses, and people pay increasing attention to emergency logistics problems. This paper studies the emergency logistics problem with multiple centers, multiple commodities, and a single affected point. Considering that paths near the disaster point may be damaged, that information on the state of the paths is incomplete, and that travel times are uncertain, we establish a nonlinear programming model whose objective function is the maximization of the time-satisfaction degree. To overcome the drawbacks of incomplete information and uncertain time, this paper first evaluates the multiple roads of the transportation network based on grey theory and selects the reliable, optimal path. The original model is then simplified under the scenario that each vehicle only follows the optimal path from the emergency logistics center to the affected point, and solved with the Lingo software. Numerical experiments are presented to show the feasibility and effectiveness of the proposed method.
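A time-satisfaction degree and the resulting path choice can be sketched as follows; the linear form of the degree function and all numbers are assumptions for illustration, not the paper's fitted model:

```python
def time_satisfaction(t, t_best, t_worst):
    """Linearly decreasing satisfaction: 1 when delivery time is at or
    below t_best, 0 at or beyond t_worst."""
    if t <= t_best:
        return 1.0
    if t >= t_worst:
        return 0.0
    return (t_worst - t) / (t_worst - t_best)

# For each center, the travel time of its (grey-theory-selected) reliable
# path to the affected point; pick the center maximizing satisfaction.
paths = {"center A": 4.0, "center B": 7.0, "center C": 11.0}  # hours
best = max(paths, key=lambda c: time_satisfaction(paths[c], 3.0, 12.0))
print(best, round(time_satisfaction(paths[best], 3.0, 12.0), 2))  # -> center A 0.89
```

In the full model this degree enters the objective summed over centers and commodities; the sketch only shows how a single satisfaction value ranks candidate routes.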
A Point Rainfall Generator With Internal Storm Structure
NASA Astrophysics Data System (ADS)
Marien, J. L.; Vandewiele, G. L.
1986-04-01
A point rainfall generator is a probabilistic model for the time series of rainfall as observed in one geographical point. The main purpose of such a model is to generate long synthetic sequences of rainfall for simulation studies. The present generator is a continuous time model based on 13.5 years of 10-min point rainfalls observed in Belgium and digitized with a resolution of 0.1 mm. The present generator attempts to model all features of the rainfall time series which are important for flood studies as accurately as possible. The original aspects of the model are on the one hand the way in which storms are defined and on the other hand the theoretical model for the internal storm characteristics. The storm definition has the advantage that the important characteristics of successive storms are fully independent and very precisely modelled, even on time bases as small as 10 min. The model of the internal storm characteristics has a strong theoretical structure. This fact justifies better the extrapolation of this model to severe storms for which the data are very sparse. This can be important when using the model to simulate severe flood events.
Smoke-Point Properties of Nonbuoyant Round Laminar Jet Diffusion Flames
NASA Technical Reports Server (NTRS)
Urban, D. L.; Yuan, Z.-G.; Sunderland, R. B.; Lin, K.-C.; Dai, Z.; Faeth, G. M.
2000-01-01
The laminar smoke-point properties of nonbuoyant round laminar jet diffusion flames were studied emphasizing results from long duration (100-230 s) experiments at microgravity carried out on orbit in the Space Shuttle Columbia. Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K, initial jet exit diameters of 1.6 and 2.7 mm, jet exit velocities of 170-1630 mm/s, jet exit Reynolds numbers of 46-172, characteristic flame residence times of 40-302 ms, and luminous flame lengths of 15-63 mm. The onset of laminar smoke-point conditions involved two flame configurations: closed-tip flames with first soot emissions along the flame axis and open-tip flames with first soot emissions from an annular ring about the flame axis. Open-tip flames were observed at large characteristic flame residence times with the onset of soot emissions associated with radiative quenching near the flame tip; nevertheless, unified correlations of laminar smoke-point properties were obtained that included both flame configurations. Flame lengths at laminar smoke-point conditions were well-correlated in terms of a corrected fuel flow rate suggested by a simplified analysis of flame shape. The present steady and nonbuoyant flames emitted soot more readily than earlier tests of nonbuoyant flames at microgravity using ground-based facilities and of buoyant flames at normal gravity due to reduced effects of unsteadiness, flame disturbances and buoyant motion. For example, laminar smoke-point flame lengths from ground-based microgravity measurements were up to 2.3 times longer and from buoyant flame measurements were up to 6.4 times longer than the present measurements at comparable conditions.
Finally, present laminar smoke-point flame lengths were roughly inversely proportional to pressure, which is a somewhat slower variation than observed during earlier tests both at microgravity using ground-based facilities and at normal gravity.
Smoke-Point Properties of Nonbuoyant Round Laminar Jet Diffusion Flames. Appendix B
NASA Technical Reports Server (NTRS)
Urban, D. L.; Yuan, Z.-G.; Sunderland, P. B.; Lin, K.-C.; Dai, Z.; Faeth, G. M.; Ross, H. D. (Technical Monitor)
2000-01-01
The laminar smoke-point properties of non-buoyant round laminar jet diffusion flames were studied emphasizing results from long-duration (100-230 s) experiments at microgravity carried out in orbit aboard the space shuttle Columbia. Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K, pressures of 35-130 kPa, jet exit diameters of 1.6 and 2.7 mm, jet exit velocities of 170-690 mm/s, jet exit Reynolds numbers of 46-172, characteristic flame residence times of 40-302 ms, and luminous flame lengths of 15-63 mm. Contrary to the normal-gravity laminar smoke point, in microgravity the onset of laminar smoke-point conditions involved two flame configurations: closed-tip flames with soot emissions along the flame axis and open-tip flames with soot emissions from an annular ring about the flame axis. Open-tip flames were observed at large characteristic flame residence times with the onset of soot emissions associated with radiative quenching near the flame tip; nevertheless, unified correlations of laminar smoke-point properties were obtained that included both flame configurations. Flame lengths at laminar smoke-point conditions were well correlated in terms of a corrected fuel flow rate suggested by a simplified analysis of flame shape. The present steady and nonbuoyant flames emitted soot more readily than non-buoyant flames in earlier tests using ground-based microgravity facilities and than buoyant flames at normal gravity, as a result of reduced effects of unsteadiness, flame disturbances, and buoyant motion. For example, present measurements of laminar smoke-point flame lengths at comparable conditions were up to 2.3 times shorter than ground-based microgravity measurements and up to 6.4 times shorter than buoyant flame measurements.
Finally, present laminar smoke-point flame lengths were roughly inversely proportional to pressure, a somewhat weaker pressure dependence than observed during earlier tests both at microgravity (using ground-based facilities) and at normal gravity.
Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang
2018-04-05
We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
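The stump-regression idea above can be sketched as a two-segment constant fit: scan every candidate break point and keep the one that minimizes the residual sum of squares of the p-value series. The function name and the synthetic p-values below are illustrative, not the authors' implementation.

```python
# Sketch: estimate a hazard change point by fitting a "stump" (two-segment
# constant regression) to per-interval p-values. Hypothetical data/names.

def fit_stump(x, y):
    """Return (x[k], sse) for the break index k minimizing the SSE of a fit
    by mean(y[:k]) before the break and mean(y[k:]) after it."""
    best_k, best_sse = None, float("inf")
    for k in range(1, len(y)):
        left, right = y[:k], y[k:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        sse = sum((v - m1) ** 2 for v in left) + sum((v - m2) ** 2 for v in right)
        if sse < best_sse:
            best_k, best_sse = k, sse
    return x[best_k], best_sse

# Synthetic p-values: small (significant excess hazard) early, large later.
times = list(range(10))
pvals = [0.01, 0.02, 0.01, 0.03, 0.02, 0.6, 0.5, 0.7, 0.55, 0.65]
cp, _ = fit_stump(times, pvals)
```

In practice the paper pairs this estimate with bootstrap confidence intervals and a finite-sample bias correction, which are omitted here.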
Variance change point detection for fractional Brownian motion based on the likelihood ratio test
NASA Astrophysics Data System (ADS)
Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz
2018-01-01
Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon for self-similar processes. It appears that for many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection for cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. We also propose a statistical test for the significance of the estimated critical point. In addition, an extensive simulation study is provided to test the performance of the proposed method.
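As a hedged sketch of the likelihood-ratio idea, restricted to independent Gaussian increments (the Brownian-motion case the paper extends; the fBm version must additionally handle dependent increments), one can scan candidate break points and maximize the log-likelihood ratio of a two-variance model against a single-variance model. The data and names below are illustrative.

```python
import math

def variance_change_point(increments):
    """Scan candidate breaks k and maximize the Gaussian log-likelihood ratio
    of a model with separate variances before/after k versus one variance."""
    n = len(increments)
    sq = [v * v for v in increments]
    total = sum(sq)
    s_all = total / n
    best_k, best_lr = None, -float("inf")
    for k in range(2, n - 1):
        s1 = sum(sq[:k]) / k               # MLE variance before the break
        s2 = (total - sum(sq[:k])) / (n - k)  # MLE variance after the break
        if s1 <= 0 or s2 <= 0:
            continue
        lr = n * math.log(s_all) - k * math.log(s1) - (n - k) * math.log(s2)
        if lr > best_lr:
            best_k, best_lr = k, lr
    return best_k

# Synthetic increments: small variance, then large variance from index 5 on.
data = [0.1, -0.2, 0.15, -0.1, 0.05, 2.0, -1.8, 2.2, -2.1, 1.9]
cp = variance_change_point(data)
```

The significance of the detected break would then be assessed by the test the paper proposes (e.g., via the null distribution of the maximal statistic), not shown here.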
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
NASA Astrophysics Data System (ADS)
Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing
Methods developed based on bifurcation theory have demonstrated their potential in driving network identification for complex human diseases, including the work by Chen et al. Recently, bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving network prediction: a time-course cellular differentiation study often contains only one sample at each time point, while driving network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We will discuss the advantages and limitations of our proposed methods, as well as potential further improvements of our methods.
McCormack, Gavin R; Giles-Corti, Billie; Timperio, Anna; Wood, Georgina; Villanueva, Karen
2011-04-12
Children who participate in regular physical activity obtain health benefits. Preliminary pedometer-based cut-points representing sufficient levels of physical activity among youth have been established; however limited evidence regarding correlates of achieving these cut-points exists. The purpose of this study was to identify correlates of pedometer-based cut-points among elementary school-aged children. A cross-section of children in grades 5-7 (10-12 years of age) were randomly selected from the most (n = 13) and least (n = 12) 'walkable' public elementary schools (Perth, Western Australia), stratified by socioeconomic status. Children (n = 1480; response rate = 56.6%) and parents (n = 1332; response rate = 88.8%) completed a survey, and steps were collected from children using pedometers. Pedometer data were categorized to reflect the sex-specific pedometer-based cut-points of ≥15000 steps/day for boys and ≥12000 steps/day for girls. Associations between socio-demographic characteristics, sedentary and active leisure-time behavior, independent mobility, active transportation and built environmental variables - collected from the child and parent surveys - and meeting pedometer-based cut-points were estimated (odds ratios: OR) using generalized estimating equations. Overall 927 children participated in all components of the study and provided complete data. On average, children took 11407 ± 3136 steps/day (boys: 12270 ± 3350 vs. girls: 10681 ± 2745 steps/day; p < 0.001) and 25.9% (boys: 19.1 vs. girls: 31.6%; p < 0.001) achieved the pedometer-based cut-points. After adjusting for all other variables and school clustering, meeting the pedometer-based cut-points was negatively associated (p < 0.05) with being male (OR = 0.42), parent self-reported number of different destinations in the neighborhood (OR 0.93), and a friend's (OR 0.62) or relative's (OR 0.44, boys only) house being at least a 10-minute walk from home.
Achieving the pedometer-based cut-points was positively associated with participating in screen-time < 2 hours/day (OR 1.88), not being driven to school (OR 1.48), attending a school located in a high SES neighborhood (OR 1.33), the average number of steps among children within the respondent's grade (for each 500 step/day increase: OR 1.29), and living further than a 10-minute walk from a relative's house (OR 1.69, girls only). Comprehensive multi-level interventions that reduce screen-time, encourage active travel to/from school and foster a physically active classroom culture might encourage more physical activity among children.
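The sex-specific cut-point classification used in the study reduces to a simple threshold check (≥15000 steps/day for boys, ≥12000 steps/day for girls), which can be transcribed directly:

```python
# Sex-specific pedometer cut-points from the study (steps/day).
CUT_POINTS = {"boy": 15000, "girl": 12000}

def meets_cut_point(sex, steps_per_day):
    """True if the child's average daily step count meets the cut-point."""
    return steps_per_day >= CUT_POINTS[sex]

# Example: the reported mean for boys (12270 steps/day) falls short of the
# boys' cut-point but would exceed the girls' cut-point.
assert not meets_cut_point("boy", 12270)
assert meets_cut_point("girl", 12270)
```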
2011-01-01
Background Children who participate in regular physical activity obtain health benefits. Preliminary pedometer-based cut-points representing sufficient levels of physical activity among youth have been established; however limited evidence regarding correlates of achieving these cut-points exists. The purpose of this study was to identify correlates of pedometer-based cut-points among elementary school-aged children. Method A cross-section of children in grades 5-7 (10-12 years of age) were randomly selected from the most (n = 13) and least (n = 12) 'walkable' public elementary schools (Perth, Western Australia), stratified by socioeconomic status. Children (n = 1480; response rate = 56.6%) and parents (n = 1332; response rate = 88.8%) completed a survey, and steps were collected from children using pedometers. Pedometer data were categorized to reflect the sex-specific pedometer-based cut-points of ≥15000 steps/day for boys and ≥12000 steps/day for girls. Associations between socio-demographic characteristics, sedentary and active leisure-time behavior, independent mobility, active transportation and built environmental variables - collected from the child and parent surveys - and meeting pedometer-based cut-points were estimated (odds ratios: OR) using generalized estimating equations. Results Overall 927 children participated in all components of the study and provided complete data. On average, children took 11407 ± 3136 steps/day (boys: 12270 ± 3350 vs. girls: 10681 ± 2745 steps/day; p < 0.001) and 25.9% (boys: 19.1 vs. girls: 31.6%; p < 0.001) achieved the pedometer-based cut-points. After adjusting for all other variables and school clustering, meeting the pedometer-based cut-points was negatively associated (p < 0.05) with being male (OR = 0.42), parent self-reported number of different destinations in the neighborhood (OR 0.93), and a friend's (OR 0.62) or relative's (OR 0.44, boys only) house being at least a 10-minute walk from home. 
Achieving the pedometer-based cut-points was positively associated with participating in screen-time < 2 hours/day (OR 1.88), not being driven to school (OR 1.48), attending a school located in a high SES neighborhood (OR 1.33), the average number of steps among children within the respondent's grade (for each 500 step/day increase: OR 1.29), and living further than a 10-minute walk from a relative's house (OR 1.69, girls only). Conclusions Comprehensive multi-level interventions that reduce screen-time, encourage active travel to/from school and foster a physically active classroom culture might encourage more physical activity among children. PMID:21486475
Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif
2017-05-01
Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect arrival time of these fiducial points, such as P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated based on this arrival time, and are used as ECG features to generate random BSes with low latency. According to our analysis on real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with the schemes that solely rely on IPIs to generate BSes, this MFBSG algorithm uses five feature values from one heart beat cycle, and can be up to five times faster than the solely IPI-based methods. So, it achieves a design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
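A minimal sketch of the MFBSG idea, assuming the fiducial-point arrival times have already been detected (e.g., by a discrete wavelet transform, which is not shown): compute the five intervals per heartbeat cycle and keep a few low-order bits of each, as IPI-based schemes do. The beat records and the 4-bit depth below are hypothetical.

```python
# Hypothetical beat records: arrival times (s) of the P, Q, R, S, T
# fiducial points for two consecutive heartbeats.
beats = [
    {"P": 0.10, "Q": 0.24, "R": 0.28, "S": 0.32, "T": 0.48},
    {"P": 0.92, "Q": 1.06, "R": 1.10, "S": 1.15, "T": 1.30},
]

def interval_bits(ms, nbits=4):
    """Low-order bits of an interval in milliseconds, as a bit string."""
    return format(ms % (1 << nbits), f"0{nbits}b")

def beat_to_bits(prev, cur, nbits=4):
    """Five intervals per cycle (RR, RP, RQ, RS, RT), each contributing
    `nbits` bits, so one heartbeat yields 5 * nbits bits."""
    r = cur["R"]
    intervals_ms = [
        round((r - prev["R"]) * 1000),  # RR
        round((r - cur["P"]) * 1000),   # RP
        round((r - cur["Q"]) * 1000),   # RQ
        round((cur["S"] - r) * 1000),   # RS
        round((cur["T"] - r) * 1000),   # RT
    ]
    return "".join(interval_bits(ms, nbits) for ms in intervals_ms)

bits = beat_to_bits(beats[0], beats[1])
```

This is why the method is up to five times faster than IPI-only schemes: one cycle yields five feature values instead of one.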
A shape-based segmentation method for mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen
2013-07-01
Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules are defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and computationally effective time cost, and that it segments pole-like objects particularly well.
Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing
NASA Astrophysics Data System (ADS)
McCaffrey, Nathaniel J.; Pantuso, Francis P.
1998-03-01
A real time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP based system is designed with fixed-point algorithms and an off-chip look up table (LUT) to reduce the cost considerably over other contemporary approaches. This paper describes several real-time contrast enhancing systems advanced at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to qualify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking so no frames are lost. The enhancer measures 13 cm X 6.4 cm X 3.2 cm, operates off 9 VAC and consumes 12 W. This processor is small and inexpensive enough to be mounted with field deployed security cameras and can be used for surveillance, video forensics and real-time medical imaging.
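A software sketch of the scheme, assuming a plain histogram-equalization LUT (the abstract does not specify the exact histogram algorithm, so that choice is an assumption): subsample the frame to build a luminance histogram, derive the LUT with integer arithmetic only, and map pixels through it.

```python
# Sketch of the LUT-based enhancer: subsampled histogram, integer
# (fixed-point-style) equalization LUT, then a per-pixel table lookup.
# In the hardware version the LUT is rewritten during vertical blanking.

def build_equalization_lut(pixels, step=4, levels=256):
    """Histogram-equalization LUT built from every `step`-th pixel; the
    subsampling stands in for the calculation-cycle savings noted above."""
    sampled = pixels[::step]
    hist = [0] * levels
    for p in sampled:
        hist[p] += 1
    lut, cum, n = [0] * levels, 0, len(sampled)
    for i in range(levels):
        cum += hist[i]
        lut[i] = (cum * (levels - 1)) // n  # integer-only scaling
    return lut

# Toy low-contrast "frame" of 8-bit luminance values.
frame = [10, 10, 12, 14, 200, 210, 12, 14, 220, 10, 200, 12]
lut = build_equalization_lut(frame, step=1)
enhanced = [lut[p] for p in frame]
```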
Pulse Detonation Physiochemical and Exhaust Relaxation Processes
2006-05-01
based on total time to detonation and detonation percentage. Nomenclature: A = Arrhenius constant; Ea = activation energy; Ecrit = critical... the precision uncertainties vary for each data point. Therefore, the total experimental uncertainty will vary by data point. A comprehensive bias
Turning Points during the Life of Student Project Teams: A Qualitative Study
ERIC Educational Resources Information Center
Raes, Elisabeth; Kyndt, Eva; Dochy, Filip
2015-01-01
In this qualitative study a more flexible alternative of conceptualising changes over time in teams is tested within student project teams. The conceptualisation uses turning points during the lifespan of a team to outline team development, based on work by Erbert, Mearns, & Dena (2005). Turning points are moments that made a significant…
Shi, Xiaoping; Wu, Yuehua; Rao, Calyampudi Radhakrishna
2018-06-05
The change-point detection has been carried out in terms of the Euclidean minimum spanning tree (MST) and shortest Hamiltonian path (SHP), with successful applications in the determination of authorship of a classic novel, the detection of change in a network over time, the detection of cell divisions, etc. However, these Euclidean graph-based tests may fail if a dataset contains random interferences. To solve this problem, we present a powerful non-Euclidean SHP-based test, which is consistent and distribution-free. The simulation shows that the test is more powerful than both Euclidean MST- and SHP-based tests and the non-Euclidean MST-based test. Its applicability in detecting both landing and departure times in video data of bees' flower visits is illustrated.
Karimzadeh, Iman; Khalili, Hossein
2016-06-06
Serum cystatin C (Cys C) has a number of advantages over serum creatinine in the evaluation of kidney function. Apart from the Cys C level itself, several formulas have also been introduced in different clinical settings for the estimation of glomerular filtration rate (GFR) based upon the serum Cys C level. The aim of the present study was to compare a serum Cys C-based equation with the Cockcroft-Gault serum creatinine-based formula, both used in the calculation of GFR, in patients receiving amphotericin B. Fifty-four adult patients with no history of acute or chronic kidney injury, scheduled to receive conventional amphotericin B for an anticipated duration of at least 1 week for any indication, were recruited. At three time points during amphotericin B treatment, including days 0, 7, and 14, serum cystatin C as well as creatinine levels were measured. GFR at the above time points was estimated by both creatinine-based (Cockcroft-Gault) and serum Cys C-based equations. There was significant correlation between creatinine-based and Cys C-based GFR values at days 0 (R = 0.606, P = 0.001) and 7 (R = 0.714, P < 0.001). In contrast to GFR estimated by the Cockcroft-Gault equation, the mean (95% confidence interval) Cys C-based GFR values at different studied time points were comparable within as well as between patients with and without amphotericin B nephrotoxicity. Our results suggested that the Gentian Cys C-based GFR equation correlated significantly with the Cockcroft-Gault formula at least in the early period of treatment with amphotericin B. Graphical abstract: Comparison between a serum creatinine- and a cystatin C-based glomerular filtration rate equation in patients receiving amphotericin B.
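The Cockcroft-Gault formula used as the creatinine-based comparator is standard: CrCl = (140 − age) × weight / (72 × SCr), multiplied by 0.85 for women. A direct transcription:

```python
def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance (mL/min).

    age in years, weight in kg, serum creatinine in mg/dL;
    the result is scaled by 0.85 for female patients.
    """
    crcl = (140 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# A 40-year-old, 72 kg man with SCr 1.0 mg/dL: (140-40)*72/(72*1) = 100 mL/min.
assert round(cockcroft_gault(40, 72, 1.0)) == 100
```

The Cys C-based equation used in the study (the Gentian equation) is assay-specific and is not reproduced here.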
Capesius, Joseph P.; Arnold, L. Rick
2012-01-01
The Mass Balance results were quite variable over time such that they appeared suspect with respect to the concept of groundwater flow as being gradual and slow. The large degree of variability in the day-to-day and month-to-month Mass Balance results is likely the result of many factors. These factors could include ungaged stream inflows or outflows, short-term streamflow losses to and gains from temporary bank storage, and any lag in streamflow accounting owing to streamflow lag time of flow within a reach. The Pilot Point time series results were much less variable than the Mass Balance results and extreme values were effectively constrained. Less day-to-day variability, smaller magnitude extreme values, and smoother transitions in base-flow estimates provided by the Pilot Point method are more consistent with a conceptual model of groundwater flow being gradual and slow. The Pilot Point method provided a better fit to the conceptual model of groundwater flow and appeared to provide reasonable estimates of base flow.
He, Feng; Zeng, An-Ping
2006-01-01
Background The increasing availability of time-series expression data opens up new possibilities to study functional linkages of genes. Present methods used to infer functional linkages between genes from expression data are mainly based on a point-to-point comparison. Change trends between consecutive time points in time-series data have been so far not well explored. Results In this work we present a new method based on extracting main features of the change trend and level of gene expression between consecutive time points. The method, termed as trend correlation (TC), includes two major steps: 1, calculating a maximal local alignment of change trend score by dynamic programming and a change trend correlation coefficient between the maximal matched change levels of each gene pair; 2, inferring relationships of gene pairs based on two statistical extraction procedures. The new method considers time shifts and inverted relationships in a similar way as the local clustering (LC) method but the latter is merely based on a point-to-point comparison. The TC method is demonstrated with data from yeast cell cycle and compared with the LC method and the widely used Pearson correlation coefficient (PCC) based clustering method. The biological significance of the gene pairs is examined with several large-scale yeast databases. Although the TC method predicts an overall lower number of gene pairs than the other two methods at a same p-value threshold, the additional number of gene pairs inferred by the TC method is considerable: e.g. 20.5% compared with the LC method and 49.6% with the PCC method for a p-value threshold of 2.7E-3. Moreover, the percentage of the inferred gene pairs consistent with databases by our method is generally higher than the LC method and similar to the PCC method. 
A significant number of the gene pairs only inferred by the TC method are process-identity or function-similarity pairs or have well-documented biological interactions, including 443 known protein interactions and some known cell cycle related regulatory interactions. It should be emphasized that the overlapping of gene pairs detected by the three methods is normally not very high, indicating a necessity of combining the different methods in search of functional association of genes from time-series data. For a p-value threshold of 1E-5 the percentage of process-identity and function-similarity gene pairs among the shared part of the three methods reaches 60.2% and 55.6% respectively, building a good basis for further experimental and functional study. Furthermore, the combined use of methods is important to infer more complete regulatory circuits and networks as exemplified in this study. Conclusion The TC method can significantly augment the current major methods to infer functional linkages and biological networks and is well suited to exploring temporal relationships of gene expression in time-series data. PMID:16478547
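The core "change trend" notion can be illustrated without the dynamic-programming alignment: encode each expression profile as the signs of its consecutive changes, then score a gene pair by how well the trend vectors agree. This is a deliberate simplification of the TC method (no time shifts, no inverted matches, no significance extraction); names and data are illustrative.

```python
# Simplified "change trend" comparison between two expression profiles.

def trend(profile):
    """Sign of the change between each pair of consecutive time points:
    +1 up, -1 down, 0 unchanged."""
    return [(1 if b > a else -1 if b < a else 0)
            for a, b in zip(profile, profile[1:])]

def trend_agreement(p1, p2):
    """Fraction of consecutive intervals in which both genes move the
    same way; 1.0 means identical change trends."""
    t1, t2 = trend(p1), trend(p2)
    return sum(1 for a, b in zip(t1, t2) if a == b) / len(t1)

# Two hypothetical profiles that rise, peak, then fall together.
g1 = [1.0, 2.0, 3.0, 2.5, 1.5]
g2 = [0.5, 1.4, 2.2, 2.0, 1.1]
score = trend_agreement(g1, g2)
```

A point-to-point Pearson correlation would also rank this pair highly, but the trend encoding is what lets the full TC method tolerate level differences and align shifted or inverted responses.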
Armstrong, Rob
2017-03-24
The only fluralaner-related conclusion presented in a study comparing the efficacy of fluralaner and sarolaner for control of the tick Amblyomma americanum on dogs is based on study times that are outside the label administration recommendations. Label recommendations for fluralaner treatment of A. americanum on dogs in the USA require re-administration at 56 days. This 56 day re-administration was not conducted in the study; therefore, all assessed time points following 56 days post-treatment in the study present comparisons that are not consistent with fluralaner administration recommendations. The only comparative time point assessed prior to 56 days showing a difference between treatments was at 42 days post-administration, a time point when methodological problems were identified by the investigators. Therefore, the only comparative study conclusion that a difference was shown between fluralaner and sarolaner beyond 6 weeks (42 days) after treatment is not based on recommended product use. Furthermore, if the study does not show that there is a difference between the treatments at times when the products are used as recommended, then there also can be no comparative discussion of the risk of tick-borne pathogen transmission risk between treatments.
A Framework for Validating Traffic Simulation Models at the Vehicle Trajectory Level
DOT National Transportation Integrated Search
2017-03-01
Based on current practices, traffic simulation models are calibrated and validated using macroscopic measures such as 15-minute averages of traffic counts or average point-to-point travel times. For an emerging number of applications, including conne...
Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics
NASA Astrophysics Data System (ADS)
Kohira, K.; Masuda, H.
2017-09-01
A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormous: a large storage capacity is required to store such point-clouds, and transferring them imposes heavy loads on the network. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
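A hedged sketch of the pipeline, with zlib (the DEFLATE codec that PNG uses internally) standing in for a full PNG encoder: quantize each point's range to 16 bits, arrange the points in a 2D grid, and compress losslessly. The grid layout here is hypothetical; the paper derives rows and columns from GPS time and the scanner parameters.

```python
import struct
import zlib

def pack_ranges(grid, resolution_mm=1):
    """Quantize a 2D grid of range values (mm) to big-endian 16-bit
    integers and compress the resulting byte plane with DEFLATE."""
    rows = [
        b"".join(struct.pack(">H", int(r / resolution_mm)) for r in row)
        for row in grid
    ]
    return zlib.compress(b"".join(rows), level=9)

# Toy grid of smoothly varying ranges (mm), as a road surface might give:
# neighboring pixels are similar, which is what makes the encoding pay off.
grid = [[1500.0 + c for c in range(64)] for _ in range(32)]
raw_size = 32 * 64 * 2          # uncompressed size: 16 bits per pixel
packed = pack_ranges(grid)
```

Because nearby points on a road surface have similar ranges, the 2D layout exposes spatial redundancy that a raw point list hides, which is the intuition behind the paper's image-based encoding.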
Post-mortem chemical excitability of the iris should not be used for forensic death time diagnosis.
Koehler, Katja; Sehner, Susanne; Riemer, Martin; Gehl, Axel; Raupach, Tobias; Anders, Sven
2018-04-18
Post-mortem chemical excitability of the iris is one of the non-temperature-based methods in forensic diagnosis of the time since death. Although several authors reported on their findings, using different measurement methods, currently used time limits are based on a single dissertation which has recently been doubted to be applicable for forensic purpose. We investigated changes in pupil-iris ratio after application of acetylcholine (n = 79) or tropicamide (n = 58) and in controls at upper and lower time limits that are suggested in the current literature, using a digital photography-based measurement method with excellent reliability. We observed "positive," "negative," and "paradox" reactions in both intervention and control conditions at all investigated post-mortem time points, suggesting spontaneous changes in pupil size to be causative for the finding. According to our observations, post-mortem chemical excitability of the iris should not be used in forensic death time estimation, as results may cause false conclusions regarding the correct time point of death and might therefore be strongly misleading.
Data Processing and Quality Evaluation of a Boat-Based Mobile Laser Scanning System
Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri
2013-01-01
Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and quality of a boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aim in data processing was to filter noise points, detect shorelines as well as points below water surface and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for identification of multipath reflections and shoreline delineation. We demonstrate the possibility to measure bathymetry data in shallow (0–1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground points classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess elevation and vertical accuracies of the BoMMS data. PMID:24048340
Patel, Nitesh V; Sundararajan, Sri; Keller, Irwin; Danish, Shabbar
2018-01-01
Objective: Magnetic resonance (MR)-guided stereotactic laser amygdalohippocampectomy is a minimally invasive procedure for the treatment of refractory epilepsy in patients with mesial temporal sclerosis. Limited data exist on the post-ablation volumetric trends associated with the procedure. Methods: 10 patients with mesial temporal sclerosis underwent MR-guided stereotactic laser amygdalohippocampectomy. Three independent raters computed ablation volumes at the following time points: pre-ablation (PreA), immediate post-ablation (IPA), 24 hours post-ablation (24PA), first follow-up post-ablation (FPA), and greater than three months follow-up post-ablation (>3MPA), using OsiriX DICOM Viewer (Pixmeo, Bernex, Switzerland). Statistical trends in post-ablation volumes were determined across the time points. Results: MR-guided stereotactic laser amygdalohippocampectomy produces a rapid rise and a distinct peak in post-ablation volume immediately following the procedure. IPA volumes are significantly higher than those at all other time points. Comparing individual time points within each rater's dataset (intra-rater), a significant difference was seen between the IPA time point and all others. There was no statistical difference between the 24PA, FPA, and >3MPA time points. A correlation analysis demonstrated the strongest correlations at the 24PA (r=0.97), FPA (r=0.95), and >3MPA (r=0.99) time points, with a weaker correlation at IPA (r=0.92). Conclusion: MR-guided stereotactic laser amygdalohippocampectomy produces a maximal increase in post-ablation volume immediately following the procedure, which decreases and stabilizes by 24 hours post-procedure and beyond three months of follow-up. Based on the correlation analysis, the lower inter-rater reliability at the IPA time point suggests that volume assessment at this time point may be less accurate. We recommend that post-ablation volume assessments after selective laser ablation of the amygdalohippocampal complex (SLAH) be made at least 24 hours post-procedure.
Surgical Pathology Resident Rotation Restructuring at a Tertiary Care Academic Center.
Mehr, Chelsea R; Obstfeld, Amrom E; Barrett, Amanda C; Montone, Kathleen T; Schwartz, Lauren E
2017-01-01
Changes in the field of pathology and resident education necessitate ongoing evaluation of residency training. Evolutionary change is particularly important for surgical pathology rotations, which form the core of anatomic pathology training programs. In the past, we organized this rotation based on subjective insight. When faced with the recent need to restructure the rotation, we strove for a more evidence-based process. Our approach involved 2 primary sources of data. We quantified the number of cases and blocks submitted per case type to estimate workload and surveyed residents about the time required to gross specimens in all organ systems. A multidisciplinary committee including faculty, residents, and staff evaluated the results and used the data to model how various changes to the rotation would affect resident workload, turnaround time, and other variables. Finally, we identified rotation structures that equally distributed work and created a point-based system that capped grossing time for residents of different experience. Following implementation, we retrospectively compared turnaround time and duty hour violations before and after these changes and surveyed residents about their experiences with both systems. We evaluated the accuracy of the point-based system by examining grossing times and comparing them to the assigned point values. We found overall improvement in the rotation following the implementation. As there is essentially no literature on the subject of surgical pathology rotation organization, we hope that our experience will provide a road map to improve pathology resident education at other institutions.
Proof of Concept for the Trajectory-Level Validation Framework for Traffic Simulation Models
DOT National Transportation Integrated Search
2017-10-30
Based on current practices, traffic simulation models are calibrated and validated using macroscopic measures such as 15-minute averages of traffic counts or average point-to-point travel times. For an emerging number of applications, including conne...
Influencing Factors of the Initiation Point in the Parachute-Bomb Dynamic Detonation System
NASA Astrophysics Data System (ADS)
Qizhong, Li; Ye, Wang; Zhongqi, Wang; Chunhua, Bai
2017-12-01
The parachute system has been widely applied in modern armament design, especially for fuel-air explosives. Because detonation of fuel-air explosives occurs during flight, it is necessary to investigate the influences on the initiation point to ensure successful dynamic detonation. In practice, the initiation position is scattered over an area within the falling fuel cloud because of errors in the influencing factors. In this paper, the major factors influencing the initiation point were explored through airdrop tests, and the relationships between the initiation-point area and these factors were obtained. Based on these relationships, a volume equation of the initiation-point area was established to predict the range of the initiation point within the fuel. The analysis showed that the area in which the initiation point appears is scattered on account of errors in the attitude angle, the secondary-initiation-charge velocity, and the delay time. The attitude angle was the major influencing factor along the horizontal axis, whereas the secondary-initiation-charge velocity and the delay time were the major influencing factors along the vertical axis. Overall, the geometry of the initiation-point area is a sector shaped by the combined errors of the attitude angle, secondary-initiation-charge velocity, and delay time.
NASA Astrophysics Data System (ADS)
Nassiri, Isar; Lombardo, Rosario; Lauria, Mario; Morine, Melissa J.; Moyseos, Petros; Varma, Vijayalakshmi; Nolen, Greg T.; Knox, Bridgett; Sloper, Daniel; Kaput, Jim; Priami, Corrado
2016-07-01
The investigation of the complex processes involved in cellular differentiation must be based on unbiased, high-throughput data processing methods to identify relevant biological pathways. A number of bioinformatics tools are available that can generate lists of pathways ranked by statistical significance (i.e., by p-value), while ideally one would also functionally score the pathways relative to each other or to other interacting parts of the system or process. We describe a new computational method (Network Activity Score Finder - NASFinder) to identify tissue-specific, omics-determined sub-networks and the connections with their upstream regulator receptors, to obtain a systems view of the differentiation of human adipocytes. Adipogenesis of human SGBS pre-adipocyte cells in vitro was monitored with a transcriptomic data set comprising six time points (0, 6, 48, 96, 192, 384 hours). To elucidate the mechanisms of adipogenesis, NASFinder was used to perform a time-point analysis, comparing each time point against the control (0 h), and a time-lapse analysis, comparing each time point with the previous one. NASFinder identified the coordinated activity of seemingly unrelated processes in each comparison, providing the first systems view of adipogenesis in culture. NASFinder has been implemented as a web-based, freely available resource with novel, easy-to-read visualization of omics data sets and network modules.
FPFH-based graph matching for 3D point cloud registration
NASA Astrophysics Data System (ADS)
Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua
2018-04-01
Correspondence detection is a vital step in point cloud registration, as it helps obtain a reliable initial alignment. In this paper, we put forward an advanced point-feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial candidate correspondences. Next, a new objective function is proposed to make graph matching better suited to partially overlapping point clouds. The objective function is optimized by simulated annealing to obtain the final set of correct correspondences. Finally, we present a novel set-partitioning method that transforms the NP-hard optimization problem into one solvable in O(n³) time. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost than other point cloud registration methods.
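The abstract's pipeline starts from FPFH-based candidate correspondences before graph matching. A minimal sketch of that first stage only, using mutual nearest neighbours in descriptor space; the 33-bin descriptors are synthetic stand-ins for real FPFH features, and this is not the paper's algorithm:

```python
import numpy as np

def mutual_nearest_correspondences(desc_a, desc_b):
    """Initial correspondence detection: pair (i, j) is kept only when j is
    the nearest descriptor to i in B *and* i is the nearest to j in A."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)   # nearest neighbour of each row of A in B
    b_to_a = d.argmin(axis=0)   # nearest neighbour of each row of B in A
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

rng = np.random.default_rng(1)
desc_a = rng.random((20, 33))                 # 33 bins, as in FPFH
perm = rng.permutation(20)
desc_b = desc_a[perm] + rng.normal(0, 0.01, (20, 33))  # noisy, reordered copy

pairs = mutual_nearest_correspondences(desc_a, desc_b)
assert all(perm[j] == i for i, j in pairs)    # matches the known permutation
```

In a full pipeline these candidates would then be pruned by the graph-matching objective; mutual nearest neighbours merely keep the candidate set small and mostly correct.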
NASA Astrophysics Data System (ADS)
Cheng, Jun; Zhang, Jun; Tian, Jinwen
2015-12-01
Based on a deep analysis of the LiveWire interactive boundary-extraction algorithm, a new algorithm focused on improving its speed is proposed in this paper. First, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the resulting low-resolution image. Second, the LiveWire shortest path is computed with a directional search over the control-point set, exploiting the spatial relationship between the two control points the user provides in real time. Third, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue, instead of a priority queue, is used as the storage pool when optimizing shortest-path values, reducing the complexity of this step from O(n²) to O(n). Finally, a region-based iterative backward-projection method using neighborhood-pixel polling converts the dual-pixel boundary of the reconstructed image into a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform, whose decomposition and reconstruction are fast and consistent with the texture features of the image, with those of the optimal path search based on the control-point directional search, which reduces the time complexity of the original algorithm. The algorithm thus speeds up interactive boundary extraction while reflecting the boundary information of the image more comprehensively; together, these methods substantially improve the execution efficiency and robustness of the algorithm.
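The speedup begins by extracting the boundary on a Haar-downsampled image. A minimal sketch of the one step that supplies that low-resolution image, the LL (low-pass) band of a single-level 2D Haar transform (illustrative only; the paper's full transform also produces detail bands):

```python
import numpy as np

def haar_lowpass(img):
    """One level of the 2D Haar transform's LL band: average each 2x2
    block, halving resolution in both directions."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]   # trim odd edges
    return (img[0::2, 0::2] + img[0::2, 1::2]
          + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

img = np.arange(16, dtype=float).reshape(4, 4)
ll = haar_lowpass(img)
assert ll.shape == (2, 2)
assert ll[0, 0] == (0 + 1 + 4 + 5) / 4.0   # mean of the top-left 2x2 block
```

Running the shortest-path search on this quarter-size image is what cuts the number of graph nodes, before the result is projected back to full resolution.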
Horn, W; Miksch, S; Egghart, G; Popow, C; Paky, F
1997-09-01
Real-time systems for monitoring and therapy planning, which receive their data from on-line monitoring equipment and computer-based patient records, require reliable data. Data validation has to utilize and combine a set of fast methods to detect, eliminate, and repair faulty data, which may lead to life-threatening conclusions. The strength of data validation results from the combination of numerical and knowledge-based methods applied to both continuously-assessed high-frequency data and discontinuously-assessed data. Dealing with high-frequency data, examining single measurements is not sufficient. It is essential to take into account the behavior of parameters over time. We present time-point-, time-interval-, and trend-based methods for validation and repair. These are complemented by time-independent methods for determining an overall reliability of measurements. The data validation benefits from the temporal data-abstraction process, which provides automatically derived qualitative values and patterns. The temporal abstraction is oriented on a context-sensitive and expectation-guided principle. Additional knowledge derived from domain experts forms an essential part for all of these methods. The methods are applied in the field of artificial ventilation of newborn infants. Examples from the real-time monitoring and therapy-planning system VIE-VENT illustrate the usefulness and effectiveness of the methods.
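The abstract distinguishes time-point-based and time-interval-based validation of high-frequency monitoring data. A minimal sketch of that combination, with the thresholds and example values chosen purely for illustration (this is not the VIE-VENT rule base):

```python
def validate_series(values, lo, hi, max_step):
    """Time-point check (plausible value range) combined with a
    time-interval check (maximum plausible change between consecutive
    validated samples). Returns (index, reason) pairs for faulty samples."""
    faults, prev = [], None
    for i, v in enumerate(values):
        if not lo <= v <= hi:
            faults.append((i, "out-of-range"))       # time-point method
        elif prev is not None and abs(v - prev) > max_step:
            faults.append((i, "implausible-jump"))   # time-interval method
        else:
            prev = v   # only validated samples become the new reference
    return faults

# transcutaneous-CO2-like values: one sensor dropout (0) and one artifact jump
readings = [42.0, 43.1, 0.0, 43.5, 44.0, 58.0, 44.2]
print(validate_series(readings, lo=10, hi=120, max_step=6))
# -> [(2, 'out-of-range'), (5, 'implausible-jump')]
```

Keeping only validated samples as the reference for the interval check mirrors the abstract's point that faulty data must be eliminated before it contaminates later comparisons.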
A Bionic Camera-Based Polarization Navigation Sensor
Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai
2014-01-01
Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode. PMID:25051029
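The abstract does not give the sensor's algorithm; a common camera-based approach measures intensity behind analyzers at 0°, 45°, 90°, and 135° and recovers the polarization direction from the linear Stokes parameters. A minimal sketch under that assumption:

```python
import numpy as np

def polarization_angle(i0, i45, i90, i135):
    """Angle of linear polarization from four analyzer orientations,
    via Stokes parameters Q = I0 - I90 and U = I45 - I135."""
    q = i0 - i90
    u = i45 - i135
    return 0.5 * np.arctan2(u, q)

# simulate Malus's law for skylight polarized at 30 degrees
true_angle = np.deg2rad(30.0)
intensities = [np.cos(true_angle - np.deg2rad(a)) ** 2
               for a in (0, 45, 90, 135)]
est = polarization_angle(*intensities)
assert abs(np.rad2deg(est) - 30.0) < 1e-9
```

The single-point mode of the sensor would report one such angle per frame; a multi-point mode evaluates the same expression per pixel region to map the polarization distribution pattern.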
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Spencer; Rodrigues, George, E-mail: george.rodrigues@lhsc.on.ca; Department of Epidemiology/Biostatistics, University of Western Ontario, London
2013-01-01
Purpose: To perform a rigorous technological assessment and statistical validation of a software technology for anatomic delineations of the prostate on MRI datasets. Methods and Materials: A 3-phase validation strategy was used. Phase I consisted of anatomic atlas building using 100 prostate cancer MRI data sets to provide training data sets for the segmentation algorithms. In phase II, 2 experts contoured 15 new MRI prostate cancer cases using 3 approaches (manual, N points, and region of interest). In phase III, 5 new physicians with variable MRI prostate contouring experience segmented the same 15 phase II datasets using 3 approaches: manual, N points with no editing, and full autosegmentation with user editing allowed. Statistical analyses for time and accuracy (using Dice similarity coefficient) endpoints used traditional descriptive statistics, analysis of variance, analysis of covariance, and pooled Student t test. Results: In phase I, average (SD) total and per slice contouring time for the 2 physicians was 228 (75), 17 (3.5), 209 (65), and 15 seconds (3.9), respectively. In phase II, statistically significant differences in physician contouring time were observed based on physician, type of contouring, and case sequence. The N points strategy resulted in superior segmentation accuracy when initial autosegmented contours were compared with final contours. In phase III, statistically significant differences in contouring time were observed based on physician, type of contouring, and case sequence again. The average relative timesaving for N points and autosegmentation were 49% and 27%, respectively, compared with manual contouring. The N points and autosegmentation strategies resulted in average Dice values of 0.89 and 0.88, respectively. Pre- and postedited autosegmented contours demonstrated a higher average Dice similarity coefficient of 0.94. Conclusion: The software provided robust contours with minimal editing required. Observed time savings were seen for all physicians irrespective of experience level and baseline manual contouring speed.
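The accuracy endpoint above is the Dice similarity coefficient. A minimal sketch of how it is computed on binary contour masks (the masks here are synthetic, not study data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|); 1.0 = identical."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto   = np.zeros((10, 10), dtype=bool); auto[2:8, 2:8] = True    # 36 px
edited = np.zeros((10, 10), dtype=bool); edited[3:8, 2:8] = True  # 30 px
assert dice(auto, auto) == 1.0
assert abs(dice(auto, edited) - 2 * 30 / (36 + 30)) < 1e-12   # ~0.909
```

Values around 0.89-0.94, as reported in the abstract, correspond to masks overlapping on roughly nine-tenths of their combined area.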
A fast image matching algorithm based on key points
NASA Astrophysics Data System (ADS)
Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng
2014-05-01
Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements that craft navigation places on a matching algorithm, such as speed, accuracy, and adaptability, a fast key-point image matching method is investigated and developed. The main research tasks include: (1) developing an improved fast key-point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST); a method of calculating the self-adapting threshold is introduced for images with different contrast, and the Hessian matrix is adopted to eliminate unstable edge points in order to obtain key points with higher stability; this approach requires little computation and offers high positioning accuracy and strong anti-noise ability; (2) describing key points with PCA-SIFT: a 128-dimensional vector is formed with the SIFT method for each extracted key point; a low-dimensional feature space is established from the eigenvectors of all key points, and each descriptor is projected onto this space to form a low-dimensional eigenvector; after the PCA reduction the descriptor shrinks from the original 128 dimensions to 20, which speeds up approximate nearest-neighbour search and thereby the overall method; (3) using the distance ratio between the nearest and second-nearest neighbour as the criterion for initial matching points, from which the original matched point pairs are obtained; after analysing the common methods for eliminating false matches (e.g., RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to further discard falsely matched point pairs; and (4) introducing an affine transformation model to correct coordinate differences between the real-time image and the reference image, which yields the matching of the two images. SPOT5 remote sensing images captured at different dates and airborne images captured with different flight attitudes were used to test the performance of the method in terms of matching accuracy, running time, and robustness to rotation. The results show the effectiveness of the approach.
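The initial matching step relies on the nearest/second-nearest distance-ratio criterion. A minimal sketch with synthetic 20-D descriptors (the dimensionality echoes the PCA-reduced descriptors above; the data and the 0.8 ratio are illustrative, not the paper's values):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Keep a match only when the nearest neighbour is clearly closer
    than the second-nearest (Lowe-style distance-ratio criterion)."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dist)[:2]       # nearest and second nearest
        if dist[j1] < ratio * dist[j2]:
            matches.append((i, j1))
    return matches

rng = np.random.default_rng(2)
desc_b = rng.random((50, 20))                           # reference descriptors
desc_a = desc_b[:10] + rng.normal(0, 0.005, (10, 20))   # noisy copies
matches = ratio_test_matches(desc_a, desc_b)
assert matches == [(i, i) for i in range(10)]
```

Ambiguous descriptors, whose two closest candidates are nearly equidistant, are rejected outright; the surviving pairs are what a later geometric-consistency stage (RANSAC or the heuristic restriction above) would then prune.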
El-Diasty, Mohammed; Pagiatakis, Spiros
2009-01-01
In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain optimal navigation solution for MEMS-based INS/GPS integration.
Fu, Yu; Pedrini, Giancarlo
2014-01-01
In recent years, optical interferometry-based techniques have been widely used to perform noncontact measurement of dynamic deformation in different industrial areas. In these applications, various physical quantities need to be measured in any instant and the Nyquist sampling theorem has to be satisfied along the time axis on each measurement point. Two types of techniques were developed for such measurements: one is based on high-speed cameras and the other uses a single photodetector. The limitation of the measurement range along the time axis in camera-based technology is mainly due to the low capturing rate, while the photodetector-based technology can only do the measurement on a single point. In this paper, several aspects of these two technologies are discussed. For the camera-based interferometry, the discussion includes the introduction of the carrier, the processing of the recorded images, the phase extraction algorithms in various domains, and how to increase the temporal measurement range by using multiwavelength techniques. For the detector-based interferometry, the discussion mainly focuses on the single-point and multipoint laser Doppler vibrometers and their applications for measurement under extreme conditions. The results show the effort done by researchers for the improvement of the measurement capabilities using interferometry-based techniques to cover the requirements needed for the industrial applications. PMID:24963503
Statistical properties of several models of fractional random point processes
NASA Astrophysics Data System (ADS)
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
Topological photonic crystal with ideal Weyl points
NASA Astrophysics Data System (ADS)
Wang, Luyang; Jian, Shao-Kai; Yao, Hong
Weyl points in three-dimensional photonic crystals behave as monopoles of Berry flux in momentum space. Here, based on symmetry analysis, we show that a minimal number of symmetry-related Weyl points can be realized in time-reversal-invariant photonic crystals. We propose to realize these "ideal" Weyl points in modified double-gyroid photonic crystals, which is confirmed by our first-principles photonic band-structure calculations. Photonic crystals with ideal Weyl points are qualitatively advantageous in applications such as angular and frequency selectivity, broadband invisibility cloaking, and broadband 3D imaging.
Trajectory data privacy protection based on differential privacy mechanism
NASA Astrophysics Data System (ADS)
Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong
2018-05-01
In this paper, we propose a trajectory data privacy protection scheme based on the differential privacy mechanism. In the proposed scheme, the algorithm first selects the points to be protected from the user's trajectory data. Second, it forms a polygon from each protected point and the adjacent, frequently accessed points selected from the accessing-point database, and computes the polygon centroid. Finally, noise is added to the polygon centroid by the differential privacy method, the noisy centroid replaces the protected point, and the algorithm constructs and issues the new trajectory data. Experiments show that the proposed algorithms run fast, that the scheme's privacy protection is effective, and that its data usability is high.
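The final step, noising the polygon centroid, is an instance of the standard Laplace mechanism. A minimal sketch under illustrative assumptions: the vertex mean stands in for the polygon centroid, and the sensitivity and epsilon values are arbitrary, not the paper's settings:

```python
import numpy as np

def perturb_centroid(polygon, epsilon, sensitivity):
    """Replace a protected point by its polygon centroid plus Laplace
    noise of scale sensitivity/epsilon (the standard Laplace mechanism);
    the vertex mean is used as a simple stand-in for the area centroid."""
    centroid = np.mean(polygon, axis=0)
    rng = np.random.default_rng()
    noise = rng.laplace(0.0, sensitivity / epsilon, size=2)
    return centroid + noise

# polygon formed by adjacent, frequently accessed points around a
# protected trajectory point; true centroid is (2.0, 1.0)
polygon = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 2.0], [0.0, 2.0]])
released = perturb_centroid(polygon, epsilon=1.0, sensitivity=0.5)
assert released.shape == (2,)
```

Smaller epsilon means larger noise and stronger privacy; the released point, not the protected original, is what enters the published trajectory.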
A simple, remote, video based breathing monitor.
Regev, Nir; Wulich, Dov
2017-07-01
Breathing monitors have become the cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist on the market, some with vital-signs monitoring capabilities, but none are remote. This paper presents a simple yet efficient real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked with the well-known Lucas-Kanade algorithm on a frame-by-frame basis. A generalized likelihood ratio test is then applied to each of the many interest points to detect which are moving in a harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a maximum-likelihood fusion algorithm optimally estimates the breathing rate from all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's estimate, based on experiments on two babies and three adults.
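The abstract names Pisarenko harmonic decomposition for tracking the harmonic frequency. A minimal single-sinusoid sketch on synthetic motion data (the sampling rate and breathing frequency are invented; this is not the authors' implementation):

```python
import numpy as np

def pisarenko_frequency(x):
    """Pisarenko harmonic decomposition for one real sinusoid: the
    eigenvector of the 3x3 autocorrelation matrix with the smallest
    eigenvalue annihilates the sinusoid, and cos(w) = -a1 / (2*a0)."""
    windows = np.lib.stride_tricks.sliding_window_view(x, 3)
    r = windows.T @ windows / len(windows)     # 3x3 autocorrelation estimate
    eigvals, eigvecs = np.linalg.eigh(r)       # eigenvalues ascending
    a = eigvecs[:, 0]                          # noise-subspace vector
    return np.arccos(np.clip(-a[1] / (2.0 * a[0]), -1.0, 1.0))

# breathing-like harmonic motion: 0.3 Hz (18 BPM) sampled at 30 fps
fs, f_true = 30.0, 0.3
n = np.arange(3000)
x = (np.cos(2 * np.pi * f_true / fs * n)
     + np.random.default_rng(4).normal(0, 0.01, n.size))
f_est = pisarenko_frequency(x) * fs / (2 * np.pi)
assert abs(f_est - f_true) < 0.01
```

With a 3x3 correlation matrix the method resolves exactly one sinusoid, which suits a breathing signal; a fusion stage would then combine the per-point estimates.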
The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes
NASA Astrophysics Data System (ADS)
Bhatnagar, S.; Cornwell, T. J.
2017-11-01
This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time for a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to an antenna Shape SelfCal algorithm for real-time tracking and correction of pointing offsets and changes in antenna shape.
Northoff, Georg
2016-05-01
William James postulated a "stream of consciousness" that presupposes temporal continuity. The neuronal mechanisms underlying the construction of such temporal continuity remain unclear, however. In my contribution, I propose a neuro-phenomenal hypothesis based on slow cortical potentials and their extension of the present moment, as described by the phenomenal term "width of present". More specifically, I focus on how the brain's neural activity needs to be encoded in order to make the "stream of consciousness" possible. This leads us to the low-frequency fluctuations of the brain's neural activity, and more specifically to slow cortical potentials (SCPs). Owing to their long phase duration as low-frequency fluctuations, SCPs can integrate different stimuli and their associated neural activity from different regions in one converging region. Such integration may be central for consciousness to occur, as recently postulated by He and Raichle. They leave open, however, the question of the exact neuronal mechanisms, such as the encoding strategy, that make possible the association of the otherwise purely neuronal SCPs with consciousness and its phenomenal features. I hypothesize that SCPs allow for linking and connecting different discrete points in physical time by encoding their statistically based temporal differences rather than the single discrete time points themselves. This presupposes difference-based coding rather than stimulus-based coding. The encoding of such statistically based temporal differences makes it possible to "go beyond" the merely physical features of the stimuli, that is, their single discrete time points and their conduction delays (as related to their neural processing in the brain). This, in turn, makes possible the constitution of a "local temporal continuity" of neural activity in one particular region.
The concept of "local temporal continuity" signifies the linkage and integration of different discrete time points into one neural activity in a particular region. How does such local temporal continuity predispose the experience of time in consciousness? For that, I turn to phenomenological philosopher Edmund Husserl and his description of what he calls "inner time consciousness" (Husserl and Brough, 1990). One hallmark of humans' "inner time consciousness" is that we experience events and objects in succession and duration in our consciousness; according to Husserl, this amounts to what he calls the "width of [the] present." The concept of the width of present describes the extension of the present beyond the single discrete time point, such as, for instance, when we perceive different tones as a melody. I now hypothesize the degree of the width of present to be directly dependent upon and thus predisposed by the degree of the temporal differences between two (or more) discrete time points as they are encoded into neural activity. I therefore conclude that the SCPs and their encoding of neural activity in terms of temporal differences must be regarded as a neural predisposition of consciousness (NPC) as distinguished from a neural correlate of consciousness (NCC). Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Saberi, Elaheh; Reza Hejazi, S.
2018-02-01
In the present paper, Lie point symmetries of the time-fractional generalized Hirota-Satsuma coupled KdV (HS-cKdV) system based on the Riemann-Liouville derivative are obtained. Using the derived Lie point symmetries, we obtain similarity reductions and conservation laws of the considered system. Finally, some analytic solutions are furnished by means of the invariant subspace method in the Caputo sense.
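For reference, the Riemann-Liouville fractional derivative on which the system above is based has the standard textbook form (this is general background, not reproduced from the paper itself):

```latex
{}_{a}D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}\,\frac{d^n}{dt^n}\int_a^t \frac{f(\tau)}{(t-\tau)^{\alpha-n+1}}\,d\tau, \qquad n-1 < \alpha < n,\; n \in \mathbb{N}.
```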
A Novel Real-Time Reference Key Frame Scan Matching Method.
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-05-07
Unmanned aerial vehicles (UAVs) represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by simultaneous localization and mapping (SLAM), using either local or global approaches. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It falls back on the iterative closest point (ICP) algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational time, indicating its potential for use in real-time systems.
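The point-to-point ICP fallback mentioned above can be illustrated with a minimal 2D sketch (generic ICP under simplifying assumptions, not the authors' RKF implementation; all names are ours):

```python
import numpy as np

def best_rigid_2d(src, dst):
    """Least-squares rotation and translation mapping src onto dst (SVD/Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Iterate: match each src point to its nearest dst point, then realign."""
    cur = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # brute-force nearest-neighbour association (fine for small clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        R, t = best_rigid_2d(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Given a scan that is a small rigid motion of the reference, the accumulated transform converges to that motion; a kd-tree would replace the brute-force association in practice.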
Curve Set Feature-Based Robust and Fast Pose Estimation Algorithm
Hashimoto, Koichi
2017-01-01
Bin picking refers to picking randomly piled objects from a bin for industrial production purposes, and robotic bin picking is widely used in automated assembly lines. In order to achieve higher productivity, a fast and robust pose estimation algorithm is necessary to recognize and localize the randomly piled parts. This paper proposes a pose estimation algorithm for bin picking tasks using point cloud data. A novel descriptor, the Curve Set Feature (CSF), is proposed to describe a point by the surface fluctuation around it; it is also capable of evaluating poses. The Rotation Match Feature (RMF) is proposed to match CSFs efficiently. The matching process combines the idea of matching in 2D space from the original Point Pair Feature (PPF) algorithm with nearest neighbor search. A voxel-based pose verification method is introduced to evaluate the poses and is shown to be more than 30 times faster than the kd-tree-based verification method. Our algorithm is evaluated against a large number of synthetic and real scenes and proves to be robust to noise, able to detect metal parts, and more accurate and more than 10 times faster than PPF and Oriented, Unique and Repeatable (OUR)-Clustered Viewpoint Feature Histogram (CVFH). PMID:28771216
NASA Astrophysics Data System (ADS)
Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.
2018-04-01
In this work, we report a novel way of generating a ground truth dataset for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within voxels at multiple resolutions, in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds from different sensors of the same scene by considering the labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
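The voting step, assigning each voxel the majority label of the reference points it contains and then labeling new points by voxel lookup, can be sketched at a single resolution as follows (an illustrative sketch; names and the single-resolution simplification are our assumptions):

```python
import numpy as np
from collections import Counter, defaultdict

def voxel_key(p, size):
    """Integer voxel index of a 3D point at the given voxel size."""
    return tuple(np.floor(p / size).astype(int))

def build_label_grid(ref_pts, ref_labels, voxel_size=1.0):
    """Majority vote of reference labels inside each voxel."""
    votes = defaultdict(Counter)
    for p, lab in zip(ref_pts, ref_labels):
        votes[voxel_key(p, voxel_size)][lab] += 1
    return {k: c.most_common(1)[0][0] for k, c in votes.items()}

def annotate(new_pts, grid, voxel_size=1.0, unknown=-1):
    """Label each new point by the voxel it falls into."""
    return [grid.get(voxel_key(p, voxel_size), unknown) for p in new_pts]
```

The multi-resolution scheme in the paper would repeat this with an octree of voxel sizes and combine the votes across levels.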
Cross-correlation of point series using a new method
NASA Technical Reports Server (NTRS)
Strothers, Richard B.
1994-01-01
Traditional methods of cross-correlation of two time series do not apply to point time series. Here, a new method, devised specifically for point series, utilizes a correlation measure that is based on the rms difference (or, alternatively, the median absolute difference) between nearest neighbors in overlapped segments of the two series. Error estimates for the observed locations of the points, as well as a systematic shift of one series with respect to the other to accommodate a constant, but unknown, lead or lag, are easily incorporated into the analysis using Monte Carlo techniques. A methodological restriction adopted here is that one series be treated as a template series against which the other, called the target series, is cross-correlated. To estimate a significance level for the correlation measure, the adopted alternative (null) hypothesis is that the target series arises from a homogeneous Poisson process. The new method is applied to cross-correlating the times of the greatest geomagnetic storms with the times of maximum in the undecennial solar activity cycle.
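The core correlation measure and lag scan can be sketched as follows (a minimal illustration of the rms nearest-neighbour difference; the function names and discrete lag grid are our assumptions, and the Monte Carlo error treatment is omitted):

```python
import numpy as np

def rms_nn_difference(template, target):
    """RMS of the distance from each target point to its nearest template point."""
    template = np.sort(np.asarray(template, float))
    d = [np.abs(template - t).min() for t in np.asarray(target, float)]
    return np.sqrt(np.mean(np.square(d)))

def best_lag(template, target, lags):
    """Shift the target series over candidate lags; the lag minimising the
    RMS nearest-neighbour difference is the estimated lead/lag."""
    scores = [rms_nn_difference(template, np.asarray(target) + s) for s in lags]
    return lags[int(np.argmin(scores))], min(scores)
```

Significance would then be assessed by repeating `best_lag` on target series drawn from a homogeneous Poisson process, as the abstract describes.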
On the Motion of Agents across Terrain with Obstacles
NASA Astrophysics Data System (ADS)
Kuznetsov, A. V.
2018-01-01
The paper is devoted to finding the time-optimal route of an agent travelling across a region from a given source point to a given target point. At each point of this region, a maximum allowed speed is specified. This speed limit may vary in time. The continuous statement of this problem and the case when the agent travels on a grid with square cells are considered. In the latter case, the time is also discrete, and the number of admissible directions of motion at each point in time is eight. The existence of an optimal solution of this problem is proved, and error estimates for the approximate solution obtained on the grid are derived. It is found that decreasing the size of the cells below a certain limit does not further improve the approximation. These results can be used to estimate the quasi-optimal trajectory of the agent motion across rugged terrain produced by an algorithm based on a cellular automaton that was earlier developed by the author.
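A discrete version of this search, with eight admissible directions and a static speed field for simplicity (the paper also allows time-varying limits), can be sketched with Dijkstra's algorithm:

```python
import heapq, math

def fastest_route(speed, start, goal):
    """Dijkstra over a grid with 8-connected moves; a step into cell (r, c)
    costs step_length / speed[r][c]. speed == 0 marks an obstacle."""
    rows, cols = len(speed), len(speed[0])
    moves = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)]
    best = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        t, cell = heapq.heappop(pq)
        if cell == goal:
            return t
        if t > best.get(cell, math.inf):
            continue                       # stale queue entry
        r, c = cell
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and speed[nr][nc] > 0:
                nt = t + math.hypot(dr, dc) / speed[nr][nc]
                if nt < best.get((nr, nc), math.inf):
                    best[(nr, nc)] = nt
                    heapq.heappush(pq, (nt, (nr, nc)))
    return math.inf
```

Handling a time-varying speed limit would replace `speed[nr][nc]` with a lookup at the current arrival time `t`.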
A cluster merging method for time series microarray with production values.
Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio
2014-09-01
A challenging task in time-course microarray data analysis is to cluster genes meaningfully while combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured at the same time points) and merging them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacteria. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevant measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of time series and the same shape-based algorithm.
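The merging idea, grouping genes by how often they are clustered together across replicates, can be sketched as follows (a simplified illustration with an assumed agreement threshold; the paper's ranking and correlation measures are omitted):

```python
from itertools import combinations

def merge_by_coclustering(clusterings, genes, threshold=0.5):
    """Group genes that share a cluster in at least `threshold` fraction of
    the per-replicate clusterings. `clusterings` is a list of dicts
    mapping gene -> cluster id, one dict per replicate."""
    n = len(clusterings)
    parent = {g: g for g in genes}          # union-find forest

    def find(g):
        while parent[g] != g:
            parent[g] = parent[parent[g]]   # path halving
            g = parent[g]
        return g

    for a, b in combinations(genes, 2):
        together = sum(1 for c in clusterings if c[a] == c[b])
        if together / n >= threshold:
            parent[find(a)] = find(b)       # merge the two groups

    groups = {}
    for g in genes:
        groups.setdefault(find(g), set()).add(g)
    return list(groups.values())
```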
NASA Astrophysics Data System (ADS)
Xin, Meiting; Li, Bing; Yan, Xiao; Chen, Lei; Wei, Xiang
2018-02-01
A robust coarse-to-fine registration method based on the backpropagation (BP) neural network and shift window technology is proposed in this study. Specifically, there are three steps: coarse alignment between the model data and measured data, data simplification based on the BP neural network and point reservation in the contour region of point clouds, and fine registration with the reweighted iterative closest point algorithm. In the process of rough alignment, the initial rotation matrix and the translation vector between the two datasets are obtained. After performing subsequent simplification operations, the number of points can be reduced greatly. Therefore, the time and space complexity of the accurate registration can be significantly reduced. The experimental results show that the proposed method improves the computational efficiency without loss of accuracy.
Pressure sensitivity of dual resonant long-period gratings written in boron co-doped optical fiber
NASA Astrophysics Data System (ADS)
Smietana, Mateusz; Bock, Wojtek J.; Mikulic, Predrag; Chen, Jiahua; Wisniewski, Roland
2011-05-01
The paper presents a pressure sensor based on a long-period grating (LPG) written in boron co-doped photosensitive fiber and operating at the phase-matching turning point. It is shown that the pressure sensitivity can be tuned by varying the UV exposure time during the LPG fabrication process. The achieved pressure sensitivity can reach over 1 nm/bar, and is at least four times higher than for previously presented gratings working away from the double-resonance regime. In terms of intensity-based measurement, the sensitivity at the turning point can reach 0.212 dB/bar.
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capture and rendering, they have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state of the art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
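A 1D analogue of such a weight-adaptive Haar step can be sketched as follows (a simplified illustration of the transform's butterfly on occupancy-weighted coefficients, not the authors' full octree-driven implementation):

```python
import math

def adaptive_haar_step(values, weights):
    """One level of a weight-adaptive Haar transform: neighbouring
    coefficients with weights w1, w2 merge into a low-pass term (a
    sqrt-weighted average, carrying weight w1 + w2) and a high-pass
    detail term."""
    low, low_w, high = [], [], []
    for i in range(0, len(values) - 1, 2):
        w1, w2 = weights[i], weights[i + 1]
        a, b = values[i], values[i + 1]
        s = math.sqrt(w1 + w2)
        low.append((math.sqrt(w1) * a + math.sqrt(w2) * b) / s)
        high.append((-math.sqrt(w2) * a + math.sqrt(w1) * b) / s)
        low_w.append(w1 + w2)
    if len(values) % 2:                    # unpaired element passes through
        low.append(values[-1])
        low_w.append(weights[-1])
    return low, low_w, high
```

Repeating the step on the low-pass outputs builds the hierarchy; the high-pass coefficients per level would then feed the per-sub-band Laplace-model arithmetic coder.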
Unsteady three-dimensional thermal field prediction in turbine blades using nonlinear BEM
NASA Technical Reports Server (NTRS)
Martin, Thomas J.; Dulikravich, George S.
1993-01-01
A time-and-space accurate and computationally efficient fully three dimensional unsteady temperature field analysis computer code has been developed for truly arbitrary configurations. It uses boundary element method (BEM) formulation based on an unsteady Green's function approach, multi-point Gaussian quadrature spatial integration on each panel, and a highly clustered time-step integration. The code accepts either temperatures or heat fluxes as boundary conditions that can vary in time on a point-by-point basis. Comparisons of the BEM numerical results and known analytical unsteady results for simple shapes demonstrate very high accuracy and reliability of the algorithm. An example of computed three dimensional temperature and heat flux fields in a realistically shaped internally cooled turbine blade is also discussed.
Transitions between refrigeration regions in extremely short quantum cycles
NASA Astrophysics Data System (ADS)
Feldmann, Tova; Kosloff, Ronnie
2016-05-01
The relation between the geometry of refrigeration cycles and their performance is explored. The model studied is based on a coupled spin system. Small cycle times, termed sudden refrigerators, develop coherence and inner friction. We explore the interplay between coherence and energy of the working medium employing a family of sudden cycles with decreasing cycle times. At the point of maximum coherence the cycle changes geometry. This region of cycle times is characterized by a dissipative resonance where heat is dissipated both to the hot and cold baths. We rationalize the change of geometry of the cycle as a result of a half-integer quantization which maximizes coherence. From this point on, increasing or decreasing the cycle time eventually leads to refrigeration cycles. The transition point between refrigerators and short-circuit cycles is characterized by a transition from finite to singular dynamical temperature. Extremely short cycle times reach a universal limit where all cycle types are equivalent.
Blaya, Joaquín A; Shin, Sonya; Contreras, Carmen; Yale, Gloria; Suarez, Carmen; Asencios, Luis; Kim, Jihoon; Rodriguez, Pablo; Cegielski, Peter; Fraser, Hamish S F
2011-01-01
To evaluate the time to communicate laboratory results to health centers (HCs) between the e-Chasqui web-based information system and the pre-existing paper-based system. Cluster randomized controlled trial in 78 HCs in Peru. In the intervention group, 12 HCs had web access to results via e-Chasqui (point-of-care HCs) and forwarded results to 17 peripheral HCs. In the control group, 22 point-of-care HCs received paper results directly and forwarded them to 27 peripheral HCs. Baseline data were collected for 15 months. Post-randomization data were collected for at least 2 years. Comparisons were made between intervention and control groups, stratified by point-of-care versus peripheral HCs. For point-of-care HCs, the intervention group took less time to receive drug susceptibility tests (DSTs) (median 9 vs 16 days, p<0.001) and culture results (4 vs 8 days, p<0.001) and had a lower proportion of 'late' DSTs taking >60 days to arrive (p<0.001) than the control. For peripheral HCs, the intervention group had similar communication times for DST (median 22 vs 19 days, p=0.30) and culture (10 vs 9 days, p=0.10) results, as well as proportion of 'late' DSTs (p=0.57) compared with the control. Only point-of-care HCs with direct access to the e-Chasqui information system had reduced communication times and fewer results with delays of >2 months. Peripheral HCs had no benefits from the system. This suggests that health establishments should have point-of-care access to reap the benefits of electronic laboratory reporting.
NASA Technical Reports Server (NTRS)
Hepner, T. E.; Meyers, J. F. (Inventor)
1985-01-01
A laser velocimeter covariance processor which calculates the auto covariance and cross covariance functions for a turbulent flow field based on Poisson sampled measurements in time from a laser velocimeter is described. The device will process a block of data that is up to 4096 data points in length and return a 512 point covariance function with 48-bit resolution along with a 512 point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface and be controlled by a minicomputer from which the data is received and the results returned. A typical 4096 point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
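In software, the normalization the processor performs in hardware can be sketched as a slotted autocovariance, where lag bins of product pairs are normalized by the interarrival-time histogram (an illustrative reimplementation; the names and binning scheme are our assumptions):

```python
import numpy as np

def slotted_autocovariance(times, values, lag_width, n_lags):
    """Autocovariance of randomly (e.g. Poisson) sampled data: products of
    fluctuation pairs are accumulated into lag bins, and each bin is
    normalised by its pair count (the interarrival-time histogram)."""
    v = np.asarray(values, float) - np.mean(values)
    t = np.asarray(times, float)
    acc = np.zeros(n_lags)
    cnt = np.zeros(n_lags, dtype=int)
    for i in range(len(t)):
        for j in range(i, len(t)):
            k = int((t[j] - t[i]) / lag_width)   # lag slot for this pair
            if k < n_lags:
                acc[k] += v[i] * v[j]
                cnt[k] += 1
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0), cnt
```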
ERIC Educational Resources Information Center
Abebe, Dawit Shawel; Torgersen, Leila; Lien, Lars; Hafstad, Gertrud S.; von Soest, Tilmann
2014-01-01
We investigated longitudinal predictors for disordered eating from early adolescence to young adulthood (12-34 years) across gender and different developmental phases among Norwegian young people. Survey data from a population-based sample were collected at four time points (T) over a 13-year time span. A population-based sample of 5,679 females…
Height Accuracy Based on Different Rtk GPS Method for Ultralight Aircraft Images
NASA Astrophysics Data System (ADS)
Tahar, K. N.
2015-08-01
Height accuracy is one of the important elements in surveying work, especially for control point establishment, which requires accurate measurement. There are many methods that can be used to acquire height values, such as tacheometry, leveling, and the Global Positioning System (GPS). This study investigated the effect on height accuracy of two different observation modes: single-based and network-based GPS methods. The GPS network corrections are acquired from a local network, namely the Iskandar network. This network has been set up to provide real-time correction data to the rover GPS station, while the single-based method relies on a known GPS station. Nine ground control points were established evenly across the study area. Each ground control point was observed for about two minutes and ten minutes. It was found that the height accuracy gives different results for each observation mode.
Modeling seasonal detection patterns for burrowing owl surveys
Quresh S. Latif; Kathleen D. Fleming; Cameron Barrows; John T. Rotenberry
2012-01-01
To guide monitoring of burrowing owls (Athene cunicularia) in the Coachella Valley, California, USA, we analyzed survey-method-specific seasonal variation in detectability. Point-based call-broadcast surveys yielded high early season detectability that then declined through time, whereas detectability on driving surveys increased through the season. Point surveys...
ERIC Educational Resources Information Center
Radunzel, Justine; Noble, Julie
2012-01-01
This study compared the effectiveness of ACT[R] Composite score and high school grade point average (HSGPA) for predicting long-term college success. Outcomes included annual progress towards a degree (based on cumulative credit-bearing hours earned), degree completion, and cumulative grade point average (GPA) at 150% of normal time to degree…
Application of change-point problem to the detection of plant patches.
López, I; Gámez, M; Garay, J; Standovár, T; Varga, Z
2010-03-01
In ecology, if the considered area or space is large, the spatial distribution of individuals of a given plant species is never homogeneous; plants form distinct patches. Homogeneity change in space or in time (in particular, the related change-point problem) is an important research subject in mathematical statistics. For a given data system along a straight line, two areas are considered, where the data of each area come from different discrete distributions with unknown parameters. A method is presented for the estimation of the distribution change-point between the two areas, and an estimate is given for the distributions separated by the obtained change-point. The solution of this problem is based on the maximum likelihood method. Furthermore, based on an adaptation of the well-known bootstrap resampling, a method for the estimation of the so-called change-interval is also given. The latter approach is very general, since it not only applies in the case of the maximum-likelihood estimation of the change-point, but can also be used starting from any other change-point estimate known in the ecological literature. The proposed model is validated against typical ecological situations, providing at the same time a verification of the applied algorithms.
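The maximum-likelihood change-point estimation can be sketched for the Poisson case as follows (an illustrative sketch; the paper treats general discrete distributions, and the bootstrap change-interval step is omitted):

```python
import math

def poisson_loglik(counts, lam):
    """Log-likelihood of i.i.d. Poisson counts with mean lam."""
    if lam == 0:
        return 0.0 if all(c == 0 for c in counts) else -math.inf
    return sum(c * math.log(lam) - lam - math.lgamma(c + 1) for c in counts)

def ml_change_point(counts):
    """Try every split point, fit a Poisson mean to each side by MLE
    (the sample mean), and keep the split with the highest total
    log-likelihood."""
    best_k, best_ll = None, -math.inf
    for k in range(1, len(counts)):
        left, right = counts[:k], counts[k:]
        ll = (poisson_loglik(left, sum(left) / len(left))
              + poisson_loglik(right, sum(right) / len(right)))
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k
```

A change-interval in the spirit of the paper would come from re-running `ml_change_point` on bootstrap resamples of the counts and taking a quantile range of the resulting estimates.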
Wardley, C Sonia; Applegate, E Brooks; Almaleki, A Deyab; Van Rhee, James A
2016-03-01
A 6-year longitudinal study was conducted to compare the perceived stress experienced during a 2-year master's physician assistant program by 5 cohorts of students enrolled in either problem-based learning (PBL) or lecture-based learning (LBL) curricular tracks. The association of perceived stress with academic achievement was also assessed. Students rated their stress levels on visual analog scales in relation to family obligations, financial concerns, schoolwork, and relocation and overall on 6 occasions throughout the program. A mixed model analysis of variance examined the students' perceived level of stress by curriculum and over time. Regression analysis further examined school work-related stress after controlling for other stressors and possible lag effect of stress from the previous time point. Students reported that overall stress increased throughout the didactic year followed by a decline in the clinical year with statistically significant curricular (PBL versus LBL) and time differences. PBL students also reported significantly more stress resulting from school work than LBL students at some time points. Moreover, when the other measured stressors and possible lag effects were controlled, significant differences between PBL and LBL students' perceived stress related to school work persisted at the 8- and 12-month measurement points. Increased stress in both curricula was associated with higher achievement in overall and individual organ system examination scores. Physician assistant programs that embrace a PBL pedagogy to prepare students to think clinically may need to provide students with additional support through the didactic curriculum.
Real-time seam tracking control system based on line laser visions
NASA Astrophysics Data System (ADS)
Zou, Yanbiao; Wang, Yanbo; Zhou, Weilin; Chen, Xiangzhi
2018-07-01
A six-degree-of-freedom robotic welding automatic tracking platform was designed in this study to realize real-time tracking of weld seams, and the feature point tracking method and adaptive fuzzy control algorithm used in the welding process were studied and analyzed. A laser vision sensor was designed and its measuring principle studied. Before welding, the initial coordinate values of the feature points are obtained using morphological methods. After welding begins, a target tracking method based on a Gaussian kernel is used to extract the feature points of the weld in real time. An adaptive fuzzy controller was designed that takes the deviation of the feature points and the rate of change of the deviation as inputs. The quantization factors, scale factor, and weight function are adjusted in real time, and the input and output domains, fuzzy rules, and membership functions are constantly updated to generate a series of smooth bias voltages for the robot. Three groups of experiments were conducted on different types of curved welds in a strong-arc, high-splash noise environment using 120 A short-circuit metal active gas (MAG) arc welding. The tracking error was less than 0.32 mm and the sensor's measurement frequency can reach 20 Hz. The torch end ran smoothly during welding. The weld trajectory can be tracked accurately, thereby satisfying the requirements of welding applications.
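The two-input fuzzy control idea can be illustrated with a minimal Mamdani-style controller (a generic sketch with assumed triangular membership functions and rule table, not the authors' controller, whose factors and domains are adapted online):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_correction(err, derr, gain=1.0):
    """Fuzzify the deviation and its rate into Negative/Zero/Positive sets,
    apply a rule table, and defuzzify by a weighted average of rule outputs."""
    sets = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}
    out = {"N": -1.0, "Z": 0.0, "P": 1.0}
    rules = {("N", "N"): "N", ("N", "Z"): "N", ("N", "P"): "Z",
             ("Z", "N"): "N", ("Z", "Z"): "Z", ("Z", "P"): "P",
             ("P", "N"): "Z", ("P", "Z"): "P", ("P", "P"): "P"}
    num = den = 0.0
    for (e_set, d_set), o_set in rules.items():
        w = min(tri(err, *sets[e_set]), tri(derr, *sets[d_set]))  # rule firing strength
        num += w * out[o_set]
        den += w
    return gain * (num / den if den else 0.0)
```

The adaptive part of the paper's controller would correspond to retuning `gain`, the set breakpoints, and the rule weights during welding.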
Johnson, Jeffrey P.; Villard, Sarah; Kiran, Swathi
2017-01-01
Purpose This study was conducted to investigate the static and dynamic relationships between impairment-level cognitive-linguistic abilities and activity-level functional communication skills in persons with aphasia (PWA). Method In Experiment 1, a battery of standardized assessments was administered to a group of PWA (N = 72) to examine associations between cognitive-linguistic ability and functional communication at a single time point. In Experiment 2, impairment-based treatment was administered to a subset of PWA from Experiment 1 (n = 39) in order to examine associations between change in cognitive-linguistic ability and change in function and associations at a single time point. Results In both experiments, numerous significant associations were found between scores on tests of cognitive-linguistic ability and a test of functional communication at a single time point. In Experiment 2, significant treatment-induced gains were seen on both types of measures in participants with more severe aphasia, yet cognitive-linguistic change scores were not significantly correlated with functional communication change scores. Conclusions At a single time point, cognitive-linguistic and functional communication abilities are associated in PWA. However, although changes on standardized assessments reflecting improvements in both types of skills can occur following an impairment-based therapy, these changes may not be significantly associated with each other. PMID:28196373
A novel point cloud registration using 2D image features
NASA Astrophysics Data System (ADS)
Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng
2017-01-01
Since a 3D scanner captures only one scene of a 3D object at a time, 3D registration across multiple scenes is the key issue of 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing angle images and then uses the 2D feature-based matching method SURF to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between their matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (an SVD-based approach). Therefore, the 3D model of an object can be reconstructed by aligning those point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly; the performance is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between two scenes, the proposed method is robust to larger differences in viewing angle.
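The SVD-based orthogonal Procrustes step, applied to the best-scoring half of the matches, can be sketched as follows (an illustrative sketch; the function names and the minimum-pair guard are our assumptions):

```python
import numpy as np

def procrustes_rotation(src, dst):
    """Optimal rotation aligning src to dst (orthogonal Procrustes via SVD)."""
    H = (src - src.mean(0)).T @ (dst - dst.mean(0))
    U, _, Vt = np.linalg.svd(H)
    # determinant correction keeps the result a proper rotation, not a reflection
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

def align_from_matches(src_pts, dst_pts, match_scores):
    """Keep only the best half of the matches (smallest feature distance),
    then solve for the rotation and translation."""
    order = np.argsort(match_scores)[: max(3, len(match_scores) // 2)]
    s, d = src_pts[order], dst_pts[order]
    R = procrustes_rotation(s, d)
    t = d.mean(0) - R @ s.mean(0)
    return R, t
```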
Liu, Tianqi; Wang, Jing; Liao, Yipeng; Wang, Xin; Wang, Shanshan
2018-04-30
An all-fiber Mach-Zehnder interferometer (MZI) for quasi-continuous two-point temperature sensing in seawater is proposed. Based on beam propagation theory, the transmission spectrum is designed to present two sets of clear and independent interferences. Following this design, the MZI is fabricated and two-point temperature sensing in seawater is demonstrated, with sensitivities of 42.69 pm/°C and 39.17 pm/°C, respectively. By further optimization, a sensitivity of 80.91 pm/°C can be obtained, which is 3-10 times higher than fiber Bragg gratings and microfiber resonators, and higher than almost all similar MZI-based temperature sensors. In addition, factors affecting the sensitivities are also discussed and verified in experiment. The two-point temperature sensing demonstrated here shows the advantages of simple and compact construction, robust structure, easy fabrication, high sensitivity, immunity to salinity, and a tunable distance of 1-20 centimeters between the two points, which may provide a reference for macroscopic oceanic research and other sensing applications based on MZIs.
Pointing Device Performance in Steering Tasks.
Senanayake, Ransalu; Goonetilleke, Ravindra S
2016-06-01
Use of touch-screen-based interactions is growing rapidly. Hence, knowing the maneuvering efficacy of touch screens relative to other pointing devices is of great importance in the context of graphical user interfaces. Movement time, accuracy, and user preferences of four pointing device settings were evaluated on a computer with 14 participants aged 20.1 ± 3.13 years. It was found that, depending on the difficulty of the task, the optimal settings differ for ballistic and visual control tasks. With a touch screen, resting the arm increased movement time for steering tasks. When both performance and comfort are considered, whether to use a mouse or a touch screen for human-computer interaction depends on the steering difficulty. Hence, an input device should be chosen based on the application and optimized to match the graphical user interface. © The Author(s) 2016.
Research on fully distributed optical fiber sensing security system localization algorithm
NASA Astrophysics Data System (ADS)
Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen
2013-12-01
A new fully distributed optical fiber sensing and localization technology based on Mach-Zehnder interferometers is studied. In this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple grouped data segments separately, it not only exploits the strengths of frequency-analysis methods to determine the most effective data group more accurately, but also meets the requirements of a real-time monitoring system. Supplemented with a short-time energy calculation on the grouped signals, the most effective data group can be picked out quickly. Finally, the climbing point can be located accurately through a cross-correlation localization algorithm. The experimental results show that the proposed algorithm can accurately locate the climbing point while effectively filtering out outside interference noise from non-climbing behavior.
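The grouping and localization steps described above can be sketched numerically as follows (an illustrative NumPy sketch, not the authors' implementation; the frame length, sampling rate, propagation speed, and geometry factor are assumed parameters):

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent-sample pairs whose signs differ."""
    signs = np.sign(frame)
    return np.mean(signs[:-1] != signs[1:])

def most_active_group(signal, frame_len):
    """Split the signal into frames and return the index of the frame
    with the highest short-time average zero-crossing rate."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rates = np.array([zero_crossing_rate(f) for f in frames])
    return int(np.argmax(rates)), rates

def locate_by_cross_correlation(sig_a, sig_b, fs, v):
    """Estimate the disturbance position from the arrival-time difference
    between the two interferometer outputs (sampling rate fs, propagation
    speed v; the factor 1/2 is a simplified geometry assumption)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # delay in samples
    return lag / fs * v / 2.0
```

The group with the highest zero-crossing rate (optionally cross-checked against its short-time energy) is taken as the most effective one, and the cross-correlation lag between the two channel signals yields the position estimate.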
Research on facial expression simulation based on depth image
NASA Astrophysics Data System (ADS)
Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao
2017-11-01
Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction, and many other fields. Facial expressions are captured with a Kinect camera. The AAM algorithm, based on statistical information, is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points; facial feature points are detected automatically, while the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation effect, non-feature points are interpolated based on empirical models, and the mapping and interpolation are completed under the constraint of Bézier curves. Thus the feature points on the cartoon face model can be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the method proposed in this paper can accurately simulate facial expressions. Finally, our method is compared with a previous method, and the data show that our method greatly improves implementation efficiency.
Behavioral Effects of a Locomotor-Based Physical Activity Intervention in Preschoolers.
Burkart, Sarah; Roberts, Jasmin; Davidson, Matthew C; Alhassan, Sofiya
2018-01-01
Poor adaptive learning behaviors (i.e., distractibility, inattention, and disruption) are associated with behavior problems and underachievement in school, as well as indicating potential attention-deficit hyperactivity disorder. Strategies are needed to limit these behaviors. Physical activity (PA) has been suggested to improve behavior in school-aged children, but little is known about this relationship in preschoolers. This study examined the effects of a PA intervention on classroom behaviors in preschool-aged children. Eight preschool classrooms (n = 71 children; age = 3.8 ± 0.7 y) with children from low socioeconomic environments were randomized to a locomotor-based PA (LB-PA) or unstructured free playtime (UF-PA) group. Both interventions were implemented by classroom teachers and delivered for 30 minutes per day, 5 days per week for 6 months. Classroom behavior was measured in both groups at 3 time points, whereas PA was assessed at 2 time points over a 6-month period and analyzed with hierarchical linear modeling. Linear growth models showed significant changes in hyperactivity (LB-PA: -2.58 points, P = .001; UF-PA: 2.33 points, P = .03), aggression (LB-PA: -2.87 points, P = .01; UF-PA: 0.97 points, P = .38), and inattention (LB-PA: 1.59 points, P < .001; UF-PA: 3.91 points, P < .001). This research provides promising evidence for the efficacy of LB-PA as a strategy to improve classroom behavior in preschoolers.
ERIC Educational Resources Information Center
Park, Sanghoon
2017-01-01
This paper reports the findings of a comparative analysis of online learner behavioral interactions, time-on-task, attendance, and performance at different points throughout a semester (beginning, during, and end) based on two online courses: one course offering authentic discussion-based learning activities and the other course offering authentic…
Technical and economic feasibility of integrated video service by satellite
NASA Technical Reports Server (NTRS)
Price, Kent M.; Garlow, R. K.; Henderson, T. R.; Kwan, Robert K.; White, L. W.
1992-01-01
The trends and roles of satellite-based video services in the year 2010 time frame are examined based on an overall network and service model for that period. Emphasis is placed on point-to-point and multipoint services, but broadcast could also be accommodated. An estimate of the video traffic is made and the service and general network requirements are identified. User charges are then estimated based on several usage scenarios. To accommodate these traffic needs, a 28-spot-beam satellite architecture with on-board processing and signal mixing is suggested.
Topological photonic crystal with equifrequency Weyl points
NASA Astrophysics Data System (ADS)
Wang, Luyang; Jian, Shao-Kai; Yao, Hong
2016-06-01
Weyl points in three-dimensional photonic crystals behave as monopoles of Berry flux in momentum space. Here, based on general symmetry analysis, we show that a minimal number of four symmetry-related (consequently equifrequency) Weyl points can be realized in time-reversal invariant photonic crystals. We further propose an experimentally feasible way to modify double-gyroid photonic crystals to realize four equifrequency Weyl points, which is explicitly confirmed by our first-principle photonic band-structure calculations. Remarkably, photonic crystals with equifrequency Weyl points are qualitatively advantageous in applications including angular selectivity, frequency selectivity, invisibility cloaking, and three-dimensional imaging.
A Modeling Method of Fluttering Leaves Based on Point Cloud
NASA Astrophysics Data System (ADS)
Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.
2017-09-01
Leaves falling gently or fluttering are common phenomena in natural scenes, and the realism of falling leaves plays an important part in the dynamic modeling of such scenes. Falling-leaf models have wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic falling trajectories are defined: rotation falling, roll falling, and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
NASA Technical Reports Server (NTRS)
Zhou, Wei
1993-01-01
In highly accurate measurement of periodic signals, the greatest common factor frequency and its characteristics serve special functions. A time difference measurement method, based on the detection of dual 'phase coincidence points', is described. The method exploits the characteristics of the greatest common factor frequency to measure the time or phase difference between periodic signals and is applicable over a very wide frequency range. Measurement precision and potential accuracy of several picoseconds were demonstrated with this new method. The instrument based on this method is very simple, and the demands on the common oscillator are low. The method and instrument can be used widely.
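For integer-valued frequencies, the greatest common factor frequency, and hence the repetition period of the phase coincidence points, can be computed directly (a minimal sketch; the example frequencies are hypothetical and not from the paper):

```python
from math import gcd

def common_factor_frequency(f1_hz, f2_hz):
    """Greatest common factor frequency of two integer-valued signal
    frequencies; phase coincidence points repeat with period 1/fc."""
    fc = gcd(f1_hz, f2_hz)
    return fc, 1.0 / fc

# e.g. two oscillators 1 Hz apart near 10 MHz share fc = 1 Hz, so
# coincidence points recur once per second
fc, period = common_factor_frequency(10_000_000, 10_000_001)
```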
Medical student psychological distress and academic performance.
Dendle, Claire; Baulch, Julie; Pellicano, Rebecca; Hay, Margaret; Lichtwark, Irene; Ayoub, Sally; Clarke, David M; Morand, Eric F; Kumar, Arunaz; Leech, Michelle; Horne, Kylie
2018-01-21
The impact of medical student psychological distress on academic performance has not been systematically examined. This study provided an opportunity to closely examine the potential impacts of workplace and study-related stress factors on students' psychological distress and their academic performance during their first clinical year. This one-year prospective cohort study was performed at a tertiary-hospital-based medical school in Melbourne, Australia. Students completed a questionnaire at three time points during the year. The questionnaire included the validated Kessler psychological distress scale (K10) and the General Health Questionnaire-28 (GHQ-28), as well as items about sources of workplace stress. Academic outcome scores were aggregated and correlated with questionnaire results. One hundred and twenty six students participated; 126 (94.7%), 102 (76.7%), and 99 (74.4%) at time points one, two, and three, respectively. 33.1% reported psychological distress at time point one, increasing to 47.4% at time point three. There was no correlation between the K10 scores and academic performance. There was a weak negative correlation between the GHQ-28 at time point three and academic performance. Keeping up to date with knowledge, the need to do well, and fear of negative feedback were the most common workplace stress factors. Overall, poor correlation was noted between psychological distress and academic performance.
NASA Astrophysics Data System (ADS)
Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U.
2014-08-01
For construction progress monitoring, a planned state of the construction at a certain time (as-planned) has to be compared to the actual state (as-built). The as-planned state is derived from a building information model (BIM), which contains the geometry of the building and the construction schedule. In this paper we introduce an approach for generating an as-built point cloud by photogrammetry. Since images on a construction site cannot be taken from every position that would seem necessary, we use a structure-from-motion process in combination with control points to create a scaled point cloud in a consistent coordinate system. Subsequently this point cloud is used for an as-built vs. as-planned comparison. For that purpose, voxels of an octree are marked as occupied, free, or unknown by raycasting based on the triangulated points and the camera positions. This makes it possible to identify building parts that do not yet exist. To verify the existence of building parts, a second test based on the points in front of and behind the as-planned model planes is performed. The proposed procedure is tested on an inner-city construction site under real conditions.
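The occupied/free/unknown labeling by raycasting can be sketched as follows (a simplified dense-grid stand-in for the paper's octree, with assumed voxel size and camera position; not the authors' code):

```python
import numpy as np

def mark_voxels(points, cam, grid_shape, voxel, origin):
    """Label voxels 0=unknown, 1=free, 2=occupied by stepping along the
    ray from the camera position to each triangulated point."""
    state = np.zeros(grid_shape, dtype=np.uint8)   # 0 = unknown
    def idx(p):
        return tuple(((p - origin) // voxel).astype(int))
    for p in points:
        ray = p - cam
        dist = np.linalg.norm(ray)
        step = ray / dist * (voxel / 2.0)          # half-voxel steps
        q = cam.copy()
        while np.linalg.norm(q - cam) < dist - voxel / 2.0:
            i = idx(q)
            if all(0 <= i[k] < grid_shape[k] for k in range(3)) and state[i] != 2:
                state[i] = 1                       # free space along the ray
            q += step
        i = idx(p)
        if all(0 <= i[k] < grid_shape[k] for k in range(3)):
            state[i] = 2                           # triangulated point: occupied
    return state
```

Voxels of as-planned building parts that remain free or unknown after all rays are cast would then indicate parts that do not (yet) exist.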
Stiletto, R; Röthke, M; Schäfer, E; Lefering, R; Waydhas, Ch
2006-10-01
Patient safety has become one of the major aspects of clinical management in recent years, with research focused chiefly on malpractice. In contrast to process analysis in non-medical fields, the analysis of errors occurring during in-patient treatment has been neglected. Patient risk management can be defined as a structured procedure in a clinical unit with the aim of reducing harmful events. A risk point model was created based on a Delphi process and founded on the DIVI data register, and was evaluated in clinically working ICU departments participating in the register database. The results of the risk point evaluation will be integrated into the next database update. This may be a step toward improving the reliability of the register for measuring quality assessment in the ICU.
Sparse electrocardiogram signals recovery based on solving a row echelon-like form of system.
Cai, Pingmei; Wang, Guinan; Yu, Shiwei; Zhang, Hongjuan; Ding, Shuxue; Wu, Zikai
2016-02-01
The study of biology and medicine in noisy environments is an evolving direction in biological data analysis. Among these studies, the analysis of electrocardiogram (ECG) signals in a noisy environment is a challenging direction in personalized medicine. Due to its periodic characteristic, the ECG signal can be roughly regarded as a sparse biomedical signal. This study proposes a two-stage recovery algorithm for sparse biomedical signals in the time domain. In the first stage, the concentration subspaces are found in advance; by exploiting these subspaces, the mixing matrix is estimated accurately. In the second stage, based on the number of active sources at each time point, the time points are divided into different layers. Next, by constructing transformation matrices, these time points form a row echelon-like system, after which the sources at each layer can be solved explicitly by the corresponding matrix operations. It is worth noting that all these operations are conducted under a weak sparsity condition: the number of active sources is less than the number of observations. Experimental results show that the proposed method performs well on the sparse ECG signal recovery problem.
Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation
NASA Astrophysics Data System (ADS)
Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.
2017-12-01
When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R2) between SSR time series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R2 ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R2) on the order of 400 km (with a spread of 100-600 km) was found. R2 from correlations between point observations and colocated grid-box area means determined from satellite data was found to be 0.80 for a 1° grid. To quantify the error that arises when using a point observation as a surrogate for the area-mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) for a 1° grid of 8 (3) W/m2 for monthly (annual) time series. The SSE based on a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible for those time scales.
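An e-folding decorrelation length can be estimated from R² as a function of station separation as follows (the values below are illustrative, chosen only to lie roughly in the reported range; they are not the study's data):

```python
import numpy as np

# Hypothetical squared correlations at increasing station separations (km)
dist = np.array([25., 100., 200., 300., 400., 500.])
r2   = np.array([0.85, 0.66, 0.52, 0.41, 0.33, 0.25])

# Model R^2(d) = exp(-d / L): a linear fit of log(R^2) against distance
# gives slope -1/L, so L is the e-folding decorrelation length.
slope, _ = np.polyfit(dist, np.log(r2), 1)
L = -1.0 / slope   # decorrelation length in km
```

With these illustrative numbers the fit lands near the 400 km scale quoted in the abstract.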
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Liu, Yuan; Liang, Fuxun; Wang, Yongjun
2017-04-01
Updating the inventory of road infrastructure through field work is labor-intensive, time-consuming, and costly. Fortunately, vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. However, robust recognition of road facilities from huge volumes of 3D point clouds is still a challenging issue because of complicated and incomplete structures, occlusions, and varied point densities. Most existing methods utilize point- or object-based features to recognize object candidates, and can only extract limited types of objects, with a relatively low recognition rate, especially for incomplete and small objects. To overcome these drawbacks, this paper proposes a semantic labeling framework that combines multiple aggregation levels (point-segment-object) of features with contextual features to recognize road facilities, such as road surfaces, road boundaries, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, and cars, for highway infrastructure inventory. The proposed method first separates ground and non-ground points, and extracts road surface facilities from the ground points. Non-ground points are segmented into individual candidate objects using the proposed multi-rule region growing method. Then, the multiple aggregation levels of features and the contextual features (relative positions, relative directions, and spatial patterns) associated with each candidate object are calculated and fed into an SVM classifier to label the corresponding candidate object. The recognition performance of combining multiple aggregation levels and contextual features was compared with single-level (point, segment, or object) features using large-scale highway scene point clouds.
Comparative studies demonstrated that the proposed semantic labeling framework significantly improves road facilities recognition precision (90.6%) and recall (91.2%), particularly for incomplete and small objects.
Trinka, Eugen; Cock, Hannah; Hesdorffer, Dale; Rossetti, Andrea O; Scheffer, Ingrid E; Shinnar, Shlomo; Shorvon, Simon; Lowenstein, Daniel H
2015-10-01
The Commission on Classification and Terminology and the Commission on Epidemiology of the International League Against Epilepsy (ILAE) have charged a Task Force to revise the concepts, definition, and classification of status epilepticus (SE). The proposed new definition of SE is as follows: Status epilepticus is a condition resulting either from the failure of the mechanisms responsible for seizure termination or from the initiation of mechanisms which lead to abnormally prolonged seizures (after time point t1). It is a condition which can have long-term consequences (after time point t2), including neuronal death, neuronal injury, and alteration of neuronal networks, depending on the type and duration of seizures. This definition is conceptual, with two operational dimensions: the first is the length of the seizure and the time point (t1) beyond which the seizure should be regarded as "continuous seizure activity." The second time point (t2) is the time of ongoing seizure activity after which there is a risk of long-term consequences. In the case of convulsive (tonic-clonic) SE, both time points (t1 at 5 min and t2 at 30 min) are based on animal experiments and clinical research. This evidence is incomplete, and there is furthermore considerable variation, so these time points should be considered the best estimates currently available. Data are not yet available for other forms of SE, but as knowledge and understanding increase, time points can be defined for specific forms of SE based on scientific evidence and incorporated into the definition, without changing the underlying concepts. A new diagnostic classification system of SE is proposed, which will provide a framework for clinical diagnosis, investigation, and therapeutic approaches for each patient. There are four axes: (1) semiology; (2) etiology; (3) electroencephalography (EEG) correlates; and (4) age.
Axis 1 (semiology) lists different forms of SE, divided into those with prominent motor symptoms, those without prominent motor symptoms, and currently indeterminate conditions (such as acute confusional states with epileptiform EEG patterns). Axis 2 (etiology) is divided into subcategories of known and unknown causes. Axis 3 (EEG correlates) adopts the latest consensus-panel recommendations to use the following descriptors for the EEG: name of pattern, morphology, location, time-related features, modulation, and effect of intervention. Finally, axis 4 divides age groups into neonatal, infancy, childhood, adolescence and adulthood, and elderly. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.
Moving Force Identification: a Time Domain Method
NASA Astrophysics Data System (ADS)
Law, S. S.; Chan, T. H. T.; Zeng, Q. H.
1997-03-01
The solution for the vertical dynamic interaction forces between a moving vehicle and the bridge deck is analytically derived and experimentally verified. The deck is modelled as a simply supported beam with viscous damping, and the vehicle/bridge interaction force is modelled as one-point or two-point loads with fixed axle spacing, moving at constant speed. The method is based on modal superposition and is developed to identify the forces in the time domain. Both cases of one-point and two-point forces moving on a simply supported beam are simulated. Results of laboratory tests on the identification of the vehicle/bridge interaction forces are presented. Computation simulations and laboratory tests show that the method is effective, and acceptable results can be obtained by combining the use of bending moment and acceleration measurements.
NASA Astrophysics Data System (ADS)
Liu, P.
2013-12-01
Quantitative risk analysis for reservoir real-time operation is a hard task owing to the difficulty of accurately describing inflow uncertainties. Ensemble-based hydrologic forecasts depict the inflows directly, capturing not only the marginal distributions but also their persistence via scenarios. This motivates us to analyze the reservoir real-time operating risk with ensemble-based hydrologic forecasts as inputs. A method is developed that uses the forecast horizon point to divide the future into two stages, the forecast lead-time and the unpredicted time. The risk within the forecast lead-time is computed by counting the number of failing forecast scenarios, while the risk in the unpredicted time is estimated using reservoir routing with the design floods and the reservoir water levels at the forecast horizon point. As a result, a two-stage risk analysis method is set up that quantifies the entire flood risk as the ratio of the number of scenarios exceeding the critical value to the total number of scenarios. China's Three Gorges Reservoir (TGR) is selected as a case study, where parameter and precipitation uncertainties are used to produce ensemble-based hydrologic forecasts. Bayesian inference via Markov chain Monte Carlo is used to account for the parameter uncertainty. Two reservoir operation schemes, the actually operated scheme and a scenario-optimization scheme, are evaluated with respect to flood risks and hydropower profits. For the 2010 flood, it is found that improving hydrologic forecast accuracy does not necessarily decrease the reservoir real-time operation risk, and that most of the risk arises within the forecast lead-time. It is therefore valuable to reduce the spread of ensemble-based hydrologic forecasts while keeping their bias small for reservoir operational purposes.
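The two-stage risk ratio, i.e. the fraction of ensemble scenarios whose water level exceeds the critical value in either stage, can be sketched as follows (a minimal illustration; the scenario water levels are assumed to be precomputed by forecasting and routing, which is not shown):

```python
import numpy as np

def two_stage_risk(forecast_levels, unpredicted_levels, critical):
    """Fraction of ensemble scenarios whose water level exceeds `critical`
    either within the forecast lead-time (rows of forecast_levels, one per
    scenario) or in the routed unpredicted period that starts from each
    scenario's horizon state (rows of unpredicted_levels)."""
    fails = (np.any(forecast_levels >= critical, axis=1)
             | np.any(unpredicted_levels >= critical, axis=1))
    return fails.mean()
```

For example, with three scenarios of which one fails in the lead-time and another fails in the routed unpredicted period, the method returns a risk of 2/3.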
Development of a point-kinetic verification scheme for nuclear reactor applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demazière, C., E-mail: demaz@chalmers.se; Dykin, V.; Jareteg, K.
In this paper, a new method that can be used for checking the proper implementation of time- or frequency-dependent neutron transport models and for verifying their ability to recover some basic reactor physics properties is proposed. This method makes use of the application of a stationary perturbation to the system at a given frequency and extraction of the point-kinetic component of the system response. Even for strongly heterogeneous systems for which an analytical solution does not exist, the point-kinetic component follows, as a function of frequency, a simple analytical form. The comparison between the extracted point-kinetic component and its expected analytical form provides an opportunity to verify and validate neutron transport solvers. The proposed method is tested on two diffusion-based codes, one working in the time domain and the other working in the frequency domain. As long as the applied perturbation has a non-zero reactivity effect, it is demonstrated that the method can be successfully applied to verify and validate time- or frequency-dependent neutron transport solvers. Although the method is demonstrated in the present paper in a diffusion theory framework, higher order neutron transport methods could be verified based on the same principles.
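For reference, the standard analytical form of this kind in point kinetics with a single delayed-neutron group is the zero-power reactor transfer function; the sketch below evaluates it over frequency with illustrative parameter values (Λ, β, λ are assumptions, not taken from the paper):

```python
import numpy as np

def zero_power_transfer(omega, Lambda, beta, lam):
    """Zero-power point-kinetics transfer function, one delayed group:
    G0(w) = 1 / (j*w * (Lambda + beta / (j*w + lam)))."""
    jw = 1j * omega
    return 1.0 / (jw * (Lambda + beta / (jw + lam)))

# A solver's extracted point-kinetic component should follow |G0| and
# arg(G0) across frequency (illustrative parameters below).
w = 2 * np.pi * np.logspace(-2, 2, 50)
G = zero_power_transfer(w, Lambda=2e-5, beta=0.007, lam=0.08)
```

In the plateau region between λ and β/Λ the magnitude approaches 1/β, a standard check for such a comparison.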
A QRS Detection and R Point Recognition Method for Wearable Single-Lead ECG Devices.
Chen, Chieh-Li; Chuang, Chun-Te
2017-08-26
In the new-generation wearable Electrocardiogram (ECG) system, signal processing with low power consumption is required to transmit data when detecting dangerous rhythms and to record signals when detecting abnormal rhythms. The QRS complex is a combination of three of the graphical deflections seen on a typical ECG. This study proposes a real-time QRS detection and R point recognition method with low computational complexity that maintains high accuracy. The enhancement of QRS segments and suppression of P and T waves are carried out by the proposed ECG signal transformation, which also eliminates baseline wander. In this study, the QRS fiducial point is determined based on the detected crests and troughs of the transformed signal. Subsequently, the R point can be recognized based on four QRS waveform templates, and a preliminary heart rhythm classification can be achieved at the same time. The performance of the proposed approach is demonstrated on the benchmark MIT-BIH Arrhythmia Database, where the QRS detection sensitivity (Se) and positive prediction (+P) are 99.82% and 99.81%, respectively. The results confirm the approach's low computational complexity, as well as the feasibility of real-time application on a mobile phone and an embedded system.
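A generic low-complexity QRS detection pipeline of this kind (differentiate, square, integrate, threshold) can be sketched as follows; this is a textbook-style stand-in, not the paper's transformation or its waveform templates:

```python
import numpy as np

def detect_r_points(ecg, fs):
    """Toy QRS detector: differentiate to suppress P/T waves and baseline
    wander, square, smooth with a moving window, then take the local
    maximum of each above-threshold region as the R point."""
    d = np.diff(ecg)                      # emphasizes steep QRS slopes
    e = d * d                             # rectify and enhance
    w = int(0.15 * fs)                    # ~150 ms integration window
    env = np.convolve(e, np.ones(w) / w, mode="same")
    thr = 0.5 * env.max()                 # crude fixed threshold
    peaks = []
    above = env > thr
    i = 0
    while i < len(above):
        if above[i]:
            j = i
            while j < len(above) and above[j]:
                j += 1
            peaks.append(i + int(np.argmax(ecg[i:j + 1])))  # R = local max
            i = j
        else:
            i += 1
    return peaks
```

A real detector would use an adaptive threshold and refractory logic; the sketch only illustrates the enhance-then-threshold structure.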
An Optimal Parameter Discretization Strategy for Multiple Model Adaptive Estimation and Control
1989-12-01
Zicker. MMAE-Based Control with Space-Time Point Process Observations. IEEE Transactions on Aerospace and Electronic Systems, AES-21(3):292-300, 1985. ... Transactions of the Conference of Army Mathematicians, Bethesda MD, 1982. (AD-P001 033). 65. William L. Zicker. Pointing and Tracking of Particle
A Novel Real-Time Reference Key Frame Scan Matching Method
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-01-01
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping approach using either local or global methods; both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprised of feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It relies on the iterative closest point algorithm when linear features are lacking, as is typical in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computation times, indicating its potential for use in real-time systems. PMID:28481285
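The point-to-point alignment step at the core of such scan matching can be sketched with the closed-form SVD (Kabsch) solution for already-associated 2D point pairs; this is a minimal stand-in for one ICP iteration, not the RKF implementation:

```python
import numpy as np

def rigid_align(src, dst):
    """One point-to-point alignment step (Kabsch/SVD): find the rotation R
    and translation t that best map src onto dst, for 2D point pairs that
    are already associated row-by-row."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

A full ICP loop would alternate this step with nearest-neighbor association until convergence; it is the repeated association that makes the point-to-point variant slow and outlier-prone, which is what the RKF scheme mitigates.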
Novel Linkage of Individual and Geographic Data to Study Firearm Violence
Branas, Charles C.; Culhane, Dennis; Richmond, Therese S.; Wiebe, Douglas J.
2010-01-01
Firearm violence is the end result of a causative web of individual-level and geographic risk factors. Few, if any, studies of firearm violence have been able to simultaneously determine the population-based relative risks that individuals experience as a result of what they were doing at a specific point in time and where they were, geographically, at a specific point in time. This paper describes the linkage of individual and geographic data that was undertaken as part of a population-based case-control study of firearm violence in Philadelphia. New methods and applications of these linked data relevant to researchers and policymakers interested in firearm violence are also discussed. PMID:20617158
Garashchuk, Sophya; Rassolov, Vitaly A
2008-07-14
Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.
Brownian motion of boomerang colloidal particles.
Chakrabarty, Ayan; Konya, Andrew; Wang, Feng; Selinger, Jonathan V; Sun, Kai; Wei, Qi-Huo
2013-10-18
We investigate the Brownian motion of boomerang colloidal particles confined between two glass plates. Our experimental observations show that the mean displacements are biased towards the center of hydrodynamic stress (CoH), and that the mean-square displacements exhibit a crossover from short-time faster to long-time slower diffusion with the short-time diffusion coefficients dependent on the points used for tracking. A model based on Langevin theory elucidates that these behaviors are ascribed to the superposition of two diffusive modes: the ellipsoidal motion of the CoH and the rotational motion of the tracking point with respect to the CoH.
Developing points-based risk-scoring systems in the presence of competing risks.
Austin, Peter C; Lee, Douglas S; D'Agostino, Ralph B; Fine, Jason P
2016-09-30
Predicting the occurrence of an adverse event over time is an important issue in clinical medicine. Clinical prediction models and associated points-based risk-scoring systems are popular statistical methods for summarizing the relationship between a multivariable set of patient risk factors and the risk of the occurrence of an adverse event. Points-based risk-scoring systems are popular amongst physicians as they permit a rapid assessment of patient risk without the use of computers or other electronic devices. The use of such points-based risk-scoring systems facilitates evidence-based clinical decision making. There is a growing interest in cause-specific mortality and in non-fatal outcomes. However, when considering these types of outcomes, one must account for competing risks whose occurrence precludes the occurrence of the event of interest. We describe how points-based risk-scoring systems can be developed in the presence of competing events. We illustrate the application of these methods by developing risk-scoring systems for predicting cardiovascular mortality in patients hospitalized with acute myocardial infarction. Code in the R statistical programming language is provided for the implementation of the described methods. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
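The general idea of converting regression coefficients into an integer points system (the step that makes such scores usable without a computer) can be sketched as follows; the coefficients, covariates, and reference values below are hypothetical, and the competing-risks estimation of the underlying model is not shown:

```python
def to_points(betas, values, refs, base):
    """Convert regression coefficients to integer points:
    points_i = round(beta_i * (x_i - ref_i) / base), where `base` is the
    amount of linear-predictor change chosen to equal one point."""
    return [int(round(b * (x - r) / base)) for b, x, r in zip(betas, values, refs)]

# Hypothetical cause-specific model: age (per year) and systolic BP (per mmHg)
points = to_points(betas=[0.05, 0.02], values=[70, 160], refs=[50, 120], base=0.25)
total = sum(points)   # the total maps to a predicted risk via a lookup table
```

In practice the total score is then mapped back to an absolute risk estimate, which under competing risks must come from a cumulative incidence function rather than a naive survival curve.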
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient-based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reducing the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
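The asynchronous update idea can be sketched in a serial simulation: each particle's velocity and position are refreshed as soon as its own evaluation "returns", using whatever global best is known at that instant, instead of waiting for the whole swarm to finish a synchronous iteration. The toy objective and all parameters below are illustrative, not those of the paper.

```python
import random

def sphere(x):
    """Toy objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def async_pso(f, dim=2, n_particles=8, n_evals=800,
              w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for k in range(n_evals):
        i = k % n_particles  # the next evaluation to "return"
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                         + c2 * rng.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        val = f(pos[i])
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i][:], val
            if val < gbest_val:
                # Global best is updated immediately, not at iteration end:
                # this is the asynchronous ingredient.
                gbest, gbest_val = pos[i][:], val
    return gbest_val

best = async_pso(sphere)
```

In a real parallel setting the evaluation order would be determined by which worker finishes first, which is exactly why heterogeneous analysis times no longer cause idle processors.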
New clinical insights for transiently evoked otoacoustic emission protocols.
Hatzopoulos, Stavros; Grzanka, Antoni; Martini, Alessandro; Konopka, Wieslaw
2009-08-01
The objective of the study was to optimize the area of a time-frequency analysis and then investigate any stable patterns in the time-frequency structure of otoacoustic emissions in a population of 152 healthy adults sampled over one year. TEOAE recordings were collected from 302 ears in subjects presenting normal hearing and normal impedance values. The responses were analyzed by the Wigner-Ville distribution (WVD). The TF region of analysis was optimized by examining the energy content of various rectangular and triangular TF regions. The TEOAE components from the initial recordings and from recordings made 12 months later were compared in the optimized TF region. The best region for TF analysis was identified with base point 1 at 2.24 ms and 2466 Hz, base point 2 at 6.72 ms and 2466 Hz, and the top point at 2.24 ms and 5250 Hz. Correlation indices from the optimized TF region were higher than the traditional indices in the selected time window, and the difference was statistically significant. An analysis of the TF data within a 12-month period indicated an 85% TEOAE component similarity in 90% of the tested subjects.
Cabrieto, Jedelyn; Tuerlinckx, Francis; Kuppens, Peter; Hunyadi, Borbála; Ceulemans, Eva
2018-01-15
Detecting abrupt correlation changes in multivariate time series is crucial in many application fields such as signal processing, functional neuroimaging, climate studies, and financial analysis. To detect such changes, several promising correlation change tests exist, but they may suffer from severe loss of power when there is actually more than one change point underlying the data. To deal with this drawback, we propose a permutation-based significance test for Kernel Change Point (KCP) detection on the running correlations. Given a requested number of change points K, KCP divides the time series into K + 1 phases by minimizing the within-phase variance. The new permutation test looks at how the average within-phase variance decreases when K increases and compares this to the results for permuted data. The results of an extensive simulation study and applications to several real data sets show that, depending on the setting, the new test performs at par with or better than the state-of-the-art significance tests for detecting the presence of correlation changes, implying that its use can be generally recommended.
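The within-phase variance criterion at the heart of KCP can be illustrated on a univariate toy series; the actual method applies a kernel-based version of this criterion to running correlations and adds the permutation test on top, so the sketch below only shows the minimization step for a single change point.

```python
import random
import statistics

def within_phase_variance(series, cut):
    """Total within-phase sum of squares for a single split at `cut`:
    each phase's population variance weighted by its length."""
    left, right = series[:cut], series[cut:]
    return (statistics.pvariance(left) * len(left)
            + statistics.pvariance(right) * len(right))

def best_change_point(series):
    """Candidate cut minimizing the within-phase variance criterion."""
    cuts = range(2, len(series) - 1)
    return min(cuts, key=lambda c: within_phase_variance(series, c))

# Toy series with a clear mean shift at index 50.
rng = random.Random(0)
series = ([rng.gauss(0, 1) for _ in range(50)]
          + [rng.gauss(5, 1) for _ in range(50)])
cp = best_change_point(series)
```

For K change points, KCP minimizes the same kind of criterion over all K-part segmentations; the permutation test then asks whether the observed decrease in within-phase variance as K grows is larger than for permuted data.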
Dziarmaga, Jacek; Zurek, Wojciech H.
2014-01-01
The Kibble-Zurek mechanism (KZM) uses critical scaling to predict the density of topological defects and other excitations created in second-order phase transitions. We point out that simply inserting asymptotic critical exponents deduced from the immediate vicinity of the critical point to obtain predictions can lead to results that are inconsistent with a more careful KZM analysis based on causality, that is, on the comparison of the relaxation time of the order parameter with the “time distance” from the critical point. As a result, the scaling of quench-generated excitations with quench rates can exhibit behavior that is locally (i.e., in the neighborhood of any given quench rate) well approximated by a power law, but with exponents that depend on that rate and that are quite different from the naive prediction based on the critical exponents relevant for asymptotically long quench times. Kosterlitz-Thouless scaling (which governs, e.g., the Mott insulator to superfluid transition in the Bose-Hubbard model in one dimension) is investigated as an example of this phenomenon. PMID:25091996
Reduction of admit wait times: the effect of a leadership-based program.
Patel, Pankaj B; Combs, Mary A; Vinson, David R
2014-03-01
Prolonged admit wait times in the emergency department (ED) for patients who require hospitalization lead to increased boarding time in the ED, a significant cause of ED congestion. This is associated with decreased quality of care, higher morbidity and mortality, decreased patient satisfaction, increased costs for care, ambulance diversion, higher numbers of patients who leave without being seen (LWBS), and delayed care with longer lengths of stay (LOS) for other ED patients. The objective was to assess the effect of a leadership-based program to expedite hospital admissions from the ED. This before-and-after observational study was undertaken from 2006 through 2011 at one community hospital ED. A team of ED and hospital leaders implemented a program to reduce admit wait times, using a computerized hospital-wide tracking system to monitor inpatient and ED bed status. The team collaboratively and consistently moved ED patients to their inpatient beds within an established goal of 60 minutes after an admission decision was reached. Top leadership actively intervened in real time by contacting staff whenever delays occurred to expedite immediate solutions to achieve the 60-minute goal. The primary outcome measures were the percentage of ED patients who were admitted to inpatient beds within 60 minutes from the time the beds were requested and ED boarding time. LOS, patient satisfaction, LWBS rate, and ambulance diversion hours were also measured. After ED census, hospital admission rates, and ED bed capacity were controlled for using a multivariable linear regression analysis, the admit wait time reduction program contributed to an increase in patients being admitted to the hospital within 60 minutes by 16 percentage points (95% confidence intervals [CI] = 10 to 22 points; p < 0.0001) and a decrease in boarding time per admission of 46 minutes (95% CI = 63 to 82 minutes; p < 0.0001). 
LOS decreased for admitted patients by 79 minutes (95% CI = 55 to 104 minutes; p < 0.0001), for discharged patients by 17 minutes (95% CI = 12 to 23 minutes; p < 0.0001), and for all patients by 34 minutes (95% CI = 25 to 43 minutes; p < 0.0001). Patient satisfaction increased 4.9 percentage points (95% CI = 3.8 to 6.0 points; p < 0.0001). LWBS patients decreased 0.9 percentage points (95% CI = 0.6 to 1.2 points; p < 0.0001) and monthly ambulance diversion decreased 8.2 hours (95% CI = 4.6 to 11.8 hours; p < 0.0001). A leadership-based program to reduce admit wait times and boarding times was associated with a significant increase in the percentage of patients admitted to the hospital within 60 minutes and a significant decrease in boarding time. Also associated with the program were decreased ED LOS, LWBS rate, and ambulance diversion, as well as increased patient satisfaction. © 2014 by the Society for Academic Emergency Medicine.
Hossner, Ernst-Joachim; Ehrlenspiel, Felix
2010-01-01
The paralysis-by-analysis phenomenon, i.e., that attending to the execution of one's movement impairs performance, has gathered a lot of attention over recent years (see Wulf, 2007, for a review). Explanations of this phenomenon, e.g., the hypotheses of constrained action (Wulf et al., 2001) or of step-by-step execution (Masters, 1992; Beilock et al., 2002), however, do not address the underlying mechanisms at the level of sensorimotor control. For this purpose, a “nodal-point hypothesis” is presented here with the core assumption that skilled motor behavior is internally based on sensorimotor chains of nodal points, that attending to intermediate nodal points leads to a muscular re-freezing of the motor system at exactly and exclusively these points in time, and that this re-freezing is accompanied by the disruption of compensatory processes, resulting in an overall decrease of motor performance. Two experiments, on lever sequencing and basketball free throws, respectively, are reported that successfully tested these time-referenced predictions, i.e., showing that muscular activity is selectively increased and compensatory variability selectively decreased at movement-related nodal points if these points are in the focus of attention. PMID:21833285
2013-01-01
Background The stigma of mental illness among medical students is a prevalent concern that has far-reaching negative consequences. Attempts to combat this stigma through educational initiatives have had mixed results. This study examined the impact of a one-time contact-based educational intervention on the stigma of mental illness among medical students and compared this with a multimodal undergraduate psychiatry course at the University of Calgary, Canada that integrates contact-based educational strategies. Attitudes towards mental illness were compared with those towards type 2 diabetes mellitus (T2DM). Method A cluster-randomized trial design was used to evaluate the impact of contact-based educational interventions delivered at two points in time. The impact was assessed by collecting data at 4 time points using the Opening Minds Scale for Health Care Providers (OMS-HC) to assess changes in stigma. Results Baseline surveys were completed by 62% (n=111) of students before the start of the course and post-intervention ratings were available from 90 of these. Stigma scores for both groups were significantly reduced upon course completion (p < 0.0001), but were not significantly changed following the one-time contact-based educational intervention in the primary analysis. Student confidence in working with people with a mental illness and interest in a psychiatric career were increased at the end of the course. Stigma towards mental illness remained greater than for T2DM at all time points. Conclusions Psychiatric education can decrease the stigma of mental illness and increase student confidence. However, one-time, contact-based educational interventions require further evaluation in this context.
The key components are postulated to be contact, knowledge and attention to process, where attending to the student’s internal experience of working with people with mental illness is an integral factor in modulating perceptions of mental illness and a psychiatric career. PMID:24156397
Papish, Andriyka; Kassam, Aliya; Modgill, Geeta; Vaz, Gina; Zanussi, Lauren; Patten, Scott
2013-10-24
The stigma of mental illness among medical students is a prevalent concern that has far-reaching negative consequences. Attempts to combat this stigma through educational initiatives have had mixed results. This study examined the impact of a one-time contact-based educational intervention on the stigma of mental illness among medical students and compared this with a multimodal undergraduate psychiatry course at the University of Calgary, Canada that integrates contact-based educational strategies. Attitudes towards mental illness were compared with those towards type 2 diabetes mellitus (T2DM). A cluster-randomized trial design was used to evaluate the impact of contact-based educational interventions delivered at two points in time. The impact was assessed by collecting data at 4 time points using the Opening Minds Scale for Health Care Providers (OMS-HC) to assess changes in stigma. Baseline surveys were completed by 62% (n=111) of students before the start of the course and post-intervention ratings were available from 90 of these. Stigma scores for both groups were significantly reduced upon course completion (p < 0.0001), but were not significantly changed following the one-time contact-based educational intervention in the primary analysis. Student confidence in working with people with a mental illness and interest in a psychiatric career were increased at the end of the course. Stigma towards mental illness remained greater than for T2DM at all time points. Psychiatric education can decrease the stigma of mental illness and increase student confidence. However, one-time, contact-based educational interventions require further evaluation in this context. The key components are postulated to be contact, knowledge and attention to process, where attending to the student's internal experience of working with people with mental illness is an integral factor in modulating perceptions of mental illness and a psychiatric career.
Validation of Accelerometer Cut-Points in Children With Cerebral Palsy Aged 4 to 5 Years.
Keawutan, Piyapa; Bell, Kristie L; Oftedal, Stina; Davies, Peter S W; Boyd, Roslyn N
2016-01-01
To derive and validate triaxial accelerometer cut-points in children with cerebral palsy (CP) and compare these with previously established cut-points in children with typical development. Eighty-four children with CP aged 4 to 5 years wore the ActiGraph during a play-based gross motor function measure assessment that was video-taped for direct observation. Receiver operating characteristic and Bland-Altman plots were used for analyses. The ActiGraph had good classification accuracy in Gross Motor Function Classification System (GMFCS) levels III and V and fair classification accuracy in GMFCS levels I, II, and IV. These results support the use of the previously established cut-points for sedentary time of 820 counts per minute in children with CP aged 4 to 5 years across all functional abilities. The cut-point provides an objective measure of sedentary and active time in children with CP. The cut-point is applicable to group data but not for individual children.
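A common way such cut-points are derived against direct-observation labels is to maximize Youden's J (sensitivity + specificity - 1) over candidate thresholds on the receiver operating characteristic. The sketch below illustrates that criterion on fabricated minute-epoch data; the 820 counts-per-minute value is included among the candidates purely for illustration, not as a re-derivation of the study's result.

```python
def youden_cutpoint(counts, sedentary, candidates):
    """Return the candidate threshold maximizing Youden's J, where
    epochs with counts below the threshold are classified sedentary."""
    best, best_j = None, -1.0
    for t in candidates:
        tp = sum(1 for c, s in zip(counts, sedentary) if s and c < t)
        fn = sum(1 for c, s in zip(counts, sedentary) if s and c >= t)
        tn = sum(1 for c, s in zip(counts, sedentary) if not s and c >= t)
        fp = sum(1 for c, s in zip(counts, sedentary) if not s and c < t)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best, best_j = t, j
    return best

# Fabricated minute epochs: activity counts with direct-observation
# labels (True = observed sedentary).
counts = [100, 300, 700, 900, 1500, 2000, 2500, 50, 600, 1800, 1000]
sedentary = [True, True, True, True, False, False,
             False, True, True, False, False]
cut = youden_cutpoint(counts, sedentary, candidates=[500, 820, 1200])
```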
NASA Astrophysics Data System (ADS)
Yang, C. H.; Kenduiywo, B. K.; Soergel, U.
2016-06-01
Persistent Scatterer Interferometry (PSI) is a technique to detect a network of extracted persistent scatterer (PS) points which feature temporal phase stability and a strong radar signal throughout a time series of SAR images. The small surface deformations on such PS points are estimated. PSI works particularly well in monitoring human settlements because the regular substructures of man-made objects give rise to a large number of PS points. If such structures and/or substructures are substantially altered or even vanish due to a big change like construction, their PS points are discarded without further exploration during the standard PSI procedure. Such rejected points are called big change (BC) points. On the other hand, incoherent change detection (ICD) relies on local comparison of multi-temporal images (e.g., image difference, image ratio) to highlight scene modifications at a larger size rather than detail level. However, image noise inevitably degrades ICD accuracy. We propose a change detection approach based on PSI to synergize the benefits of PSI and ICD. PS points are extracted by the PSI procedure. A local change index is introduced to quantify the probability of a big change for each point. We propose an automatic thresholding method adopting the change index to extract BC points along with an indication of the period in which they emerge. In the end, PS and BC points are integrated into a change detection image. Our method is tested at a site located around the north of Berlin main station, where steady, demolished, and erected building substructures are successfully detected. The results are consistent with ground truth derived from a time series of aerial images provided by Google Earth. In addition, we apply our technique to traffic infrastructure, business district, and sports playground monitoring.
Parametric motion control of robotic arms: A biologically based approach using neural networks
NASA Technical Reports Server (NTRS)
Bock, O.; D'Eleuterio, G. M. T.; Lipitkas, J.; Grodski, J. J.
1993-01-01
A neural network based system is presented which is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals which are defined by a set of parameters. The parameter set is used for scaling and shaping of these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.
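The parametric scaling idea can be sketched with a toy prototype profile: one stored torque-time shape, plus an amplitude and a duration parameter per joint, generates the whole command. The bell-shaped prototype below is illustrative, not the profile used in the study, and the network that maps task requirements to the parameters is omitted.

```python
import math

def prototype(s):
    """Prototype torque as a function of normalized time s in [0, 1]:
    accelerate then decelerate, with zero net impulse."""
    return math.sin(2 * math.pi * s)

def torque(t, amplitude, duration):
    """Scaled torque command at time t for one joint: the stored
    prototype shape, stretched in time and scaled in magnitude."""
    if not 0.0 <= t <= duration:
        return 0.0
    return amplitude * prototype(t / duration)

# The same prototype yields a slow weak movement or a fast strong one;
# only (amplitude, duration) change, not the stored trajectory.
slow = [torque(t * 0.01, 1.0, 2.0) for t in range(201)]   # 2 s, gain 1
fast = [torque(t * 0.001, 3.0, 0.2) for t in range(201)]  # 0.2 s, gain 3
```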
Answering questions at the point of care: do residents practice EBM or manage information sources?
McCord, Gary; Smucker, William D; Selius, Brian A; Hannan, Scott; Davidson, Elliot; Schrop, Susan Labuda; Rao, Vinod; Albrecht, Paula
2007-03-01
To determine the types of information sources that evidence-based medicine (EBM)-trained, family medicine residents use to answer clinical questions at the point of care, to assess whether the sources are evidence-based, and to provide suggestions for more effective information-management strategies in residency training. In 2005, trained medical students directly observed (for two half-days per physician) how 25 third-year family medicine residents retrieved information to answer clinical questions arising at the point of care and documented the type and name of each source, the retrieval location, and the estimated time spent consulting the source. An end-of-study questionnaire asked 37 full-time faculty and the participating residents about the best information sources available, subscriptions owned, why they use a personal digital assistant (PDA) to practice medicine, and their experience in preventing medical errors using a PDA. Forty-four percent of questions were answered by attending physicians, 23% by consulting PDAs, and 20% from books. Seventy-two percent of questions were answered within two minutes. Residents rated UptoDate as the best source for evidence-based information, but they used this source only five times. PDAs were used because of ease of use, time factors, and accessibility. All examples of medical errors discovered or prevented with PDA programs were medication related. None of the participants' residencies required the use of a specific medical information resource. The results support the Agency for Health Care Research and Quality's call for medical system improvements at the point of care. Additionally, it may be necessary to teach residents better information-management skills in addition to EBM skills.
A fast point-cloud computing method based on spatial symmetry of Fresnel field
NASA Astrophysics Data System (ADS)
Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui
2017-10-01
Computer Generated Hologram (CGH) calculation for real-time holographic video display is challenging because of the high space-bandwidth product (SBP) it requires. This paper builds on the point-cloud method and exploits two properties of Fresnel diffraction: propagation is reversible along the propagation direction, and the fringe pattern of a point source, known as a Gabor zone plate, is spatially symmetric, so it can serve as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed. First, the principal fringe patterns (PFPs) at a virtual plane are pre-calculated by the acceleration algorithm and stored. Second, the Fresnel diffraction fringe pattern at the dummy plane is obtained. Finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal on Silicon (LCOS) demonstrate the validity of the proposed method: while preserving the quality of the 3D reconstruction, it shortens the computation time and improves computational efficiency.
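The shift-invariance that look-up-table CGH methods exploit can be sketched as follows: the Fresnel fringe of a point source depends only on the transverse offset from that point, so one precomputed zone-plate pattern per depth can be shifted and accumulated for every object point instead of evaluating the diffraction integral each time. Grid size, pixel pitch, and wavelength below are arbitrary toy values, not the paper's parameters.

```python
import cmath
import math

WAVELENGTH = 633e-9  # metres (toy value: red laser)
PITCH = 8e-6         # hologram pixel pitch, metres (toy value)
N = 64               # hologram is N x N pixels (toy value)

def zone_plate(z):
    """Principal fringe pattern (Gabor zone plate) of a unit-amplitude
    point source at depth z, oversized so it can be shifted anywhere."""
    k = 2 * math.pi / WAVELENGTH
    pat = [[0j] * (2 * N) for _ in range(2 * N)]
    for i in range(2 * N):
        for j in range(2 * N):
            x = (i - N) * PITCH
            y = (j - N) * PITCH
            pat[i][j] = cmath.exp(1j * k * (x * x + y * y) / (2 * z))
    return pat

def hologram(points, z):
    """Accumulate shifted copies of ONE precomputed zone plate for all
    object points sharing depth z; (px, py) are hologram pixel indices."""
    pfp = zone_plate(z)
    holo = [[0j] * N for _ in range(N)]
    for (px, py, amp) in points:
        for i in range(N):
            for j in range(N):
                holo[i][j] += amp * pfp[i - px + N][j - py + N]
    return holo

h = hologram([(10, 20, 1.0), (40, 40, 0.5)], z=0.1)
```

The saving is that `zone_plate` runs once per depth rather than once per point; N-LUT-style methods push this further by storing and reusing such principal fringe patterns across frames.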
GOES-R active vibration damping controller design, implementation, and on-orbit performance
NASA Astrophysics Data System (ADS)
Clapp, Brian R.; Weigl, Harald J.; Goodzeit, Neil E.; Carter, Delano R.; Rood, Timothy J.
2018-01-01
GOES-R series spacecraft feature a number of flexible appendages with modal frequencies below 3.0 Hz which, if excited by spacecraft disturbances, can be sources of undesirable jitter perturbing spacecraft pointing. To meet GOES-R pointing stability requirements, the spacecraft flight software implements an Active Vibration Damping (AVD) rate control law which acts in parallel with the nadir point attitude control law. The AVD controller commands spacecraft reaction wheel actuators based upon Inertial Measurement Unit (IMU) inputs to provide additional damping for spacecraft structural modes below 3.0 Hz which vary with solar wing angle. A GOES-R spacecraft dynamics and attitude control system identified model is constructed from pseudo-random reaction wheel torque commands and IMU angular rate response measurements occurring over a single orbit during spacecraft post-deployment activities. The identified Fourier model is computed on the ground, uplinked to the spacecraft flight computer, and the AVD controller filter coefficients are periodically computed on-board from the Fourier model. Consequently, the AVD controller formulation is based not upon pre-launch simulation model estimates but upon on-orbit nadir point attitude control and time-varying spacecraft dynamics. GOES-R high-fidelity time domain simulation results herein demonstrate the accuracy of the AVD identified Fourier model relative to the pre-launch spacecraft dynamics and control truth model. The AVD controller on-board the GOES-16 spacecraft achieves more than a ten-fold increase in structural mode damping for the fundamental solar wing mode while maintaining controller stability margins and ensuring that the nadir point attitude control bandwidth does not fall below 0.02 Hz. 
On-orbit GOES-16 spacecraft appendage modal frequencies and damping ratios are quantified based upon the AVD system identification, and the increase in modal damping provided by the AVD controller for each structural mode is presented. The GOES-16 spacecraft AVD controller frequency domain stability margins and nadir point attitude control bandwidth are presented along with on-orbit time domain disturbance response performance.
GOES-R Active Vibration Damping Controller Design, Implementation, and On-Orbit Performance
NASA Technical Reports Server (NTRS)
Clapp, Brian R.; Weigl, Harald J.; Goodzeit, Neil E.; Carter, Delano R.; Rood, Timothy J.
2017-01-01
GOES-R series spacecraft feature a number of flexible appendages with modal frequencies below 3.0 Hz which, if excited by spacecraft disturbances, can be sources of undesirable jitter perturbing spacecraft pointing. In order to meet GOES-R pointing stability requirements, the spacecraft flight software implements an Active Vibration Damping (AVD) rate control law which acts in parallel with the nadir point attitude control law. The AVD controller commands spacecraft reaction wheel actuators based upon Inertial Measurement Unit (IMU) inputs to provide additional damping for spacecraft structural modes below 3.0 Hz which vary with solar wing angle. A GOES-R spacecraft dynamics and attitude control system identified model is constructed from pseudo-random reaction wheel torque commands and IMU angular rate response measurements occurring over a single orbit during spacecraft post-deployment activities. The identified Fourier model is computed on the ground, uplinked to the spacecraft flight computer, and the AVD controller filter coefficients are periodically computed on-board from the Fourier model. Consequently, the AVD controller formulation is based not upon pre-launch simulation model estimates but upon on-orbit nadir point attitude control and time-varying spacecraft dynamics. GOES-R high-fidelity time domain simulation results herein demonstrate the accuracy of the AVD identified Fourier model relative to the pre-launch spacecraft dynamics and control truth model. The AVD controller on-board the GOES-16 spacecraft achieves more than a ten-fold increase in structural mode damping of the fundamental solar wing mode while maintaining controller stability margins and ensuring that the nadir point attitude control bandwidth does not fall below 0.02 Hz. 
On-orbit GOES-16 spacecraft appendage modal frequencies and damping ratios are quantified based upon the AVD system identification, and the increase in modal damping provided by the AVD controller for each structural mode is presented. The GOES-16 spacecraft AVD controller frequency domain stability margins and nadir point attitude control bandwidth are presented along with on-orbit time domain disturbance response performance.
A real-time ionospheric model based on GNSS Precise Point Positioning
NASA Astrophysics Data System (ADS)
Tu, Rui; Zhang, Hongping; Ge, Maorong; Huang, Guanwen
2013-09-01
This paper proposes a method for real-time monitoring and modeling of the ionospheric Total Electron Content (TEC) by Precise Point Positioning (PPP). First, the ionospheric TEC and the receiver's Differential Code Biases (DCB) are estimated from the undifferenced raw observations in real time; then the ionospheric TEC model is established based on the Single Layer Model (SLM) assumption and the recovered ionospheric TEC. In this study, phase observations with high precision are used directly instead of phase-smoothed code observations. In addition, the DCB estimation is separated from the establishment of the ionospheric model, which limits the impact of the SLM assumption. The ionospheric model is established at every epoch for real-time application. The method is validated with three different GNSS networks on a local, regional, and global basis. The results show that the method is feasible and effective; the real-time ionosphere and DCB results are very consistent with the IGS final products, with biases of 1-2 TECU and 0.4 ns, respectively.
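The SLM step the abstract refers to can be sketched with the standard thin-shell mapping function that relates slant TEC along the signal path to vertical TEC at the ionospheric pierce point. The shell height and elevation below are illustrative values, not those used in the study.

```python
import math

R_EARTH = 6371.0  # mean Earth radius, km
H_SHELL = 450.0   # assumed single-layer (thin-shell) height, km

def slm_mapping(elevation_deg, h=H_SHELL):
    """Thin-shell mapping factor M(e) with sTEC = M(e) * vTEC:
    1 / cos of the zenith angle at the shell."""
    z = math.radians(90.0 - elevation_deg)           # zenith angle at receiver
    sin_zp = R_EARTH / (R_EARTH + h) * math.sin(z)   # zenith angle at shell
    return 1.0 / math.sqrt(1.0 - sin_zp ** 2)

def vertical_tec(slant_tec, elevation_deg):
    """Map a slant TEC observation to vertical TEC at the pierce point."""
    return slant_tec / slm_mapping(elevation_deg)

# 50 TECU of slant TEC observed at 30 degrees elevation.
vtec = vertical_tec(50.0, 30.0)
```

In the paper's setup, the vertical TEC values recovered this way at each epoch feed the real-time model fit, while the DCBs are estimated separately from the undifferenced observations.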
Ground Control Point - Wireless System Network for UAV-based environmental monitoring applications
NASA Astrophysics Data System (ADS)
Mejia-Aguilar, Abraham
2016-04-01
In recent years, Unmanned Aerial Vehicles (UAV) have seen widespread civil applications, including usage for survey and monitoring services in areas such as agriculture, construction and civil engineering, private surveillance and reconnaissance services, and cultural heritage management. Most aerial monitoring services require the integration of information acquired during the flight (such as imagery) with ground-based information (such as GPS information or others) for improved ground truth validation. For example, to obtain an accurate 3D and Digital Elevation Model based on aerial imagery, it is necessary to include ground-based information of coordinate points, which are normally acquired with surveying methods based on the Global Positioning System (GPS). However, GPS surveys are very time-consuming, and longer time series of monitoring data in particular require repeated GPS surveys. In order to improve the speed of data collection and integration, this work presents an autonomous system based on Waspmote technologies, built on single nodes interlinked in a Wireless Sensor Network (WSN) star topology for ground-based information collection and later integration with surveying data obtained by UAV. Nodes are designed to be visible from the air and to resist extreme weather conditions with low power consumption. In addition, nodes are equipped with GPS as well as an Inertial Measurement Unit (IMU), accelerometer, temperature, and soil moisture sensors, and thus provide significant advantages in a broad range of applications for environmental monitoring. For our purpose, the WSN transmits the environmental data over 3G/GPRS to a database on a regular time basis. This project provides a detailed case study and implementation of a Ground Control Point System Network for UAV-based vegetation monitoring of dry mountain grassland in the Matsch valley, Italy.
Improving maximum power point tracking of partially shaded photovoltaic system by using IPSO-BELBIC
NASA Astrophysics Data System (ADS)
Al-Alim El-Garhy, M. Abd; Mubarak, R. I.; El-Bably, M.
2017-08-01
Solar photovoltaic (PV) arrays in remote applications are often subject to rapid changes in the partial shading pattern. Such changes make tracking the global maximum power point (MPP) among the local peaks difficult, so a fast and efficient algorithm is needed to detect the peak values, which vary continually as the solar irradiance changes. This paper presents two algorithms based on the improved particle swarm optimization technique, one with a PID controller (IPSO-PID) and the other with a Brain Emotional Learning Based Intelligent Controller (IPSO-BELBIC). These techniques improve the maximum power point (MPP) tracking capabilities of a photovoltaic (PV) system under partial shading circumstances. The main aim of these improved algorithms is to accelerate the convergence of IPSO to the MPP and to increase its efficiency. The algorithms also improve the tracking time under complex irradiance conditions. Under these conditions, the tracking time of the presented techniques improves to 2 ms, with an efficiency of 100%.
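Why a swarm-based tracker helps under partial shading can be illustrated with a synthetic two-peak power-voltage curve: a plain PSO search over the voltage axis (not the paper's IPSO-PID or IPSO-BELBIC variants, and not a physical PV model) locks onto the global peak rather than the local one, which a hill-climbing tracker started near the local peak would miss.

```python
import math
import random

def pv_power(v):
    """Synthetic partially-shaded P-V curve (watts vs volts):
    local peak near 10 V, global peak near 30 V. Illustrative only."""
    return (40.0 * math.exp(-((v - 10.0) ** 2) / 8.0)
            + 65.0 * math.exp(-((v - 30.0) ** 2) / 10.0))

def pso_mpp(f, v_min=0.0, v_max=40.0, n=11, iters=60, seed=3):
    rng = random.Random(seed)
    # Evenly spread initial particles so the whole voltage range is seen.
    pos = [v_min + i * (v_max - v_min) / (n - 1) for i in range(n)]
    vel = [0.0] * n
    pbest, pbest_val = pos[:], [f(v) for v in pos]
    g = max(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            vel[i] = (0.6 * vel[i]
                      + 1.6 * rng.random() * (pbest[i] - pos[i])
                      + 1.6 * rng.random() * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], v_min), v_max)
            val = f(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

v_mpp, p_mpp = pso_mpp(pv_power)
```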
NASA Astrophysics Data System (ADS)
Chaidee, S.; Pakawanwong, P.; Suppakitpaisarn, V.; Teerasawat, P.
2017-09-01
In this work, we devise an efficient method for the land-use optimization problem based on the Laguerre Voronoi diagram. Previous Voronoi diagram-based methods are more efficient and more suitable for interactive design than discrete optimization-based methods, but, in many cases, their outputs do not satisfy area constraints. To cope with this problem, we propose a force-directed graph drawing algorithm, which automatically allocates the generating points of the Voronoi diagram to appropriate positions. We then construct a Laguerre Voronoi diagram based on these generating points, use linear programs to adjust each cell, and reconstruct the diagram based on the adjustment. We apply the proposed method to the practical case study of Chiang Mai University's allocated land for a mixed-use complex. For this case study, compared to other Voronoi diagram-based methods, we decrease the land allocation error by 62.557%. Although our computation time is larger than that of the previous Voronoi diagram-based method, it is still suitable for interactive design.
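The Laguerre (power) diagram mechanism such methods rely on can be sketched directly: each generating point carries a weight, cells are assigned by power distance ||x - p||² - w, and raising a site's weight grows its cell, which is what allows cell areas to be pushed toward area constraints. The sites, weights, and rasterized area estimate below are illustrative, not the paper's algorithm.

```python
def power_cell(x, y, sites):
    """Index of the site owning point (x, y) in the power diagram;
    each site is (px, py, weight)."""
    return min(range(len(sites)),
               key=lambda i: (x - sites[i][0]) ** 2
                             + (y - sites[i][1]) ** 2
                             - sites[i][2])

def cell_areas(sites, grid=50):
    """Approximate cell areas by rasterizing the unit square."""
    counts = [0] * len(sites)
    for i in range(grid):
        for j in range(grid):
            x, y = (i + 0.5) / grid, (j + 0.5) / grid
            counts[power_cell(x, y, sites)] += 1
    return [c / grid ** 2 for c in counts]

# Two sites: with equal weights the bisector is at x = 0.5; giving the
# left site weight 0.1 moves the bisector to x = 0.625, growing its cell.
equal = cell_areas([(0.3, 0.5, 0.0), (0.7, 0.5, 0.0)])
weighted = cell_areas([(0.3, 0.5, 0.1), (0.7, 0.5, 0.0)])
```

Adjusting the weights (and, in the paper, the generating-point positions via the force-directed step) is what steers each cell's area toward its target.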
Electronic Learning Systems in Hong Kong Business Organizations: A Study of Early and Late Adopters
ERIC Educational Resources Information Center
Chan, Simon C. H.; Ngai, Eric W. T.
2012-01-01
Based on the diffusion of innovation theory (E. M. Rogers, 1983, 1995), the authors examined the antecedents of the adoption of electronic learning (e-learning) systems by using a time-based assessment model (R. C. Beatty, J. P. Shim, & M. C. Jones, 2001), which classified adopters into categories based on the point in time at which they adopted e-learning…
Mayerhoefer, Marius E; Giraudo, Chiara; Senn, Daniela; Hartenbach, Markus; Weber, Michael; Rausch, Ivo; Kiesewetter, Barbara; Herold, Christian J; Hacker, Marcus; Pones, Matthias; Simonitsch-Klupp, Ingrid; Müllauer, Leonhard; Dolak, Werner; Lukas, Julius; Raderer, Markus
2016-02-01
To determine whether, in patients with extranodal marginal zone B-cell lymphoma of the mucosa-associated lymphoid tissue (MALT lymphoma), delayed-time-point 2-F-fluoro-2-deoxy-d-glucose positron emission tomography (F-FDG-PET) performs better than standard-time-point F-FDG-PET. Patients with untreated, histologically verified MALT lymphoma, who were undergoing pretherapeutic F-FDG-PET/computed tomography (CT) and consecutive F-FDG-PET/magnetic resonance imaging (MRI), using a single F-FDG injection, in the course of a larger-scale prospective trial, were included. Region-based sensitivity and specificity, and patient-based sensitivity, of the respective F-FDG-PET scans at time points 1 (45-60 minutes after tracer injection, TP1) and 2 (100-150 minutes after tracer injection, TP2), relative to the reference standard, were calculated. Lesion-to-liver and lesion-to-blood SUVmax (maximum standardized uptake value) ratios were also assessed. F-FDG-PET at TP1 was true positive in 15 of 23 involved regions, and F-FDG-PET at TP2 was true positive in 20 of 23 involved regions; no false-positive regions were noted. Accordingly, region-based sensitivities and specificities were 65.2% (confidence interval [CI], 45.73%-84.67%) and 100% (CI, 100%-100%) for F-FDG-PET at TP1, and 87.0% (CI, 73.26%-100%) and 100% (CI, 100%-100%) for F-FDG-PET at TP2, respectively. F-FDG-PET at TP1 detected lymphoma in at least one nodal or extranodal region in 7 of 13 patients, and F-FDG-PET at TP2 in 10 of 13 patients; accordingly, patient-based sensitivity was 53.8% (CI, 26.7%-80.9%) for F-FDG-PET at TP1 and 76.9% (CI, 54.0%-99.8%) for F-FDG-PET at TP2. Lesion-to-liver and lesion-to-blood maximum standardized uptake value ratios were significantly lower at TP1 (ratios, 1.05 ± 0.40 and 1.52 ± 0.62) than at TP2 (ratios, 1.67 ± 0.74 and 2.56 ± 1.10; P = 0.003 and P = 0.001). Delayed-time-point imaging may improve F-FDG-PET in MALT lymphoma.
Instance-based learning: integrating sampling and repeated decisions from experience.
Gonzalez, Cleotilde; Dutt, Varun
2011-10-01
In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options, which causes them to earn or lose money. In the repeated-choice paradigm, participants select 1 of the 2 options a fixed number of times and receive immediate outcome feedback that affects their earnings. These 2 experimental paradigms have been studied independently, and different cognitive processes have often been assumed to take place in each, as represented in widely diverse computational models. We demonstrate that behavior in these 2 paradigms relies upon common cognitive processes proposed by the instance-based learning theory (IBLT; Gonzalez, Lerch, & Lebiere, 2003) and that the stopping point is the only difference between the 2 paradigms. A single cognitive model based on IBLT (with an added stopping-point rule in the sampling paradigm) captures human choices and predicts the sequence of choice selections across both paradigms. We integrate the paradigms through quantitative model comparison, in which IBLT outperforms the best models created for each paradigm separately. We discuss the implications for the psychology of decision making. © 2011 American Psychological Association
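The shared mechanism — choices driven by recency-weighted blending of stored outcome instances — can be sketched in a heavily simplified form. This is not the ACT-R-based IBL model itself: the power-law decay, the exploration rate, and the deterministic toy payoffs are all illustrative assumptions:

```python
import random

def blended_value(instances, t, decay=0.5):
    # instances: list of (timestamp, outcome); recent memories weigh more
    weights = [(t - ts + 1) ** -decay for ts, _ in instances]
    return sum(w_i * o for w_i, (_, o) in zip(weights, instances)) / sum(weights)

def repeated_choice(payoffs, trials=200, explore=0.1, seed=7):
    # payoffs: one payoff function per option (deterministic toy stand-ins)
    rng = random.Random(seed)
    memory = {i: [] for i in range(len(payoffs))}
    picks = []
    for t in range(trials):
        unsampled = [i for i in memory if not memory[i]]
        if unsampled:                    # sample every option at least once
            choice = unsampled[0]
        elif rng.random() < explore:     # occasional exploration
            choice = rng.randrange(len(payoffs))
        else:                            # exploit the highest blended value
            choice = max(memory, key=lambda i: blended_value(memory[i], t))
        memory[choice].append((t, payoffs[choice]()))
        picks.append(choice)
    return picks
```

Adding a stopping rule over the same memory store would turn this repeated-choice loop into the sampling paradigm, which is the paper's unification point.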
Integrated evaluation of visually induced motion sickness in terms of autonomic nervous regulation.
Kiryu, Tohru; Tada, Gen; Toyama, Hiroshi; Iijima, Atsuhiko
2008-01-01
To evaluate visually induced motion sickness, we integrated subjective and objective responses in terms of autonomic nervous regulation. Twenty-seven subjects continuously viewed a 2-min-long first-person-view video section five times (10 min in total). The measured biosignals (RR interval, respiration, and blood pressure) were used to estimate indices related to autonomic nervous activity (ANA). We then determined trigger points and sensation sections based on the time-varying behavior of the ANA-related indices. We found that a suitable combination of biosignals can represent the symptoms of visually induced motion sickness. Based on this combination, integrating trigger points and subjective scores allowed us to represent the time distribution of subjective responses during visual exposure and helped us to understand what types of camera motion cause visually induced motion sickness.
Sun, Chenglu; Li, Wei; Chen, Wei
2017-01-01
To extract the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat that utilizes a flexible pressure sensor array, printed electrodes and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signals of all the pressure sensors embedded in the smart mat. To reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. With the CS-based method, the sampling time can be reduced by 40% by acquiring only about one-third of the original sampling points. Several experiments were then carried out to validate the performance of the CS-based method. When fewer than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrated that the novel method fits the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array. PMID:28796188
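The reduced-sampling idea rests on sparse recovery: measure m < n linear combinations of a signal and reconstruct it from the undersampled data. The paper does not specify its solver, so the sketch below uses generic Orthogonal Matching Pursuit with illustrative matrix sizes and sparsity:

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A @ x by Orthogonal Matching Pursuit."""
    r, support = y.astype(float).copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef          # re-fit jointly on the whole support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

In the mat's setting, A would combine the measurement pattern with a basis in which the pressure/respiration signal is sparse; here a random Gaussian A and a synthetic sparse vector stand in for both.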
Simulation and experiment of a fuzzy logic based MPPT controller for a small wind turbine system
NASA Astrophysics Data System (ADS)
Petrila, Diana; Muntean, Nicolae
2012-09-01
This paper describes the development of a fuzzy logic based maximum power point tracking (MPPT) strategy for a variable speed wind turbine system (VSWT). For this purpose, a fuzzy logic controller (FLC) was designed, simulated and tested on a real-time "hardware in the loop" wind turbine emulator. Simulation and experimental results show that the controller is able to track the maximum power point under various wind conditions and validate the proposed control strategy.
An interpretation model of GPR point data in tunnel geological prediction
NASA Astrophysics Data System (ADS)
He, Yu-yao; Li, Bao-qi; Guo, Yuan-shu; Wang, Teng-na; Zhu, Ya
2017-02-01
GPR (ground penetrating radar) point data plays an indispensable role in tunnel geological prediction. However, little research has been done on GPR point data, and existing results do not meet the actual requirements of projects. In this paper, a GPR point data interpretation model based on the WD (Wigner distribution) and a deep CNN (convolutional neural network) is proposed. First, the GPR point data are transformed by the WD to obtain a time-frequency joint distribution map; second, the joint distribution maps are classified by the deep CNN, and the approximate location of the geological target is determined by inspecting the time-frequency maps in parallel; finally, the GPR point data are interpreted according to the classification results and the position information from the map. The simulation results show that the classification accuracy on the test dataset (comprising 1200 GPR point data items) is 91.83% after 200 iterations. The model has the advantages of high accuracy and fast training speed, and can provide a scientific basis for developing tunnel construction and excavation plans.
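The first stage — mapping each trace to a time-frequency image — can be sketched with a discrete pseudo-Wigner distribution. This is one common discretization; the paper's windowing and sampling details may differ:

```python
import numpy as np

def pseudo_wigner(x):
    """W[n, k]: instantaneous autocorrelation over lag tau, then a DFT over tau."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        L = min(n, N - 1 - n)                 # largest symmetric lag at sample n
        taus = np.arange(-L, L + 1)
        corr = np.zeros(N, dtype=complex)
        corr[taus % N] = x[n + taus] * np.conj(x[n - taus])
        W[n] = np.fft.fft(corr).real
    return W
```

For a pure tone at frequency bin f0, the lag kernel oscillates at twice the signal frequency, so the energy of the distribution concentrates at bin 2*f0 — the doubling is a well-known property of the Wigner distribution.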
A new method for mapping multidimensional data to lower dimensions
NASA Technical Reports Server (NTRS)
Gowda, K. C.
1983-01-01
A multispectral mapping method is proposed which is based on the new concept of BEND (Bidimensional Effective Normalised Difference). The method, which involves taking one sample point at a time and finding the interrelationships between its features, is found to be very economical in terms of storage and processing time. It has good dimensionality-reduction and clustering properties, and is highly suitable for computer analysis of large amounts of data. The transformed values obtained by this procedure are suitable either for a planar 2-space mapping of geological sample points or for making grayscale and color images of geo-terrains. A few examples are given to demonstrate the efficacy of the proposed procedure.
The fast multipole method and point dipole moment polarizable force fields.
Coles, Jonathan P; Masella, Michel
2015-01-14
We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of this approach by performing single-point energy calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show long-time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple-time-step integrator. Our tests show the applicability of the fast multipole method combined with state-of-the-art chemical models to molecular dynamical systems.
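The core idea behind the O(N) scaling — replacing many pairwise Coulomb terms with a truncated multipole expansion about a distant cluster's center — can be sketched with the lowest orders only. A real FMM adds an octree, higher-order multipoles, and local expansions; the cluster geometry and charges below are arbitrary illustrations:

```python
import numpy as np

def direct_potential(q, src, target):
    # exact pairwise sum of q_i / |r - r_i| at one evaluation point
    return float(sum(qi / np.linalg.norm(target - s) for qi, s in zip(q, src)))

def multipole_potential(q, src, target):
    # truncated expansion about the cluster center: monopole + dipole terms
    c = src.mean(axis=0)
    Q = q.sum()                                # total charge (monopole)
    d = (q[:, None] * (src - c)).sum(axis=0)   # dipole moment of the cluster
    r = target - c
    rn = np.linalg.norm(r)
    return float(Q / rn + d @ r / rn**3)
```

The truncation error falls off as (cluster radius / distance)^2, so for well-separated source and target boxes a handful of moments replaces all N pairwise terms.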
Garcia-Roig, Michael; Ridley, Derrick E; McCracken, Courtney; Arlen, Angela M; Cooper, Christopher S; Kirsch, Andrew J
2017-04-01
The Vesicoureteral Reflux Index is a validated tool that reliably predicts spontaneous resolution of reflux or at least 2 grades of improvement for patients diagnosed before age 24 months. We evaluated the Vesicoureteral Reflux Index in children older than 2 years. Patients younger than 18 years who were diagnosed with primary vesicoureteral reflux after age 24 months and had undergone 2 or more voiding cystourethrograms were identified. Disease severity was scored using the Vesicoureteral Reflux Index, a 6-point scale based on gender, reflux grade, ureteral abnormalities and reflux timing. Proportional subdistribution hazard models for competing risks identified variables associated with resolution/improvement at different time points. A total of 21 males and 250 females met inclusion criteria. Mean ± SD age was 4.0 ± 2.1 years and patients had a median vesicoureteral reflux grade of 2. The Vesicoureteral Reflux Index score improved by 1 point in 1 patient (100%), 2 points in 25 (67.6%), 3 points in 48 (37%), 4 points in 18 (21.4%) and 5 to 6 points in 4 (18.2%). Female gender (p = 0.005) and vesicoureteral reflux timing (late filling, p = 0.002; early/mid filling, p <0.001) independently predicted nonresolution. Median resolution time based on Vesicoureteral Reflux Index score was 2 months or less in 15.6% of patients (95% CI 11.0-13.8), 3 months in 34.7% (95% CI 25.4-44.1), 4 months in 55.9% (95% CI 40.1 to infinity) and 5 months or more in 30.3% (95% CI 29.5 to infinity). High grade (IV or V) reflux was not associated with resolution at any point. Ureteral abnormalities were associated with lack of resolution in the first 12 to 18 months (HR 0.29, 95% CI 0.29-0.80) but not in later followup. Vesicoureteral Reflux Index scores of 3, 4 and 5 were significantly associated with lack of resolution/improvement compared to scores of 2 or less (p = 0.031). 
The Vesicoureteral Reflux Index reliably predicts primary vesicoureteral reflux improvement/resolution in children diagnosed after age 24 months. Spontaneous resolution/improvement is less likely as Vesicoureteral Reflux Index score and time from diagnosis increase. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor
Tayara, Hilal; Ham, Woonchul; Chong, Kil To
2016-01-01
This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in FPGA. Coplanar PosIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. Inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation. PMID:27983714
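The inverse square root approximation mentioned above is commonly realized with the bit-level "fast inverse square root" trick plus one Newton-Raphson refinement step. A sketch in Python using struct to mimic the 32-bit float reinterpretation that FPGA or soft-core code would do natively (the magic constant is the classic 0x5F3759DF, not necessarily the authors' choice):

```python
import struct

def fast_inv_sqrt(x):
    """Approximate 1/sqrt(x) by reinterpreting the float's bits."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]   # float bits as uint32
    i = 0x5F3759DF - (i >> 1)                          # magic initial guess
    y = struct.unpack('<I', struct.pack('<f', 0.0))[0] # (placeholder removed below)
    y = struct.unpack('<f', struct.pack('<I', i))[0]   # bits back to float
    return y * (1.5 - 0.5 * x * y * y)                 # one Newton-Raphson step
```

After the single Newton step the relative error is typically below 0.2%, which is why the technique suits fixed-budget embedded pipelines where a hardware divider or sqrt unit is unavailable.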
Fast intersection detection algorithm for PC-based robot off-line programming
NASA Astrophysics Data System (ADS)
Fedrowitz, Christian H.
1994-11-01
This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model managed by a simplified constructive solid geometry model (CSG model). The collision detection problem is divided into two steps. In the first step, the complexity of the problem is reduced in linear time. In the second step, the remaining solids are tested for intersection. For this, the Simplex algorithm, known from linear optimization, is used: it computes a point common to two convex polyhedra, and the polyhedra intersect if and only if such a point exists. With the simplified geometric model of Ropsus, this step also runs in linear time, so the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resulting intersection polyhedron using the dual transformation.
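The second step — testing two convex polyhedra in H-representation for a common point — is a pure linear feasibility problem. A sketch using SciPy's linprog as the LP solver (an assumption: the original system uses its own Simplex implementation and also returns the intersection polyhedron, which is omitted here):

```python
import numpy as np
from scipy.optimize import linprog

def polyhedra_intersect(A1, b1, A2, b2):
    """Do {x : A1 x <= b1} and {x : A2 x <= b2} share a point?"""
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    # zero objective: any feasible point of the stacked system is a witness
    res = linprog(np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1], method="highs")
    return res.success
```

With the unit square [0,1]^2 encoded as four half-planes, a square shifted by 0.5 intersects it while a square shifted by 2 does not, and the LP reports exactly that.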
Kinect based real-time position calibration for nasal endoscopic surgical navigation system
NASA Astrophysics Data System (ADS)
Fan, Jingfan; Yang, Jian; Chu, Yakui; Ma, Shaodong; Wang, Yongtian
2016-03-01
Unanticipated, reactive patient motion during skull-base tumor resection forces recalibration of the nasal endoscopic tracking system. To accommodate the calibration process to the patient's movement, this paper develops a Kinect-based real-time position calibration method for a nasal endoscopic surgical navigation system. In this method, a Kinect scanner is employed to acquire a point-cloud volumetric reconstruction of the patient's head during surgery. A convex-hull-based registration algorithm then aligns the real-time image of the patient's head with a model built from the preoperative CT scans, dynamically recalibrating the tracking system whenever movement is detected. Experimental results confirmed the robustness of the proposed method, with a total tracking error within 1 mm even under relatively large motions. These results indicate that tracking accuracy can be maintained stably and that calibration of the tracking system can be expedited under strong interference, demonstrating suitability for a wide range of surgical applications.
Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher
2012-01-01
Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
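As a concrete instance of the partitioning/regression families surveyed above, a single mean-shift detector based on the CUSUM-type between-segment contrast can be sketched as follows (a textbook method, not one of the paper's case analyses):

```python
import numpy as np

def mean_shift_point(x):
    """Index k that best splits x into two segments with different means."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # scaled between-segment contrast for every candidate split point
    stats = [k * (n - k) / n * (x[:k].mean() - x[k:].mean()) ** 2
             for k in range(1, n)]
    return int(np.argmax(stats)) + 1
```

Applied to a session-by-session alliance score series, the returned index would mark the session at which the level shifts — a candidate rupture or resolution point to inspect clinically.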
Terrain modeling for real-time simulation
NASA Astrophysics Data System (ADS)
Devarajan, Venkat; McArthur, Donald E.
1993-10-01
There are many applications, such as pilot training, mission rehearsal, and hardware-in-the-loop simulation, which require the generation of realistic images of terrain and man-made objects in real time. One approach to meeting this requirement is to drape photo-texture over a planar-polygon model of the terrain. The real-time system then computes, for each pixel of the output image, an address in a texture map based on the intersection of the line-of-sight vector with the terrain model. High-quality image generation requires that the terrain be modeled with a fine mesh of polygons, while hardware costs limit the number of polygons that may be displayed in each scene. The trade-off between these conflicting requirements must be made in real time because it depends on the changing position and orientation of the pilot's eye point or simulated sensor. The traditional approach is to develop a database consisting of multiple levels of detail (LODs) and then to select LODs for display as a function of range. This approach can lead both to anomalies in the displayed scene and to inefficient use of resources. An approach has been developed in which the terrain is modeled with a set of nested polygons and organized as a tree with each node corresponding to a polygon. This tree is pruned to select the optimum set of nodes for each eye-point position. As the point of view moves, the visibility of some nodes drops below the limit of perception and they may be deleted, while new points must be added in regions near the eye point. An analytical model has been developed to determine the number of polygons required for display. This model leads to quantitative performance measures of the triangulation algorithm, which is useful for optimizing system performance with a limited display capability.
NASA Astrophysics Data System (ADS)
Chen, Chun-Chi; Lin, Shih-Hao; Lin, Yi
2014-06-01
This paper proposes a time-domain CMOS smart temperature sensor featuring on-chip curvature correction and one-point calibration support for thermal management systems. Time-domain inverter-based temperature sensors, which exhibit the advantages of low power and low cost, have been proposed for on-chip thermal monitoring. However, the curvature of the thermal transfer curve is large, which substantially degrades accuracy as the temperature range increases. Another problem is that the inverter is sensitive to process variations, making it difficult for such sensors to achieve acceptable accuracy with one-point calibration. To overcome these two problems, a temperature-dependent oscillator with curvature correction is proposed to increase the linearity of the oscillation width, thereby avoiding a costly off-chip second-order master curve fitting. For one-point calibration support, an adjustable-gain time amplifier is adopted to eliminate the effect of process variations, with the assistance of a calibration circuit. The proposed circuit occupies a small area of 0.073 mm² and was fabricated in a TSMC 0.35-μm 2P4M digital CMOS process. The linearization of the oscillator and the cancellation of process variations enable the sensor, which features a fixed resolution of 0.049 °C/LSB, to achieve an inaccuracy of -0.8 °C to 1.2 °C after one-point calibration of 12 test chips from -40 °C to 120 °C. The power consumption was 35 μW at a sample rate of 10 samples/s.
Effect of Receiver Choosing on Point Positions Determination in Network RTK
NASA Astrophysics Data System (ADS)
Bulbul, Sercan; Inal, Cevat
2016-04-01
Nowadays, developments in GNSS techniques allow point positions to be determined in real time. Initially, point positioning was performed by RTK (Real Time Kinematic) based on a single reference station; to avoid systematic errors with this method, however, the distance between the reference point and the rover receiver must be shorter than 10 km. To overcome this restriction, the idea of using more than one reference station was suggested, and CORS (Continuously Operating Reference Stations) networks were put into practice. Today, countries such as the USA, Germany and Japan have established CORS networks. The CORS-TR network, which has 146 reference stations, was established in Turkey in 2009 following the active CORS approach: the reference stations covering the whole country are interconnected, and their positions and atmospheric corrections are calculated continuously. In this study, RTK measurements based on CORS-TR were made at a selected point with different receivers (JAVAD TRIUMPH-1, TOPCON Hiper V, MAGELLAN PRoMark 500, PENTAX SMT888-3G, SATLAB SL-600) and different correction techniques (VRS, FKP, MAC). The epoch interval was taken as 5 seconds and the measurement time as 1 hour. For each receiver and each correction technique, the means and the differences between the maximum and minimum values of the measured coordinates, the root mean square errors along the coordinate axes, and the 2D and 3D positioning precisions were calculated; the results were evaluated by statistical methods and the resulting graphics were interpreted. After evaluation of the measurements and calculations, for each receiver and each correction technique, the coordinate differences between maximum and minimum values were less than 8 cm, the root mean square errors along the coordinate axes less than ±1.5 cm, and the 2D and 3D point positioning precisions each less than ±1.5 cm. At the measurement point, it was concluded that the VRS correction technique generally performs better than the other correction techniques.
Overview of fast algorithm in 3D dynamic holographic display
NASA Astrophysics Data System (ADS)
Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian
2013-08-01
3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, huge amounts of 3D information must be processed and computed in real time to generate the hologram in a 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed for speeding up the calculation and reducing memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) algorithms based on the point-based method, and the fully analytical and one-step methods based on the polygon-based method. In this presentation, we review fast algorithms of both families, focusing on a fast algorithm with low memory usage, the C-LUT, and on the one-step polygon-based method derived from the 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient at saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.
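The point-based LUT family shares one trick: precompute the zone-plate fringe of a reference point once per depth plane and reuse it, shifted, for every object point on that plane. A simplified sketch of that idea (the geometry, wavelength, and pixel pitch are arbitrary illustrative values, and real N-LUT/C-LUT schemes compress the table much further):

```python
import numpy as np

def fringe(size, wavelength, z, pitch):
    # real-valued Fresnel zone pattern of one on-axis point at depth z
    ax = (np.arange(size) - size // 2) * pitch
    X, Y = np.meshgrid(ax, ax)
    return np.cos(2 * np.pi * np.sqrt(X**2 + Y**2 + z**2) / wavelength)

def lut_hologram(points, size=64, wavelength=532e-9, pitch=8e-6):
    # points: (col_shift, row_shift, depth, amplitude) per object point
    cache, H = {}, np.zeros((size, size))
    for px, py, z, amp in points:
        if z not in cache:                    # one table entry per depth plane
            cache[z] = fringe(size, wavelength, z, pitch)
        H += amp * np.roll(np.roll(cache[z], px, axis=1), py, axis=0)
    return H
```

Because the per-depth fringe is computed once and only shifted and scaled afterwards, the cost per object point drops from a full zone-plate evaluation to a memory copy — the source of the LUT family's speed-up.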
Complementing Operating Room Teaching With Video-Based Coaching.
Hu, Yue-Yung; Mazer, Laura M; Yule, Steven J; Arriaga, Alexander F; Greenberg, Caprice C; Lipsitz, Stuart R; Gawande, Atul A; Smink, Douglas S
2017-04-01
Surgical expertise demands technical and nontechnical skills. Traditionally, surgical trainees acquired these skills in the operating room; however, operative time for residents has decreased with duty hour restrictions. As in other professions, video analysis may help maximize the learning experience. To develop and evaluate a postoperative video-based coaching intervention for residents. In this mixed methods analysis, 10 senior (postgraduate year 4 and 5) residents were videorecorded operating with an attending surgeon at an academic tertiary care hospital. Each video formed the basis of a 1-hour one-on-one coaching session conducted by the operative attending; although a coaching framework was provided, participants determined the specific content collaboratively. Teaching points were identified in the operating room and the video-based coaching sessions; iterative inductive coding, followed by thematic analysis, was performed. Teaching points made in the operating room were compared with those in the video-based coaching sessions with respect to initiator, content, and teaching technique, adjusting for time. Among 10 cases, surgeons made more teaching points per unit time (63.0 vs 102.7 per hour) while coaching. Teaching in the video-based coaching sessions was more resident centered; attendings were more inquisitive about residents' learning needs (3.30 vs 0.28, P = .04), and residents took more initiative to direct their education (27% [198 of 729 teaching points] vs 17% [331 of 1977 teaching points], P < .001). Surgeons also more frequently validated residents' experiences (8.40 vs 1.81, P < .01), and they tended to ask more questions to promote critical thinking (9.30 vs 3.32, P = .07) and set more learning goals (2.90 vs 0.28, P = .11). 
More complex topics, including intraoperative decision making (mean, 9.70 vs 2.77 instances per hour, P = .03) and failure to progress (mean, 1.20 vs 0.13 instances per hour, P = .04) were addressed, and they were more thoroughly developed and explored. Excerpts of dialogue are presented to illustrate these findings. Video-based coaching is a novel and feasible modality for supplementing intraoperative learning. Objective evaluation demonstrates that video-based coaching may be particularly useful for teaching higher-level concepts, such as decision making, and for individualizing instruction and feedback to each resident.
Approximation methods for combined thermal/structural design
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Shore, C. P.
1979-01-01
Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
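The direct/reciprocal distinction above can be made concrete: both are first-order expansions built from the same function value and gradient, but the reciprocal form linearizes in y_i = 1/x_i, which is exact for functions of the form c/x (e.g., stress of a bar versus its cross-sectional area). A sketch with illustrative numbers:

```python
def direct_approx(f0, grads, x0, x):
    # first-order Taylor expansion in the design variables x_i
    return f0 + sum(g * (xi - x0i) for g, xi, x0i in zip(grads, x, x0))

def reciprocal_approx(f0, grads, x0, x):
    # same data, expanded in the reciprocal variables 1/x_i
    return f0 + sum(g * (xi - x0i) * (x0i / xi)
                    for g, xi, x0i in zip(grads, x, x0))

# Illustration: f(x) = 12/x expanded about x0 = 2, so f0 = 6 and df/dx = -3.
# At x = 3 the true value is 4; the direct expansion gives 3 (underestimate),
# while the reciprocal expansion reproduces 4 exactly.
d = direct_approx(6.0, [-3.0], [2.0], [3.0])
r = reciprocal_approx(6.0, [-3.0], [2.0], [3.0])
```

This is why, as the abstract notes, different members of the same first-order family can differ markedly in accuracy depending on how the response varies with the design variables.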
Jern, Patrick; Hakala, Outi; Kärnä, Antti; Gunst, Annika
2018-04-01
The aim of the present study was to investigate how women's tendency to pretend orgasm during intercourse is associated with orgasm function and intercourse-related pain, using a longitudinal design where temporal stability and possible causal relationships could be modeled. The study sample consisted of 1421 Finnish women who had participated in large-scale population-based data collections conducted at two time points 7 years apart. Pretending orgasm was assessed for the past 4 weeks, and orgasm function and pain were assessed using the Female Sexual Function Index for the past 4 weeks. Associations were also computed separately in three groups of women based on relationship status. Pretending orgasm was considerably variable over time, with 34% of the women having pretended orgasm a few times or more at least at one time point, and 11% having done so at both time points. Initial bivariate correlations revealed associations between pretending orgasm and orgasm problems within and across time, whereas associations with pain were more ambiguous. However, we found no support in the path model for the leading hypotheses that pretending orgasms would predict pain or orgasm problems over a long period of time, or that pain or orgasm problems would predict pretending orgasm. The strongest predictor of future pretending in our model was previous pretending (R² = .14). Relationship status did not seem to affect pretending orgasm in any major way.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganjeh, E., E-mail: navidganjehie@sina.kntu.ac.ir; Sarkhosh, H.; Bajgholi, M.E.
Microstructural features and mechanical properties developed in furnace brazing of Ti-6Al-4V alloy using STEMET 1228 (Ti-26.8Zr-13Ni-13.9Cu, wt.%) and STEMET 1406 (Zr-9.7Ti-12.4Ni-11.2Cu, wt.%) amorphous filler alloys were investigated. The brazing temperatures employed were 900-950 °C for the titanium-based filler and 900-990 °C for the zirconium-based filler alloys, respectively, and the brazing times were 600, 1200 and 1800 s. The brazed joints were evaluated by ultrasonic testing, and their microstructures and phase constitutions were analyzed by metallography, scanning electron microscopy and X-ray diffraction. Microstructural evolution across the furnace-brazed joints primarily depends on the distribution of alloying elements such as Cu, Ni and Zr along the joint. Accordingly, Zr2Cu, Ti2Cu and (Ti,Zr)2Ni intermetallic compounds were identified in the brazed joints. The chemical composition of the segregation region in the center of the brazed joints was identical to that of the virgin filler alloy, which greatly deteriorated the shear strength of the joints. Adequate brazing time (1800 s) and/or temperature (950 °C for the Ti-based and 990 °C for the Zr-based filler) resulted in an acicular Widmanstätten microstructure throughout the entire joint section due to a eutectoid reaction. This microstructure increased the shear strength of the brazed joints up to the level of the Ti-6Al-4V tensile strength. Consequently, Ti-6Al-4V can be furnace brazed with Ti- and Zr-based foils to produce excellent joint strengths. Highlights: Temperature and time are the main factors controlling braze joint strength. Developing a Widmanstätten microstructure yields strength equal to that of the base metal. Brittle intermetallic compounds such as (Ti,Zr)2Ni/Cu deteriorate shear strength. Ti- and Zr-based filler alloys are the best choice for brazing Ti-6Al-4V.
A Hadoop-Based Algorithm for Generating DEM Grids from Point Cloud Data
NASA Astrophysics Data System (ADS)
Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.
2015-04-01
Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information on terrain and surface objects within a short time, from which a high-quality Digital Elevation Model (DEM) can be extracted. Point cloud data generated from the pre-processed data must be classified by segmentation algorithms to distinguish terrain points from non-terrain points, followed by a procedure that interpolates the selected points into DEM data. Because of the high point density, the whole procedure takes a long time and huge computing resources, a problem that a number of studies have focused on. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for improving the efficiency of DEM generation algorithms. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner was used as the original data to generate a DEM with a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then compared. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is big enough, but the multi-node Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
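The map/reduce decomposition described above can be sketched in a minimal single-process form: the map step assigns each ground point to a DEM grid cell, and the reduce step interpolates a cell elevation from the points that fell into it. The grid spacing and the use of plain averaging (rather than, e.g., IDW or TIN interpolation) are illustrative assumptions, not details from the paper.

```python
from collections import defaultdict

CELL = 1.0  # assumed DEM grid spacing (metres)

def map_point(x, y, z):
    # Map step: key each LiDAR ground point by its DEM grid cell.
    return (int(x // CELL), int(y // CELL)), z

def reduce_cell(elevations):
    # Reduce step: interpolate a cell elevation; plain averaging here,
    # a real pipeline might use IDW or TIN interpolation instead.
    return sum(elevations) / len(elevations)

def build_dem(points):
    # Shuffle/group by cell key, then reduce each group to one elevation.
    groups = defaultdict(list)
    for x, y, z in points:
        key, elev = map_point(x, y, z)
        groups[key].append(elev)
    return {key: reduce_cell(zs) for key, zs in groups.items()}
```

In a real Hadoop job, `map_point` and `reduce_cell` would become the Mapper and Reducer, with HDFS handling the distribution of the point cloud across nodes.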
[Proposal of a costing method for the provision of sterilization in a public hospital].
Bauler, S; Combe, C; Piallat, M; Laurencin, C; Hida, H
2011-07-01
To refine billing to institutions whose sterilization operations are outsourced, a sterilization cost approach was developed. The aim of the study is to determine the value of a sterilization unit (one point "S"), which evolves according to investments, quantities processed, and types of instrumentation or packaging. The preparation time was selected from all sub-processes of sterilization to determine the value of one point S. The preparation times of sterilized large and small containers and pouches were recorded. The reference time corresponds to one pouch (equal to one point S). Simultaneously, the annual operating cost of sterilization was defined and divided into several areas of expenditure: employees, equipment and building depreciation, supplies, and maintenance. A total of 136 container crossing times were measured. The time to prepare a pouch was estimated at one minute (one S). A small container represents four S and a large container represents 10 S. By dividing the operating cost of sterilization by the total number of sterilization points over a given period, the cost of one S can be determined. This method differs from traditional costing methods in sterilization services by considering each item of expenditure. This point S will be the basis for billing subcontracts to other institutions. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
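The point-S arithmetic above reduces to dividing the operating cost by the weighted production volume. A minimal sketch, using the weights from the abstract (pouch = 1 S, small container = 4 S, large container = 10 S) but entirely invented cost and volume figures:

```python
# Weights from the abstract; all euro amounts and volumes below are
# hypothetical figures for illustration only.
S_WEIGHTS = {"pouch": 1, "small_container": 4, "large_container": 10}

def cost_per_point(operating_cost, volumes):
    """Cost of one point S: operating cost / total S points produced."""
    total_points = sum(S_WEIGHTS[item] * qty for item, qty in volumes.items())
    return operating_cost / total_points

# Example: annual volumes and operating cost (invented).
volumes = {"pouch": 50_000, "small_container": 8_000, "large_container": 3_000}
unit_cost = cost_per_point(224_000.0, volumes)          # euros per S
bill = unit_cost * S_WEIGHTS["large_container"] * 120   # bill for 120 large containers
```

Subcontract billing then follows directly: each processed item contributes its S weight times the current unit cost.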
Hilton, G; Unsworth, C A; Murphy, G C; Browne, M; Olver, J
2017-08-01
Longitudinal cohort design. First, to explore the longitudinal outcomes for people who received early intervention vocational rehabilitation (EIVR); second, to examine the nature and extent of relationships between contextual factors and employment outcomes over time. Both inpatient and community-based clients of a Spinal Community Integration Service (SCIS). People of workforce age undergoing inpatient rehabilitation for traumatic spinal cord injury were invited to participate in EIVR as part of SCIS. Data were collected at three time points: discharge, 1 year post discharge and 2+ years post discharge. Measures included the spinal cord independence measure, hospital anxiety and depression scale, impact on participation and autonomy scale, numerical pain-rating scale and personal wellbeing index. A range of chi-square, correlation and regression tests were undertaken to look for relationships between employment outcomes and demographic, emotional and physical characteristics. Ninety-seven participants were recruited and 60 were available at the final time point, at which 33% (95% confidence interval (CI): 24-42%) had achieved an employment outcome. Greater social participation was strongly correlated with wellbeing (ρ=0.692), and with reduced anxiety (ρ=-0.522), depression (ρ=-0.643) and pain (ρ=-0.427) at the final time point. In a generalised linear mixed-effect model, education status, relationship status and subjective wellbeing significantly increased the odds of being employed at the final time point. Tertiary education prior to injury was associated with eight times greater odds of being in employment at the final time point; being in a relationship at the time of injury was associated with more than 3.5 times greater odds; and subjective wellbeing, while the least powerful predictor, was still associated with greater odds (1.8 times) of being employed at the final time point.
EIVR shows promise in delivering similar return-to-work rates as those traditionally reported, but sooner. The dynamics around relationships, subjective wellbeing, social participation and employment outcomes require further exploration.
76 FR 2665 - Caribbean Fishery Management Council; Scoping Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-14
... time series of catch data that is considered to be consistently reliable across all islands as defined... based on what the Council considers to be the longest time series of catch data that is consistently... preferred management reference point time series. Action 3b. Recreational Bag Limits Option 1: No action. Do...
A novel mesh processing based technique for 3D plant analysis
2012-01-01
Background In recent years, imaging-based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis provides tremendous potential to accurately estimate specific morphological features cross-sectionally and monitor them over time. Results In this paper, we present a novel 3D mesh-based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and tracking of plant organs over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time points. Manual measurements, performed for each plant at every time point, were used to assess the accuracy of our pipeline and quantify the error in the estimated morphological parameters.
Conclusion By directly comparing our automated mesh-based quantitative data with manual measurements of individual stem height, leaf width and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of cases, and the average execution time required to analyse a plant over four time points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features. PMID:22553969
NASA Astrophysics Data System (ADS)
Zamani, Pooria; Kayvanrad, Mohammad; Soltanian-Zadeh, Hamid
2012-12-01
This article presents a compressive sensing approach for reducing data acquisition time in cardiac cine magnetic resonance imaging (MRI). In cardiac cine MRI, several images are acquired throughout the cardiac cycle, each of which is reconstructed from the raw data acquired in the Fourier transform domain, traditionally called k-space. In the proposed approach, a majority, e.g., 62.5%, of the k-space lines (trajectories) are acquired at the odd time points and a minority, e.g., 37.5%, of the k-space lines are acquired at the even time points of the cardiac cycle. Optimal data acquisition at the even time points is learned from the data acquired at the odd time points. To this end, statistical features of the k-space data at the odd time points are clustered by fuzzy c-means and the results are considered as the states of Markov chains. The resulting data is used to train hidden Markov models and find their transition matrices. Then, the trajectories corresponding to transition matrices far from an identity matrix are selected for data acquisition. At the end, an iterative thresholding algorithm is used to reconstruct the images from the under-sampled k-space datasets. The proposed approaches for selecting the k-space trajectories and reconstructing the images generate more accurate images compared to alternative methods. The proposed under-sampling approach achieves an acceleration factor of 2 for cardiac cine MRI.
Brownian Motion of Boomerang Colloidal Particles
NASA Astrophysics Data System (ADS)
Wei, Qi-Huo; Konya, Andrew; Wang, Feng; Selinger, Jonathan V.; Sun, Kai; Chakrabarty, Ayan
2014-03-01
We present experimental and theoretical studies on the Brownian motion of boomerang colloidal particles confined between two glass plates. Our experimental observations show that the mean displacements are biased towards the center of hydrodynamic stress (CoH), and that the mean-square displacements exhibit a crossover from short-time faster to long-time slower diffusion with the short-time diffusion coefficients dependent on the points used for tracking. A model based on Langevin theory elucidates that these behaviors are ascribed to the superposition of two diffusive modes: the ellipsoidal motion of the CoH and the rotational motion of the tracking point with respect to the CoH.
Nation Before Service: The Evolution of Joint Operations to a Capabilities-Based Mindset
2013-06-01
Identified as one of America's acupuncture points, cyberspace represents the soft underbelly that facilitates a majority of the world and US economy. [Footnote residue: Corpus, "America's Acupuncture Points," Asia Times Online (20 October 2006), http://www.atimes.com/atimes/China/HJ19Ad01.html (accessed 28 December 2012); see also Zimet and Barry, "Military Service," 287; Corpus, "America's Acupuncture Points," 1.] ... and futurist Ray Kurzweil postulated ...
1981-10-01
... a balance was drawn between experimental considerations (e.g., pretests and posttests) and training process considerations (e.g., available time and ...). [Remainder of snippet is list-of-figures residue: instructor's checkoff list, port approach area, training unit schedule, and pretest/posttest comparisons (CPA and number of radar requests) for Kings Point Group A (Day).]
Malik, Azhar H; Shimazoe, Kenji; Takahashi, Hiroyuki
2013-01-01
In order to obtain the plasma time activity curve (PTAC), the input function for almost all quantitative PET studies, patient blood is sampled manually from an artery or vein, which has various drawbacks. Recently, a novel compact Time-over-Threshold (ToT) based Pr:LuAG-APD animal PET tomograph was developed in our laboratory, which has 10% energy resolution, 4.2 ns time resolution and 1.76 mm spatial resolution. The measured spatial resolution shows much promise for imaging the blood vasculature, i.e., an artery of diameter 2.3-2.4 mm, and hence for measuring the PTAC for quantitative PET studies. To find the measurement time required to obtain reasonable counts for image reconstruction, the most important parameter is the sensitivity of the system. Usually, small-animal PET systems are characterized using a point source in air. We used the Electron Gamma Shower 5 (EGS5) code to simulate a point source at different positions inside the sensitive volume of the tomograph, and the axial and radial variations in sensitivity were studied in air and in a phantom-equivalent water cylinder. An average sensitivity difference of 34% in the axial direction and 24.6% in the radial direction is observed when the point source is placed inside the water cylinder instead of air.
Coping capacities for improving adaptation pathways for flood protection in Can Tho, Vietnam
NASA Astrophysics Data System (ADS)
Pathirana, A.; Radhakrishnan, M.; Quan, N. H.; Gersonius, B.; Ashley, R.; Zevenbergen, C.
2016-12-01
Studying the evolution of coping and adaptation capacities is a prerequisite for preparing an effective flood management plan for the future, especially in the dynamic and fast-changing cities of developing countries. The objectives, requirements, targets, design and performance of flood protection measures have to be determined after taking into account, or in conjunction with, these coping capacities. A methodology is presented based on adaptation pathways to account for coping capacities and to assess their effect on flood protection measures. The adaptation pathways method determines the point of failure of a particular strategy based on the change in an external driver, a point in time or a socio-economic situation at which the strategy can no longer meet its objective. Pathways arrived at with this methodology reflect future reality by considering changing engineering standards along with future uncertainties, risk-taking abilities and adaptation capacities. The methodology determines the adaptation tipping points (ATP) and the time of occurrence of the ATP of flood protection measures after accounting for coping capacities, evaluates the measures and then provides the means to determine the adaptation pathways. Application of this methodology to flood protection measures in Can Tho city in the Mekong delta reveals the effect of coping capacity on the usefulness of flood protection measures and the delay in the occurrence of tipping points. Consideration of the coping capacity in the system owing to elevated property floor levels led to the postponement of tipping points and improved the adaptation pathways comprising flood protection measures such as dikes. This information is useful to decision makers for planning and phasing investments in flood protection.
A novel in vitro image-based assay identifies new drug leads for giardiasis.
Hart, Christopher J S; Munro, Taylah; Andrews, Katherine T; Ryan, John H; Riches, Andrew G; Skinner-Adams, Tina S
2017-04-01
Giardia duodenalis is an intestinal parasite that causes giardiasis, a widespread human gastrointestinal disease. Treatment of giardiasis relies on a small arsenal of compounds that can suffer from limitations including side-effects, variable treatment efficacy and parasite drug resistance. Thus new anti-Giardia drug leads are required. The search for new compounds with anti-Giardia activity currently depends on assays that can be labour-intensive, expensive and restricted to measuring activity at a single time point. Here we describe a new in vitro assay to assess anti-Giardia activity. This image-based assay utilizes the Perkin-Elmer Operetta® and permits automated assessment of parasite growth at multiple time points without cell staining. Using this new approach, we assessed the "Malaria Box" compound set for anti-Giardia activity. Three compounds with sub-μM activity (IC50 0.6-0.9 μM) were identified as potential starting points for giardiasis drug discovery. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Real-time volcano monitoring using GNSS single-frequency receivers
NASA Astrophysics Data System (ADS)
Lee, Seung-Woo; Yun, Sung-Hyo; Kim, Do Hyeong; Lee, Dukkee; Lee, Young J.; Schutz, Bob E.
2015-12-01
We present a real-time volcano monitoring strategy that uses the Global Navigation Satellite System (GNSS), and we examine the performance of the strategy by processing simulated and real data and comparing the results with published solutions. The cost of implementing the strategy is reduced greatly by using single-frequency GNSS receivers except for one dual-frequency receiver that serves as a base receiver. Positions of the single-frequency receivers are computed relative to the base receiver on an epoch-by-epoch basis using the high-rate double-difference (DD) GNSS technique, while the position of the base station is fixed to the values obtained with a deferred-time precise point positioning technique and updated on a regular basis. Since the performance of the single-frequency high-rate DD technique depends on the conditions of the ionosphere over the monitoring area, the ionospheric total electron content is monitored using the dual-frequency data from the base receiver. The surface deformation obtained with the high-rate DD technique is eventually processed by a real-time inversion filter based on the Mogi point source model. The performance of the real-time volcano monitoring strategy is assessed through a set of tests and case studies, in which the data recorded during the 2007 eruption of Kilauea and the 2005 eruption of Augustine are processed in a simulated real-time mode. The case studies show that the displacement time series obtained with the strategy seem to agree with those obtained with deferred-time, dual-frequency approaches at the level of 10-15 mm. Differences in the estimated volume change of the Mogi source between the real-time inversion filter and previously reported works were in the range of 11 to 13% of the maximum volume changes of the cases examined.
Reproducibility of UAV-based earth surface topography based on structure-from-motion algorithms.
NASA Astrophysics Data System (ADS)
Clapuyt, François; Vanacker, Veerle; Van Oost, Kristof
2014-05-01
A representation of the earth surface at very high spatial resolution is crucial to accurately map small geomorphic landforms with high precision. Very high resolution digital surface models (DSMs) can then be used to quantify changes in earth surface topography over time, based on differencing of DSMs taken at various moments in time. However, it is essential to have both high accuracy for each topographic representation and consistency between measurements over time, as DSM differencing automatically leads to error propagation. This study investigates the reproducibility of reconstructions of earth surface topography based on structure-from-motion (SfM) algorithms. To this end, we equipped an eight-propeller drone with a standard reflex camera. This equipment can easily be deployed in the field, as it is a lightweight, low-cost system in comparison with classic aerial photo surveys and terrestrial or airborne LiDAR scanning. Four sets of aerial photographs were created for one test field. The sets of airphotos differ in focal length and viewing angle, i.e. nadir view and ground-level view. In addition, the importance of the accuracy of ground control points for the construction of a georeferenced point cloud was assessed using two different GPS devices with horizontal accuracy at the sub-meter and sub-decimeter level, respectively. The airphoto datasets were processed with the SfM algorithm and the resulting point clouds were georeferenced. Then, the surface representations were compared with each other to assess the reproducibility of the earth surface topography. Finally, consistency between independent datasets is discussed.
Park, Chihyun; Yun, So Jeong; Ryu, Sung Jin; Lee, Soyoung; Lee, Young-Sam; Yoon, Youngmi; Park, Sang Chul
2017-03-15
Cellular senescence irreversibly arrests the growth of human diploid cells. In addition, recent studies have indicated that senescence is a multi-step, evolving process related to important complex biological processes. Most studies analyzed only the genes and their functions representing each senescence phase without considering gene-level interactions and continuously perturbed genes. It is necessary to reveal the genotypic mechanism inferred by affected genes and their interactions underlying the senescence process. We suggest a novel computational approach to identify an integrative network which profiles an underlying genotypic signature from time-series gene expression data. The relatively perturbed genes were selected for each time point based on a proposed scoring measure termed the perturbation score. The selected genes were then integrated with protein-protein interactions to construct a time-point-specific network. From these constructed networks, the edges conserved across time points were extracted to form the common network, and a statistical test was performed to demonstrate that the network could explain the phenotypic alteration. As a result, it was confirmed that the difference in average perturbation scores of the common network between the two time points could explain the phenotypic alteration. We also performed functional enrichment on the common network and identified a high association with the phenotypic alteration. Remarkably, we observed that the identified cell-cycle-specific common network played an important role in replicative senescence as a key regulator. Heretofore, network analysis of time-series gene expression data has focused on how topological structure changes over time. Conversely, we focused on the conserved structure whose context changes over time, and showed that it can explain the phenotypic changes.
We expect that the proposed method will help to elucidate the biological mechanisms unrevealed by existing approaches.
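The common-network construction described above can be sketched as a set intersection: build one network per time point from the perturbed genes and the PPI data, then keep only the edges present at every time point. The gene and edge names below are hypothetical, and the perturbation-score selection step is assumed to have already produced the per-time-point gene sets.

```python
def time_point_networks(perturbed_genes_by_time, ppi_edges):
    # Build a time-point-specific network per time point: keep only PPI edges
    # whose two proteins are both among that time point's perturbed genes.
    nets = []
    for genes in perturbed_genes_by_time:
        nets.append({frozenset(e) for e in ppi_edges
                     if e[0] in genes and e[1] in genes})
    return nets

def common_network(nets):
    # The common network consists of the edges conserved across all time points.
    return set.intersection(*nets)
```

A statistical test on, e.g., the difference in average perturbation scores over these conserved edges would then follow, as in the abstract.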
The Stability of Perceived Pubertal Timing across Adolescence
Cance, Jessica Duncan; Ennett, Susan T.; Morgan-Lopez, Antonio A.; Foshee, Vangie A.
2011-01-01
It is unknown whether perceived pubertal timing changes as puberty progresses or whether it is an important component of adolescent identity formation that is fixed early in pubertal development. The purpose of this study is to examine the stability of perceived pubertal timing among a school-based sample of rural adolescents aged 11 to 17 (N=6,425; 50% female; 53% White). Two measures of pubertal timing were used: stage-normative, based on the Pubertal Development Scale (a self-report scale of secondary sexual characteristics), and peer-normative (a one-item measure of perceived pubertal timing). Two longitudinal methods were used: one-way random effects ANOVA models and latent class analysis. When calculating intraclass correlation coefficients using the one-way random effects ANOVA models, which are based on the average reliability from one time point to the next, both measures had similar, but poor, stability. In contrast, latent class analysis, which looks at the longitudinal response pattern of each individual and treats deviation from that pattern as measurement error, showed three stable and distinct response patterns for both measures: always early, always on-time, and always late. The study results suggest instability in perceived pubertal timing from one age to the next, but this instability is likely due to measurement error. Thus, it may be necessary to take into account the longitudinal pattern of perceived pubertal timing across adolescence rather than measuring perceived pubertal timing at one point in time. PMID:21983873
The algorithm of fast image stitching based on multi-feature extraction
NASA Astrophysics Data System (ADS)
Yang, Chunde; Wu, Ge; Shi, Jing
2018-05-01
This paper proposes an improved image registration method combining Hu-invariant-moment contour information with feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as a time-consuming feature point extraction process, overload of redundant invalid information, and inefficiency. First, the neighborhood of pixels is used to extract contour information, employing Hu invariant moments as the similarity measure, and SIFT feature points are extracted in the similar regions. Then the Euclidean distance is replaced with the Hellinger kernel to improve the initial matching efficiency and obtain fewer mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm is used to fuse the mosaic images and realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
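The Hellinger-kernel substitution mentioned above can be illustrated in a few lines: descriptors are L1-normalised and compared with k(a, b) = Σᵢ √(aᵢbᵢ), which equals 1 for identical descriptors. This is a generic sketch of the kernel on non-negative descriptors (as in RootSIFT-style matching), not the paper's exact matching pipeline.

```python
import math

def l1_normalize(v):
    s = sum(v)
    return [x / s for x in v]

def hellinger_kernel(d1, d2):
    # Hellinger kernel on L1-normalised, non-negative descriptors:
    # k(a, b) = sum_i sqrt(a_i * b_i); equals 1 for identical descriptors.
    a, b = l1_normalize(d1), l1_normalize(d2)
    return sum(math.sqrt(x * y) for x, y in zip(a, b))

def best_match(query, candidates):
    # Pick the candidate descriptor with the highest kernel similarity.
    return max(range(len(candidates)),
               key=lambda i: hellinger_kernel(query, candidates[i]))
```

Because the kernel rewards agreement in small descriptor bins rather than being dominated by large ones, it tends to produce fewer mismatches than Euclidean distance on SIFT histograms.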
Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger
2013-01-01
A common approach to high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum-order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor is used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and fixed-point number representation are evaluated in detail on a variety of processing platforms, enabling on-board processing on wearable sensor platforms.
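The fixed-point conversion explored above rests on a simple representation: scale each float by 2^frac_bits, round to an integer, and renormalise products by shifting. A minimal sketch of that arithmetic (the 16-bit fractional width is an assumption for illustration, not the bit-width the paper determined):

```python
def to_fixed(x, frac_bits=16):
    # Quantise a float to a signed fixed-point integer with frac_bits
    # fractional bits (Q-format).
    return round(x * (1 << frac_bits))

def from_fixed(q, frac_bits=16):
    return q / (1 << frac_bits)

def fixed_mul(a, b, frac_bits=16):
    # The raw product carries 2*frac_bits fractional bits, so shift right
    # once to return to the working format (as a microcontroller would).
    return (a * b) >> frac_bits
```

Exploring the "required bit-width" then amounts to sweeping `frac_bits` and measuring the resulting orientation estimation error against the floating-point filter.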
Quantum Field Theory on Spacetimes with a Compactly Generated Cauchy Horizon
NASA Astrophysics Data System (ADS)
Kay, Bernard S.; Radzikowski, Marek J.; Wald, Robert M.
1997-02-01
We prove two theorems which concern difficulties in the formulation of the quantum theory of a linear scalar field on a spacetime, (M,g_{ab}), with a compactly generated Cauchy horizon. These theorems demonstrate the breakdown of the theory at certain base points of the Cauchy horizon, which are defined as 'past terminal accumulation points' of the horizon generators. Thus, the theorems may be interpreted as giving support to Hawking's 'Chronology Protection Conjecture', according to which the laws of physics prevent one from manufacturing a 'time machine'. Specifically, we prove: Theorem 1. There is no extension to (M,g_{ab}) of the usual field algebra on the initial globally hyperbolic region which satisfies the condition of F-locality at any base point. In other words, any extension of the field algebra must, in any globally hyperbolic neighbourhood of any base point, differ from the algebra one would define on that neighbourhood according to the rules for globally hyperbolic spacetimes. Theorem 2. The two-point distribution for any Hadamard state defined on the initial globally hyperbolic region must (when extended to a distributional bisolution of the covariant Klein-Gordon equation on the full spacetime) be singular at every base point x in the sense that the difference between this two-point distribution and a local Hadamard distribution cannot be given by a bounded function in any neighbourhood (in M × M) of (x,x). In consequence of Theorem 2, quantities such as the renormalized expectation value of φ² or of the stress-energy tensor are necessarily ill-defined or singular at any base point. The proof of these theorems relies on the 'Propagation of Singularities' theorems of Duistermaat and Hörmander.
NASA Astrophysics Data System (ADS)
Wu, Bin; Yin, Hongxi; Qin, Jie; Liu, Chang; Liu, Anliang; Shao, Qi; Xu, Xiaoguang
2016-09-01
Aiming at the increasing demand for diversified services and flexible bandwidth allocation in future access networks, a flexible passive optical network (PON) scheme combining time- and wavelength-division multiplexing (TWDM) with a point-to-point wavelength-division multiplexing (PtP WDM) overlay is proposed for next-generation optical access networks in this paper. A novel software-defined optical distribution network (ODN) structure is designed based on wavelength selective switches (WSS), which can implement dynamic wavelength and bandwidth allocation and suits bursty traffic. The experimental results reveal that the TWDM-PON can provide 40 Gb/s downstream and 10 Gb/s upstream data transmission, while the PtP WDM-PON can support 10 GHz point-to-point dedicated bandwidth as the overlay complement system. The wavelengths of the TWDM-PON and PtP WDM-PON are allocated dynamically based on the WSS, which verifies the feasibility of the proposed structure.
Sampled control stability of the ESA instrument pointing system
NASA Astrophysics Data System (ADS)
Thieme, G.; Rogers, P.; Sciacovelli, D.
Stability analysis and simulation results are presented for the ESA Instrument Pointing System (IPS) that is to be used in Spacelab's second launch. Of the two IPS plant dynamic models used in the ESA and NASA activities, one is based on six interconnected rigid bodies that represent the IPS and its payload, while the other follows the NASA practice of defining an IPS-Spacelab 2 plant configuration through a structural finite element model, which is then used to generate modal data for various pointing directions. In both cases, the IPS dynamic plant model is truncated, then discretized at the sampling frequency and interfaced to a PID-based control law. A stability analysis has been carried out in the discrete domain for various instrument pointing directions, taking into account suitable parameter variation ranges. A number of time simulations are presented.
Advanced mobility handover for mobile IPv6 based wireless networks.
Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime
2014-01-01
We propose an Advanced Mobility Handover scheme (AMH) in this paper for seamless mobility in MIPv6-based wireless networks. In the proposed scheme, the mobile node (MN) utilizes a unique home IPv6 address, developed to maintain communication with other corresponding nodes without a care-of address during the roaming process. During the first round of the AMH process, the IPv6 address of each MN is uniquely identified by the home agent using the developed MN-ID field as a global permanent identifier. Moreover, a temporary MN-ID is generated by the access point (AP) each time an MN associates with a particular AP, and is temporarily saved in a table developed inside the AP. When the AMH scheme is employed, the network-layer handover process is performed prior to its default time: the network-layer mobility handover is initiated by a developed AMH trigger message sent to the next access point. Thus, a mobile node keeps communicating with the current access point while the network-layer handover is executed by the next access point. Mathematical analyses and simulation results show that the proposed scheme performs better than existing approaches.
Somerson, Jacob; Plaxco, Kevin W
2018-04-15
The ability to measure the concentration of specific small molecules continuously and in real time in complex sample streams would impact many areas of agriculture, food safety, and food production. Monitoring for mycotoxin taint in real time during food processing, for example, could improve public health. Towards this end, we describe here an inexpensive electrochemical DNA-based sensor that supports real-time monitoring of the mycotoxin ochratoxin A in a flowing stream of foodstuffs.
New Encryption Scheme of One-Time Pad Based on KDC
NASA Astrophysics Data System (ADS)
Xie, Xin; Chen, Honglei; Wu, Ying; Zhang, Heng; Wu, Peng
As leakage incidents occur more and more often, the traditional encryption system has not adapted to the complex and volatile network environment, so a new encryption system that can protect information security well is needed; this is the starting point of this paper. Building on the DES and RSA encryption systems, this paper proposes a new one-time pad scheme that truly achieves a "one-time pad" and provides a new, more reliable encryption method for information security.
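The "one-time pad" property the scheme aims for can be illustrated with a minimal sketch (this is the textbook XOR pad, not the paper's DES/RSA-based construction): XOR the message with a truly random key that is as long as the message and never reused; the same operation both encrypts and decrypts.

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with a key byte; the key must be
    # at least as long as the message and must never be reused.
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))  # fresh random pad for this message
ciphertext = otp_xor(message, key)
recovered = otp_xor(ciphertext, key)     # XOR is its own inverse
```

With a uniformly random single-use key, the ciphertext is information-theoretically secure; the practical difficulty (which motivates KDC-based schemes like the one proposed here) is distributing that much key material.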
An accelerated hologram calculation using the wavefront recording plane method and wavelet transform
NASA Astrophysics Data System (ADS)
Arai, Daisuke; Shimobaba, Tomoyoshi; Nishitsuji, Takashi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi
2017-06-01
Fast hologram calculation methods are critical in real-time holography applications such as three-dimensional (3D) displays. We recently proposed a wavelet transform-based hologram calculation called WASABI. Even though WASABI can decrease the calculation time of a hologram from a point cloud, its calculation time increases with increasing propagation distance. We also proposed a wavefront recording plane (WRP) method. This is a two-step fast hologram calculation in which the first step calculates the superposition of light waves emitted from a point cloud in a virtual plane, and the second step performs a diffraction calculation from the virtual plane to the hologram plane. A drawback of the WRP method arises in the first step when the point cloud has a large number of object points and/or a long distribution in the depth direction. In this paper, we propose a method combining WASABI and the WRP method in which the drawbacks of each can be complementarily solved. Using a consumer CPU, the proposed method succeeded in performing a hologram calculation with 2048 × 2048 pixels from a 3D object with one million points in approximately 0.4 s.
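A minimal sketch of the two-step WRP idea (with assumed wavelength, pitch, and geometry; not the authors' WASABI or WRP implementation): step one superposes spherical waves from the object points on a nearby virtual plane, and step two performs a single FFT-based angular-spectrum diffraction from that plane to the hologram plane.

```python
import numpy as np

wavelength = 532e-9
k = 2 * np.pi / wavelength
N, pitch = 256, 8e-6                # hologram resolution and pixel pitch (assumed)

# Step 1: superpose spherical waves from object points on a nearby WRP.
xs = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(xs, xs)
points = [(0.0, 0.0, 2e-3), (1e-4, -5e-5, 2.5e-3)]  # (x, y, depth behind WRP)
wrp = np.zeros((N, N), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    wrp += np.exp(1j * k * r) / r

# Step 2: propagate the WRP field to the hologram plane with the
# angular-spectrum method (one FFT-based diffraction calculation).
z = 0.05                            # WRP-to-hologram distance in meters
fx = np.fft.fftfreq(N, d=pitch)
FX, FY = np.meshgrid(fx, fx)
arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
H = np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0)))  # evanescent components cut
hologram = np.fft.ifft2(np.fft.fft2(wrp) * H)
```

The point of the WRP trick is visible in step 1: because the virtual plane is close to the points, the spherical-wave summation covers far fewer pixels than a direct summation onto the hologram plane would.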
Can, Mehmet Mustafa; Kaymaz, Cihangir
2010-08-01
Pulmonary arterial hypertension (PAH) is a rare, fatal and progressive disease. New therapies are emerging at an accelerating pace, in parallel with growing knowledge of the etiology and pathogenesis of PAH. Therefore, to optimize the goals of PAH-specific treatment and to determine the time to shift from monotherapy to combination therapy, simple, objective and reproducible end points that can predict disease severity, progression rate and life expectancy are needed. The evolution of end points in PAH began with the six-minute walk distance and functional capacity, and continues with newer parameters (biochemical markers, time to clinical worsening, echocardiography, magnetic resonance imaging, etc.) that better reflect clinical outcome.
2013-01-01
Background Mobile technology offers the potential to deliver health-related interventions to individuals who would not otherwise present for in-person treatment. Text messaging (short message service, SMS), being the most ubiquitous form of mobile communication, is a promising method for reaching the most individuals. Objective The goal of the present study was to evaluate the feasibility and preliminary efficacy of a smoking cessation intervention program delivered through text messaging. Methods Adult participants (N=60, age range 18-52 years) took part in a single individual smoking cessation counseling session, and were then randomly assigned to receive either daily non-smoking-related text messages (control condition) or the TXT-2-Quit (TXT) intervention. TXT consisted of automated smoking cessation messages tailored to the individual's stage of smoking cessation, specialized messages provided on demand based on user requests for additional support, and a peer-to-peer social support network. Generalized estimating equation analysis was used to assess the primary outcome (7-day point-prevalence abstinence) using a 2 (treatment groups) × 3 (time points) repeated measures design across three time points: 8 weeks, 3 months, and 6 months. Results Smoking cessation results showed an overall significant group difference in 7-day point-prevalence abstinence across all follow-up time points, with higher odds of abstinence for the TXT group than for the control (Mojo) group (OR=4.52, 95% CI=1.24-16.53). However, individual comparisons at each time point did not show significant between-group differences, likely due to reduced statistical power. Intervention feasibility was greatly improved by switching from traditional face-to-face recruitment methods (4.7% yield) to an online/remote strategy (41.7% yield).
Conclusions Although this study was designed to develop and provide initial testing of the TXT-2-Quit system, these initial findings provide promising evidence that a text-based intervention can be successfully implemented with a diverse group of adult smokers. Trial Registration ClinicalTrials.gov: NCT01166464; http://clinicaltrials.gov/ct2/show/NCT01166464 (Archived by WebCite at http://www.webcitation.org/6IOE8XdE0). PMID:25098502
van Eijk, Ruben PA; Eijkemans, Marinus JC; Rizopoulos, Dimitris
2018-01-01
Objective Amyotrophic lateral sclerosis (ALS) clinical trials based on single end points only partially capture the full treatment effect when both function and mortality are affected, and may falsely dismiss efficacious drugs as futile. We aimed to investigate the statistical properties of several strategies for the simultaneous analysis of function and mortality in ALS clinical trials. Methods Based on the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database, we simulated longitudinal patterns of functional decline, defined by the revised amyotrophic lateral sclerosis functional rating scale (ALSFRS-R) and conditional survival time. Different treatment scenarios with varying effect sizes were simulated with follow-up ranging from 12 to 18 months. We considered the following analytical strategies: 1) Cox model; 2) linear mixed effects (LME) model; 3) omnibus test based on Cox and LME models; 4) composite time-to-6-point decrease or death; 5) combined assessment of function and survival (CAFS); and 6) test based on joint modeling framework. For each analytical strategy, we calculated the empirical power and sample size. Results Both Cox and LME models have increased false-negative rates when treatment exclusively affects either function or survival. The joint model has superior power compared to other strategies. The composite end point increases false-negative rates among all treatment scenarios. To detect a 15% reduction in ALSFRS-R decline and 34% decline in hazard with 80% power after 18 months, the Cox model requires 524 patients, the LME model 794 patients, the omnibus test 526 patients, the composite end point 1,274 patients, the CAFS 576 patients and the joint model 464 patients. Conclusion Joint models have superior statistical power to analyze simultaneous effects on survival and function and may circumvent pitfalls encountered by other end points. 
Optimizing trial end points is essential, as selecting suboptimal outcomes may disguise important treatment clues. PMID:29593436
NASA Astrophysics Data System (ADS)
Klapa, Przemyslaw; Mitka, Bartosz; Zygmunt, Mariusz
2017-12-01
The capability of obtaining a multimillion-point cloud in a very short time has made Terrestrial Laser Scanning (TLS) a widely used tool in many fields of science and technology. TLS accuracy matches traditional devices used in land surveying (tacheometry, GNSS-RTK), but like any measurement it is burdened with error, which affects the precise identification of objects based on their image in the form of a point cloud. A point's coordinates are determined indirectly by measuring angles and calculating the travel time of the electromagnetic wave. Each such component has a measurement error, which is translated into the final result. The XYZ coordinates of a measuring point are determined with some uncertainty, and the accuracy of these coordinates decreases as the distance to the instrument increases. The paper presents the results of an examination of the geometrical stability of a point cloud obtained by means of a terrestrial laser scanner, together with an accuracy evaluation of solids determined using the cloud. A Leica P40 scanner and two different settings of measuring points were used in the tests. The first concept involved placing a few balls in the field and then scanning them from various sides at similar distances. The second part of the measurement involved placing balls and scanning them a few times from one side but at varying distances from the instrument to the object. Each measurement encompassed a scan of the object with automatic determination of its position and geometry. The desk studies involved semiautomatic fitting of solids, measurement of their geometrical elements, and comparison of the parameters that determine their geometry and location in space. The differences in the measured geometrical elements of the balls and in the translation vectors of the solids' centres indicate geometrical changes of the point cloud depending on the scanning distance and parameters.
The results indicate the changes in the geometry of scanned objects depending on the point cloud quality and distance from the measuring instrument. Varying geometrical dimensions of the same element suggest also that the point cloud does not keep a stable geometry of measured objects.
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization (EM). The algorithm is built on the assumption that a point cloud can be seen as a mixture of Gaussian models, so the separation of ground points from non-ground points can be recast as the separation of the components of a Gaussian mixture. EM is applied to perform this separation: it computes maximum likelihood estimates of the mixture parameters, and with the estimated parameters the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired with the EM method. The proposed algorithm was tested on two different datasets used in practice. Experimental results showed that the proposed method filters non-ground points effectively. For quantitative evaluation, this paper adopted the dataset provided by the ISPRS; the proposed algorithm obtains a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
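The core EM separation can be sketched in one dimension on point heights. This is a simplification of the paper's method (which works on full point clouds and additionally exploits intensity); the data below are synthetic and the two-component model is assumed.

```python
import numpy as np

def em_two_gaussians(z, iters=50):
    """Fit a two-component 1-D Gaussian mixture to heights z via EM."""
    mu = np.array([z.min(), z.max()], dtype=float)    # crude initialization
    var = np.array([z.var(), z.var()]) + 1e-9
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        pdf = w / np.sqrt(2 * np.pi * var) * np.exp(-(z[:, None] - mu) ** 2 / (2 * var))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        w = nk / len(z)
        mu = (resp * z[:, None]).sum(axis=0) / nk
        var = (resp * (z[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return mu, var, resp

rng = np.random.default_rng(0)
ground = rng.normal(0.0, 0.3, 500)     # synthetic terrain heights
objects = rng.normal(6.0, 1.0, 200)    # synthetic buildings / vegetation
z = np.concatenate([ground, objects])
mu, var, resp = em_two_gaussians(z)
labels = resp.argmax(axis=1)           # label each point with the likelier component
```

After convergence, the component with the lower mean plays the role of "ground" and the other of "object", mirroring the paper's threshold-free labelling step.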
High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis
Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher
2015-01-01
Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m2 exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (rmse) with as few as six control points. Rmse decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.
Liu, Bao; Fan, Xiaoming; Huo, Shengnan; Zhou, Lili; Wang, Jun; Zhang, Hui; Hu, Mei; Zhu, Jianhua
2011-12-01
A method was established to analyse overlapped chromatographic peaks based on chromatographic-spectral data detected by a diode-array ultraviolet detector. In the method, the three-dimensional data were first de-noised and normalized; second, the differences among, and clustering of, the spectra at different time points were calculated; then the purity of the whole chromatographic peak was analysed and the region in which the spectra at different time points were stable was sought out. The feature spectra were extracted from this spectrum-stable region as the basic foundation. The nonnegative least-squares method was chosen to separate the overlapped peaks and obtain the flow curve based on the feature spectra. The three-dimensional resolved chromatographic-spectral peaks could then be obtained by matrix operations of the feature spectra with the flow curves. The results showed that this method could separate the overlapped peaks.
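The nonnegative least-squares step can be sketched as follows, assuming the feature spectra are already known. `scipy.optimize.nnls` stands in for whatever solver the authors used, and the spectra and elution profiles here are synthetic: each time point's measured spectrum is decomposed into nonnegative contributions of the feature spectra, yielding the flow curves.

```python
import numpy as np
from scipy.optimize import nnls

# Assumed feature spectra (rows: components, columns: wavelengths),
# e.g. extracted from the spectrum-stable regions of the peak.
S = np.array([[1.0, 0.6, 0.1, 0.0],
              [0.0, 0.3, 0.8, 1.0]])

# True flow curves (elution profiles) of two overlapped peaks over time.
t = np.linspace(0, 10, 50)
C_true = np.stack([np.exp(-(t - 4) ** 2), np.exp(-(t - 6) ** 2)], axis=1)

D = C_true @ S    # measured data matrix: time points x wavelengths

# Solve one nonnegative least-squares problem per time point:
# D[i] = S.T @ c_i with c_i >= 0 recovers the component contributions.
C_est = np.array([nnls(S.T, row)[0] for row in D])
```

In the noiseless case with linearly independent feature spectra, NNLS recovers the flow curves exactly; reconstructing `C_est @ S` then gives the resolved three-dimensional peaks.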
Duc, Myriam; Gaboriaud, Fabien; Thomas, Fabien
2005-09-01
The effects of experimental procedures on the acid-base consumption titration curves of montmorillonite suspensions were studied using continuous potentiometric titration. For that purpose, the hysteresis amplitude between the acid and base branches was found to be useful for systematically evaluating the impacts of storage conditions (wet or dried), the atmosphere in the titration reactor, the solid-liquid ratio, the time interval between successive increments, and the ionic strength. For storage conditions, the increase in hysteresis was significantly larger for clay stored longer in suspension and for drying procedures than for a "fresh" clay suspension. Titration carried out under air demonstrated carbonate contamination that could only be eliminated by performing experiments under inert gas. Interestingly, increasing the time interval between successive increments of titrant markedly amplified the hysteresis, which could be correlated with the slow kinetic process specifically observed for acid addition in acid media. Such kinetic behavior is probably associated with dissolution of the clay particles. However, the curves recorded at different ionic strengths under optimized conditions did not show the common intersection point required to define a point of zero charge. Nevertheless, the ionic strength dependence of the point of zero net proton charge suggested that the point of zero charge of sodic montmorillonite could be estimated as lower than 5.
Trends in Performance-Based Funding. Data Points: Volume 5, Issue 19
ERIC Educational Resources Information Center
American Association of Community Colleges, 2017
2017-01-01
States' use of postsecondary performance-based funding is intended to encourage colleges to improve student outcomes. The model relies on indicators such as course completion, time to degree, transfer rates, number of credentials awarded and the number of low-income and minority graduates served. Currently, 21 states use performance-based funding…
Mental Health Referrals: A Survey of Practicing School Psychologists
ERIC Educational Resources Information Center
Villarreal, Victor
2018-01-01
Schools are the most common entry point for accessing mental health services for school-age children. Children may access school-based services as part of a multitiered system of supports or other service models, such as school-based health centers. However, limitations associated with school-based services (e.g., lack of time, inability to…
Pourhassan, Mojgan; Neumann, Frank
2018-06-22
The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.; Iijima, Byron; Meyer, Robert; Bar-Sever, Yoaz; Accad, Elie
2004-01-01
This paper evaluates the performance of a single-frequency receiver using the 1-Hz differential corrections provided by NASA's global differential GPS system. While the dual-frequency user can eliminate the ionosphere error by taking a linear combination of observables, the single-frequency user must remove or calibrate this error by other means. To remove the ionosphere error, we take advantage of the fact that the group delay in the range observable and the carrier phase advance have the same magnitude but opposite sign. A way to calibrate this error is to use a real-time database of grid points computed by JPL's RTI (Real-Time Ionosphere) software. In both cases we evaluate the positional accuracy of a kinematic carrier-phase-based point positioning method on a global extent.
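The equal-magnitude, opposite-sign property is what makes a single-frequency ionosphere-free observable possible: averaging the range and phase measurements cancels the first-order ionospheric term (this combination is often called GRAPHIC, though the abstract does not name it). A toy numerical sketch with synthetic values, all in meters:

```python
# Synthetic L1 observables, for illustration only (values in meters).
rho = 21_000_000.0        # geometric range plus clock terms
iono = 7.5                # first-order ionospheric delay on L1
ambiguity = -1_234.5      # carrier-phase ambiguity term (N * wavelength)

pseudorange = rho + iono                  # group delay adds to the range
carrier_phase = rho - iono + ambiguity    # phase advance subtracts

# Averaging the two observables cancels the ionospheric term,
# leaving a half-ambiguity bias that is constant over a pass.
graphic = 0.5 * (pseudorange + carrier_phase)
assert abs(graphic - (rho + ambiguity / 2)) < 1e-6
```

The residual half-ambiguity behaves like a per-pass bias that the filter estimates, which is why this combination suits carrier-phase-based point positioning.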
NASA Astrophysics Data System (ADS)
Nurhaida, Subanar, Abdurakhman, Abadi, Agus Maman
2017-08-01
Seismic data are usually modelled using autoregressive processes. The aim of this paper is to find the arrival times of the seismic waves of Mt. Rinjani in Indonesia. Kitagawa's algorithm is used to detect the seismic P- and S-waves. The Householder transformation used in the algorithm makes it effective at finding the number of change points and the parameters of the autoregressive models. The results show that applying a Box-Cox transformation at the variable selection level makes the algorithm work well in detecting the change points. Furthermore, when the basic span of the subinterval is set to 200 seconds and the maximum AR order is 20, there are 8 change points, occurring at 1601, 2001, 7401, 7601, 7801, 8001, 8201 and 9601. Finally, the P- and S-wave arrival times are detected at times 1671 and 2045, respectively, using a precise detection algorithm.
Yakhforoshha, Afsaneh; Emami, Seyed Amir Hossein; Shahi, Farhad; Shahsavari, Saeed; Cheraghi, Mohammadali; Mojtahedzadeh, Rita; Mahmoodi-Bakhtiari, Behrooz; Shirazi, Mandana
2018-02-21
The task of breaking bad news (BBN) may be improved by combining simulation with art-based teaching methods. The aim of the present study was to assess the effect of integrating simulation with art-based teaching strategies on fellows' performance regarding BBN in Iran. The study was carried out using a quasi-experimental, interrupted time series design. The participants were selected from medical oncology fellows at two teaching hospitals of Tehran University of Medical Sciences (TUMS), Iran. Participants were trained through a workshop, followed by engagement with different types of art-based teaching methods. To assess the effectiveness of the integrated model, fellows' performance was rated by two independent raters (standardized patients (SPs) and faculty members) using the BBN assessment checklist, a tool that measures seven domains of BBN skill. Segmented regression was used to analyze the results. The performance of all oncology fellows (n = 19) was assessed at 228 time points during the study, with three time points rated before and three after the intervention by the two raters. Based on SP ratings, fellows' post-training performance scores showed significant level changes in three domains of the BBN checklist (B = 1.126, F = 3.221, G = 2.241; p < 0.05). Similarly, the significant level changes in fellows' scores rated by faculty members post-training were B = 1.091, F = 3.273, G = 1.724; p < 0.05. There was no significant change in the trend of fellows' performance after the intervention. Our results show that integrating simulation with art-based teaching strategies may help oncology fellows improve their communication skills in different facets of BBN performance. Iranian Registry of Clinical Trials ID: IRCT2016011626039N1.
Evaluation of effects of long term exposure on lethal toxicity with mammals.
Verma, Vibha; Yu, Qiming J; Connell, Des W
2014-02-01
The relationship between lethal exposure time (LT50) and lethal exposure concentration (LC50) has been evaluated over relatively long exposure times using a novel parameter, normal life expectancy (NLT), as a long-term toxicity point. The model equation, ln(LT50) = a·LC50^ν + b, where a, b and ν are constants, was evaluated by plotting ln(LT50) against LC50 using available inhalation toxicity data from 7 species of mammals. For each specific toxicant a single consistent relationship was observed for all mammals, with ν always <1. Use of NLT as a long-term toxicity point provided a valuable limiting point for long exposure times. For organic compounds, the Kow can be used to calculate the model constants a and ν where these are unknown. The model can be used to characterise toxicity to specific mammals and then be extended to estimate toxicity at any exposure time for other mammals. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
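Fitting the model equation ln(LT50) = a·LC50^ν + b to (LC50, LT50) data can be sketched with nonlinear least squares. The data and constants below are synthetic, not the paper's mammalian datasets:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(lc50, a, b, nu):
    # ln(LT50) = a * LC50**nu + b, with nu < 1 as observed for mammals
    return a * lc50 ** nu + b

# Synthetic (LC50, ln LT50) data generated from known constants.
a_true, b_true, nu_true = -0.8, 9.0, 0.5
lc50 = np.linspace(1.0, 100.0, 20)
ln_lt50 = model(lc50, a_true, b_true, nu_true)

# Recover the constants a, b, nu from the data.
popt, _ = curve_fit(model, lc50, ln_lt50, p0=(-1.0, 8.0, 0.6))
```

Once a, b and ν are estimated for one species, the fitted curve can be evaluated at any LC50 to predict ln(LT50), which is how the model extends toxicity estimates to other exposure times.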
Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.
2006-01-01
We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing
2009-07-01
Infrared images of sea backgrounds are notorious for their low signal-to-noise ratio; therefore, target recognition in such images with traditional methods is very difficult. In this paper, we present a novel target recognition method based on the integration of a visual attention computational model with a conventional approach (selective filtering and segmentation). The two distinct image-processing techniques are combined so as to utilize the strengths of both. The visual attention algorithm searches for the salient regions automatically, represents them by a set of winner points, and marks the salient regions with circles centered at these winner points. This provides a priori knowledge for the filtering and segmentation process. Based on each winner point, we construct a rectangular region to facilitate filtering and segmentation, and a labeling operation is then added selectively as required. Using the labeled information, we obtain the position of the region of interest from the final segmentation result, label the centroid on the corresponding original image, and complete the localization of the target. The processing time depends not on the size of the image but on the salient regions, so the time consumed is greatly reduced. The method was applied to the recognition of several kinds of real infrared images, and the experimental results reveal the effectiveness of the algorithm presented in this paper.
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. The model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
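One plausible reading of the SPRE forward-stepping rule can be sketched as follows: each new estimate is the prior outcome value multiplied by a ratio of Weibull densities evaluated at consecutive time points. The Weibull parameters and weight values are hypothetical, and the exact ratio used by the author may differ:

```python
import math

def weibull_pdf(t, shape, scale):
    """Two-parameter Weibull probability density."""
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

def spre_step(prev_outcome, t_prev, t_next, shape, scale):
    # Next estimate = prior outcome times a ratio of Weibull densities,
    # stepping the estimate forward in time (illustrative reading of SPRE).
    return prev_outcome * weibull_pdf(t_next, shape, scale) / weibull_pdf(t_prev, shape, scale)

# Step weekly weight estimates forward from a change point at week 4.
shape, scale = 0.9, 30.0   # hypothetical values "estimated at the change point"
weights = [92.0]           # kg at the change point (hypothetical)
for week in range(4, 12):
    weights.append(spre_step(weights[-1], week, week + 1, shape, scale))
```

With shape < 1 the density decreases monotonically, so each ratio is below one and the sketched trajectory declines, qualitatively matching a weight-loss response.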
Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images
Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun
2013-01-01
This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points where a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves rendering speed by over three times compared with the conventional algorithm, while image quality is well guaranteed. PMID:23424608
EFL Learners' Production of Questions over Time: Linguistic, Usage-Based, and Developmental Features
ERIC Educational Resources Information Center
Nekrasova-Beker, Tatiana M.
2011-01-01
The recognition of second language (L2) development as a dynamic process has led to different claims about how language development unfolds, what represents a learner's linguistic system (i.e., interlanguage) at a certain point in time, and how that system changes over time (Verspoor, de Bot, & Lowie, 2011). Responding to de Bot and…
Zhang, Xudong
2002-10-01
This work describes a new approach that allows an angle-domain human movement model to generate, via forward kinematics, Cartesian-space human movement representation with otherwise inevitable end-point offset nullified but much of the kinematic authenticity retained. The approach incorporates a rectification procedure that determines the minimum postural angle change at the final frame to correct the end-point offset, and a deformation procedure that deforms the angle profile accordingly to preserve maximum original kinematic authenticity. Two alternative deformation schemes, named amplitude-proportional (AP) and time-proportional (TP) schemes, are proposed and formulated. As an illustration and empirical evaluation, the proposed approach, along with two deformation schemes, was applied to a set of target-directed right-hand reaching movements that had been previously measured and modeled. The evaluation showed that both deformation schemes nullified the final frame end-point offset and significantly reduced time-averaged position errors for the end-point as well as the most distal intermediate joint while causing essentially no change in the remaining joints. A comparison between the two schemes based on time-averaged joint and end-point position errors indicated that overall the TP scheme outperformed the AP scheme. In addition, no statistically significant difference in time-averaged angle error was identified between the raw prediction and either of the deformation schemes, nor between the two schemes themselves, suggesting minimal angle-domain distortion incurred by the deformation.
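The two deformation schemes can be sketched as weighting functions that distribute the final-frame correction along an angle trajectory; this is a plausible reading of the AP/TP descriptions, not the paper's exact formulation, and the trajectory below is synthetic:

```python
import numpy as np

def deform_tp(theta, delta):
    """Time-proportional (TP) scheme: distribute the final-frame
    correction delta along the trajectory in proportion to elapsed time."""
    T = len(theta) - 1
    w = np.arange(T + 1) / T          # 0 at the first frame, 1 at the last
    return theta + delta * w

def deform_ap(theta, delta):
    """Amplitude-proportional (AP) scheme: distribute delta in
    proportion to the angle's excursion from its initial value."""
    excursion = np.abs(theta - theta[0])
    denom = excursion[-1] if excursion[-1] != 0 else 1.0
    return theta + delta * (excursion / denom)
```

Both schemes leave the first frame untouched and shift the final frame by exactly delta, which is the property that nullifies the end-point offset while the interior frames are only mildly perturbed.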
Enhancing Teacher Beliefs through an Inquiry-Based Professional Development Program
McKeown, Tammy R.; Abrams, Lisa M.; Slattum, Patricia W.; Kirk, Suzanne V.
2017-01-01
Inquiry-based instructional approaches are an effective means to actively engage students with science content and skills. This article examines the effects of an ongoing professional development program on middle and high school teachers’ efficacy beliefs, confidence to teach research concepts and skills, and science content knowledge. Professional development activities included participation in a week long summer academy, designing and implementing inquiry-based lessons within the classroom, examining and reflecting upon practices, and documenting ways in which instruction was modified. Teacher beliefs were assessed at three time points, pre- post- and six months following the summer academy. Results indicate significant gains in reported teaching efficacy, confidence, and content knowledge from pre- to post-test. These gains were maintained at the six month follow-up. Findings across the three different time points suggest that participation in the professional development program strongly influenced participants’ fundamental beliefs about their capacity to provide effective instruction in ways that are closely connected to the features of inquiry-based instruction. PMID:29732236
Giant Dirac point shift of graphene phototransistors by doped silicon substrate current
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shimatani, Masaaki; Ogawa, Shinpei, E-mail: Ogawa.Shimpei@eb.MitsubishiElectric.co.jp; Fujisawa, Daisuke
2016-03-15
Graphene is a promising new material for photodetectors due to its excellent optical properties and high-speed response. However, graphene-based phototransistors have low responsivity due to the weak light absorption of graphene. We have observed a giant Dirac point shift upon white light illumination in graphene-based phototransistors with n-doped Si substrates, but not in those with p-doped substrates. The source-drain current and substrate current were investigated with and without illumination for both p-type and n-type Si substrates. The decay time of the drain-source current indicates that the Si substrate, SiO2 layer, and metal electrode comprise a metal-oxide-semiconductor (MOS) capacitor due to the presence of defects at the interface between the Si substrate and the SiO2 layer. The difference in the diffusion time of the intrinsic majority carriers (electrons) and the photogenerated electron-hole pairs to the depletion layer delays the application of the gate voltage to the graphene channel. Therefore, the giant Dirac point shift is attributed to the n-type Si substrate current. This phenomenon can be exploited to realize high-performance graphene-based phototransistors.
Experimental results for the rapid determination of the freezing point of fuels
NASA Technical Reports Server (NTRS)
Mathiprakasam, B.
1984-01-01
Two methods for the rapid determination of the freezing point of fuels were investigated: an optical method, which detected the change in light transmission upon the disappearance of solid particles in the melted fuel; and a differential thermal analysis (DTA) method, which sensed the latent heat of fusion. A laboratory apparatus was fabricated to test the two methods. Cooling was provided by thermoelectric modules using an ice-water bath as a heat sink. The DTA method was later modified to eliminate the reference fuel. The data from the sample were digitized, and a point of inflection, which corresponds to the ASTM D-2386 freezing point (final melting point), was identified from the derivative. The apparatus was modified to cool the fuel to -60 C, and controls were added for maintaining a constant cooling rate, rewarming rate, and hold time at minimum temperature. A parametric series of tests was run for twelve fuels with freezing points from -10 C to -50 C, varying cooling rate, rewarming rate, and hold time. Based on the results, an optimum test procedure was established. The results showed good agreement with ASTM D-2386 freezing points and differential scanning calorimetry results.
Yu, Yifei; Luo, Linqing; Li, Bo; Guo, Linfeng; Yan, Jize; Soga, Kenichi
2015-10-01
The measured distance error caused by double peaks in a BOTDR (Brillouin optical time domain reflectometer) system is a kind of Brillouin scattering spectrum (BSS) deformation that, to the best of the authors' knowledge, is discussed and simulated for the first time in this paper. The double peak, as a kind of Brillouin spectrum deformation, is important for the enhancement of spatial resolution, measurement accuracy, and crack detection. Due to variations in the peak powers of the BSS along the fiber, the measured starting point of a step-shape frequency transition region is shifted, which results in distance errors. A zero-padded short-time Fourier transform (STFT) can restore the transition-induced double peaks in the asymmetric and deformed BSS, thus offering more accurate and quicker measurements than the conventional Lorentz-fitting method. The recovery method based on double-peak detection and the corresponding BSS deformation can be applied to calculate the real starting point, which improves the distance accuracy of the STFT-based BOTDR system.
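The benefit of zero padding a short transform can be illustrated with a plain FFT peak search: padding does not add information, but it refines the frequency grid on which the spectral peak is located, which is the mechanism the zero-padded STFT exploits. The sample rate, record length, and tone frequency below are illustrative, not values from the paper.

```python
import numpy as np

fs, n = 1000.0, 256                        # sample rate (Hz) and record length
t = np.arange(n) / fs
sig = np.sin(2 * np.pi * 101.0 * t)        # true spectral peak at 101.0 Hz

def peak_freq(x, nfft):
    """Frequency of the largest-magnitude bin. Zero padding to `nfft`
    samples refines the frequency grid from fs/n to fs/nfft."""
    spec = np.abs(np.fft.rfft(x, n=nfft))
    return np.fft.rfftfreq(nfft, d=1 / fs)[int(spec.argmax())]

err_raw = abs(peak_freq(sig, n) - 101.0)        # grid step ~3.9 Hz
err_pad = abs(peak_freq(sig, 16 * n) - 101.0)   # grid step ~0.24 Hz
```

With the unpadded grid the peak can only land on a multiple of fs/n, so the padded estimate locates it considerably more precisely.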
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1984-01-01
Concepts to save fuel while preserving airport capacity by combining time-based metering with profile descent procedures were developed. A computer algorithm was developed to provide the flight crew with the information needed to fly from an entry fix to a metering fix and arrive there at a predetermined time, altitude, and airspeed. The flight from the metering fix to an aim point near the airport was also calculated. The flight path is divided into several descent and deceleration segments. Descents are performed at constant Mach number or calibrated airspeed, whereas decelerations occur at constant altitude. The time and distance associated with each segment are calculated from point-mass equations of motion for a clean configuration with idle thrust. Wind and nonstandard atmospheric properties have a large effect on the flight path, and uncertainty in the descent Mach number has a large effect on the predicted flight time. Of the possible combinations of Mach number and calibrated airspeed for a descent, only small changes were observed in the fuel consumed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsh, I; Otto, M; Weichert, J
Purpose: The focus of this work is to perform Monte Carlo-based dosimetry for several pediatric cancer xenografts in mice treated with a novel radiopharmaceutical, 131I-CLR1404. Methods: Four mice for each tumor cell line were injected with 8–13 µCi/g of 124I-CLR1404. PET/CT images of each individual mouse were acquired at 5–6 time points over the span of 96–170 hours post-injection. Following acquisition, the images were co-registered, resampled, rescaled, corrected for partial volume effects (PVE), and masked. For this work, the pre-treatment PET images of 124I-CLR1404 were used to predict therapeutic doses from 131I-CLR1404 at each time point by assuming the same injection activity and accounting for the difference in physical decay rates. Tumors and normal tissues were manually contoured using anatomical and functional images. The CT and PET images were used in the Geant4 (v9.6) Monte Carlo simulation to define the geometry and source distribution, respectively. The total cumulated absorbed dose was calculated by numerically integrating the dose rate at each time point over all time on a voxel-by-voxel basis. Results: Spatial distributions of the absorbed dose rates and dose volume histograms, as well as mean, minimum, maximum, and total dose values for each ROI, were generated for each time point. Conclusion: This work demonstrates how mouse-specific MC-based dosimetry could potentially provide a more accurate characterization of the efficacy of novel radiopharmaceuticals in radionuclide therapy. This work is partially funded by NIH grant CA198392.
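The decay-rate correction and time integration described above can be sketched as follows. The half-lives are standard literature values; the time points and dose-rate curve are invented for illustration, and a single ROI curve stands in for the voxel-by-voxel computation.

```python
import numpy as np

# Physical half-lives in hours: 124I ~4.18 d, 131I ~8.02 d (literature values).
lam124 = np.log(2) / (4.18 * 24)
lam131 = np.log(2) / (8.02 * 24)

t = np.array([5.0, 24.0, 48.0, 96.0, 170.0])   # imaging time points, h
rate124 = np.array([9.0, 6.5, 4.1, 1.8, 0.5])  # measured dose rate (arb. units)

# Remove 124I physical decay and apply 131I decay, assuming identical
# injected activity and biokinetics, as the abstract describes.
rate131 = rate124 * np.exp((lam124 - lam131) * t)

# Cumulated dose: numerical time integration (trapezoid rule) over the
# sampled dose rates.
cum_dose = float(np.sum(0.5 * (rate131[1:] + rate131[:-1]) * np.diff(t)))
```

Because 131I decays more slowly than 124I, the predicted therapeutic dose rate sits above the measured imaging-surrogate curve at every post-injection time point.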
Brovarets', Ol'ha O; Hovorun, Dmytro M
2014-01-01
The ground-state tautomerization of the G·C Watson-Crick base pair by the double proton transfer (DPT) was comprehensively studied in vacuo and in the continuum with a low dielectric constant (ϵ = 4), corresponding to a hydrophobic interface of protein-nucleic acid interactions, using DFT and MP2 levels of quantum-mechanical (QM) theory and quantum theory "Atoms in molecules" (QTAIM). Based on the sweeps of the electron-topological, geometric, polar, and energetic parameters, which describe the course of the G·C ↔ G*·C* tautomerization (mutagenic tautomers of the G and C bases are marked with an asterisk) through the DPT along the intrinsic reaction coordinate (IRC), it was proved that it is, strictly speaking, a concerted asynchronous process both at the DFT and MP2 levels of theory, in which protons move with a small time gap in vacuum, while this time delay noticeably increases in the continuum with ϵ = 4. It was demonstrated using the conductor-like polarizable continuum model (CPCM) that the continuum with ϵ = 4 does not qualitatively affect the course of the tautomerization reaction. The DPT in the G·C Watson-Crick base pair occurs without any intermediates both in vacuum and in the continuum with ϵ = 4 at the DFT/MP2 levels of theory. The nine key points along the IRC of the G·C base pair tautomerization, which could be considered as electron-topological "fingerprints" of a concerted asynchronous process of the tautomerization via the DPT, have been identified and fully characterized. These key points have been used to define the reactant, transition state, and product regions of the DPT reaction in the G·C base pair. Analysis of the energetic characteristics of the H-bonds allows us to arrive at a definite conclusion that the middle N1H⋯N3/N3H⋯N1 and the lower N2H⋯O2/N2H⋯O2 parallel H-bonds in the G·C/G*·C* base pairs, respectively, are anticooperative, that is, the strengthening of the middle H-bond is accompanied by the weakening of the lower H-bond. 
At that point, the upper N4H⋯O6 and O6H⋯N4 H-bonds in the G·C and G*·C* base pairs, respectively, remain constant at the changes of the middle and the lower H-bonds at the beginning and at the ending of the G·C ↔ G*·C* tautomerization. Aiming to answer the question posed in the title of the article, we established that the G*·C* Löwdin's base pair satisfies all the requirements necessary to cause point mutations in DNA except its lifetime, which is much less than the period of time required for the replication machinery to forcibly dissociate a base pair into the monomers (several ns) during DNA replication. So, from the physicochemical point of view, the G*·C* Löwdin's base pair cannot be considered as a source of point mutations arising during DNA replication.
Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn
2015-01-01
Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect's 3D body-point time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body-point time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body-point time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman bias and limits of agreement. Body-point time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body-point time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived from a gold-standard 3D motion-registration system.
Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner.
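The Bland-Altman agreement statistics used above reduce to a few lines of arithmetic on paired measurements. This is a generic sketch of the standard method; the walking-speed numbers are invented, not data from the study.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement systems:
    mean difference +/- 1.96 standard deviations of the differences."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired walking speeds (m/s) from the two systems.
kinect   = np.array([1.31, 1.22, 1.40, 1.28, 1.35])
optotrak = np.array([1.30, 1.24, 1.38, 1.29, 1.33])
bias, lo, hi = bland_altman(kinect, optotrak)
```

A negligible bias with narrow limits of agreement, as reported in the abstract, means the two systems can be used interchangeably for these parameters.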
NASA Astrophysics Data System (ADS)
Spaans, K.; Hooper, A. J.
2017-12-01
The short revisit time and high data acquisition rates of current satellites have resulted in increased interest in the development of deformation monitoring and rapid disaster response capability, using InSAR. Fast, efficient data processing methodologies are required to deliver the timely results necessary for this, and also to limit computing resources required to process the large quantities of data being acquired. Contrary to volcano or earthquake applications, urban monitoring requires high resolution processing, in order to differentiate movements between buildings, or between buildings and the surrounding land. Here we present Rapid time series InSAR (RapidSAR), a method that can efficiently update high resolution time series of interferograms, and demonstrate its effectiveness over urban areas. The RapidSAR method estimates the coherence of pixels on an interferogram-by-interferogram basis. This allows for rapid ingestion of newly acquired images without the need to reprocess the earlier acquired part of the time series. The coherence estimate is based on ensembles of neighbouring pixels with similar amplitude behaviour through time, which are identified on an initial set of interferograms, and need be re-evaluated only occasionally. By taking into account scattering properties of points during coherence estimation, a high quality coherence estimate is achieved, allowing point selection at full resolution. The individual point selection maximizes the amount of information that can be extracted from each interferogram, as no selection compromise has to be reached between high and low coherence interferograms. In other words, points do not have to be coherent throughout the time series to contribute to the deformation time series. We demonstrate the effectiveness of our method over urban areas in the UK. 
We show how the algorithm successfully extracts high-density time series from full-resolution Sentinel-1 interferograms and distinguishes clearly between buildings and surrounding vegetation or streets. The fact that new interferograms can be processed separately from the remainder of the time series helps manage the high data volumes, both in space and time, generated by current missions.
Delphi, Maryam; Lotfi, M-Yones; Moossavi, Abdollah; Bakhshi, Enayatollah; Banimostafa, Maryam
2017-09-01
Previous studies have shown that interaural-time-difference (ITD) training can improve localization ability. Surprisingly little is, however, known about localization training vis-à-vis speech perception in noise based on interaural time differences in the envelope (ITD ENV). We sought to investigate the reliability of an ITD ENV-based training program in speech-in-noise perception among elderly individuals with normal hearing and speech-in-noise disorder. The present interventional study was performed during 2016. Sixteen elderly men between 55 and 65 years of age with a clinical diagnosis of normal hearing up to 2000 Hz and speech-in-noise perception disorder participated in this study. The localization training program was based on changes in ITD ENV. In order to evaluate the reliability of the training program, we performed speech-in-noise tests before the training program, immediately afterward, and at 2 months' follow-up. The reliability of the training program was analyzed using the Friedman test in the SPSS software. Statistically significant differences were found in the mean scores of speech-in-noise perception across the 3 time points (P=0.001). The results also indicated no difference in the mean scores of speech-in-noise perception between the 2 time points of immediately after the training program and 2 months' follow-up (P=0.212). The present study showed the reliability of ITD ENV-based localization training in elderly individuals with speech-in-noise perception disorder.
TaqMan based real time PCR assay targeting EML4-ALK fusion transcripts in NSCLC.
Robesova, Blanka; Bajerova, Monika; Liskova, Kvetoslava; Skrickova, Jana; Tomiskova, Marcela; Pospisilova, Sarka; Mayer, Jiri; Dvorakova, Dana
2014-07-01
Lung cancer with the ALK rearrangement constitutes only a small fraction of patients with non-small cell lung cancer (NSCLC). However, in the era of molecular-targeted therapy, efficient patient selection is crucial for successful treatment. In this context, an effective method for EML4-ALK detection is necessary. We developed a new, highly sensitive, variant-specific TaqMan-based real-time PCR assay applicable to RNA from formalin-fixed paraffin-embedded tissue (FFPE). This assay was used to analyze the EML4-ALK gene in 96 non-selected NSCLC specimens and compared with two other methods (end-point PCR and break-apart FISH). EML4-ALK was detected in 33/96 (34%) specimens using variant-specific real-time PCR, whereas in only 23/96 (24%) using end-point PCR. All real-time PCR-positive samples were confirmed with direct sequencing. A total of 46 specimens were subsequently analyzed by all three detection methods. Using variant-specific real-time PCR we identified the EML4-ALK transcript in 17/46 (37%) specimens, using end-point PCR in 13/46 (28%) specimens, and positive ALK rearrangement by FISH was detected in 8/46 (17.4%) specimens. Moreover, using variant-specific real-time PCR, 5 specimens showed more than one EML4-ALK variant simultaneously (in 2 cases variants 1+3a+3b, in 2 specimens variants 1+3a, and in 1 specimen variants 1+3b). In 1 of the 96 cases, both the EML4-ALK fusion gene and an EGFR mutation were detected. All simultaneous genetic variants were confirmed using end-point PCR and direct sequencing. Our variant-specific real-time PCR assay is highly sensitive, fast, financially acceptable, applicable to FFPE, and seems to be a valuable tool for the rapid prescreening of NSCLC patients in clinical practice, so that most patients able to benefit from targeted therapy can be identified. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Implementation of a web-based medication tracking system in a large academic medical center.
Calabrese, Sam V; Williams, Jonathan P
2012-10-01
Pharmacy workflow efficiencies achieved through the use of an electronic medication-tracking system are described. Medication dispensing turnaround times at the inpatient pharmacy of a large hospital were evaluated before and after transition from manual medication tracking to a Web-based tracking process involving sequential bar-code scanning and real-time monitoring of medication status. The transition was carried out in three phases: (1) a workflow analysis, including the identification of optimal points for medication scanning with hand-held wireless devices, (2) the phased implementation of an automated solution and associated hardware at a central dispensing pharmacy and three satellite locations, and (3) postimplementation data collection to evaluate the impact of the new tracking system and areas for improvement. Relative to the manual tracking method, electronic medication tracking allowed the capture of far more data points, enabling the pharmacy team to delineate the time required for each step of the medication dispensing process and to identify the steps most likely to involve delays. A comparison of baseline and postimplementation data showed substantial reductions in overall medication turnaround times with the use of the Web-based tracking system (time reductions of 45% and 22% at the central and satellite sites, respectively). In addition to more accurate projections and documentation of turnaround times, the Web-based tracking system has facilitated quality-improvement initiatives. Implementation of an electronic tracking system for monitoring the delivery of medications provided a comprehensive mechanism for calculating turnaround times and allowed the pharmacy to identify bottlenecks within the medication distribution system. Altering processes removed these bottlenecks and decreased delivery turnaround times.
NASA Astrophysics Data System (ADS)
Li, Weidong; Shan, Xinjian; Qu, Chunyan
2010-11-01
In comparison with polar-orbiting satellites, geostationary satellites have a higher time resolution and a wider field of view, covering eleven time zones (a single image covers about one third of the Earth's surface). For a geostationary satellite panoramic image at a given point of time, the brightness temperatures of different zones cannot represent the thermal radiation of the surface at the same point of time because of differences in solar radiation. It is therefore necessary to calibrate the brightness temperatures of different zones to a common point of time. A model for calibrating the brightness temperature differences of geostationary satellite data generated by time zone differences is proposed in this study. A total of 16 curves for four positions in four different seasons are given through sample statistics of the brightness temperature of 5-day synthetic data from four different time zones (time zones 4, 6, 8, and 9). The four seasons span January-March (winter), April-June (spring), July-September (summer), and October-December (autumn). Three kinds of correction cases, with correction formulas based on the curve changes, are able to better eliminate brightness temperature rises or drops caused by time zone differences.
NASA Astrophysics Data System (ADS)
De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan
2016-11-01
A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We apply five different change point detection techniques to prototypical temporal networks, both empirical and synthetic. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and recall of the detected change points, we find that the method based on a degree-corrected SBM has better recall properties than the other dedicated methods, especially for sparse networks and smaller sliding time window widths.
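Scoring detected change points by precision and recall requires matching each true change point to at most one detection within some temporal tolerance. The greedy matching and the tolerance value below are a simple illustrative choice, not the evaluation protocol of the paper.

```python
def precision_recall(detected, true_points, tol=2):
    """Match each true change point to at most one detection within `tol`
    time steps; precision = matched / |detected|, recall = matched / |true|."""
    used = set()
    matched = 0
    for tp in true_points:
        for i, d in enumerate(detected):
            if i not in used and abs(d - tp) <= tol:
                used.add(i)
                matched += 1
                break
    return matched / len(detected), matched / len(true_points)

# Hypothetical detections on a 100-step temporal network.
p, r = precision_recall(detected=[10, 35, 61, 80], true_points=[11, 60, 81])
```

Here three of four detections fall within tolerance of a true change point, so recall is perfect while precision is penalized by the spurious detection at t=35.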
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2017-11-01
Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate a high-density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are next processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out, and the comparison results are reported.
Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis
NASA Astrophysics Data System (ADS)
Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song
2018-01-01
To resolve the problems of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract the feature points of the two images. Secondly, the Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, and the initial feature pairs are obtained. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes the feature point pairs with obvious errors from the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the image registration algorithm proposed in this paper can improve the accuracy of image matching while ensuring the real-time performance of the algorithm.
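The NCC score at the heart of the approximate matching step is straightforward to compute on a pair of patches. This is a generic sketch of the standard NCC definition, not code from the paper; the patches are synthetic.

```python
import numpy as np

def ncc(p, q):
    """Normalized cross-correlation between two equally sized patches.
    Subtracting the means and normalizing makes the score invariant to
    affine photometric changes (gain and offset)."""
    p = p - p.mean()
    q = q - q.mean()
    denom = np.sqrt((p * p).sum() * (q * q).sum())
    return float((p * q).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
patch = rng.standard_normal((8, 8))
same = patch * 2.0 + 3.0             # same patch under a gain/offset change
other = rng.standard_normal((8, 8))  # unrelated patch
```

A matching patch scores near 1 even under brightness and contrast changes, while an unrelated patch scores near 0, which is what makes NCC a useful approximate-matching criterion before the K-means and RANSAC filtering stages.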
Cross Validation on the Equality of Uav-Based and Contour-Based Dems
NASA Astrophysics Data System (ADS)
Ma, R.; Xu, Z.; Wu, L.; Liu, S.
2018-04-01
Unmanned Aerial Vehicles (UAV) have been widely used for Digital Elevation Model (DEM) generation in geographic applications. This paper proposes a novel framework for generating DEMs from UAV images. It starts with the generation of point clouds by image matching, where the flight control data are used as a reference for searching for the corresponding images, leading to significant time savings. Besides, a set of ground control points (GCP) obtained from field surveying is used to transform the point clouds to the user's coordinate system. Following that, we use a multi-feature-based supervised classification method to discriminate non-ground points from ground ones. In the end, we generate the DEM by constructing triangular irregular networks and rasterization. The experiments are conducted in the east of Jilin province in China, which has suffered from soil erosion for several years. The quality of the UAV-based DEM (UAV-DEM) is compared with that generated from contour interpolation (Contour-DEM). The comparison shows the higher resolution, as well as higher accuracy, of UAV-DEMs, which contain more geographic information. In addition, the RMSE errors of the UAV-DEMs generated from point clouds with and without GCPs are ±0.5 m and ±20 m, respectively.
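The RMSE figure quoted above is the standard root-mean-square difference between DEM elevations and check-point elevations. A minimal sketch, with invented elevation values rather than data from the Jilin experiment:

```python
import numpy as np

def dem_rmse(dem_z, gcp_z):
    """Root-mean-square elevation error of DEM heights sampled at check
    points against the surveyed check-point heights."""
    d = np.asarray(dem_z, dtype=float) - np.asarray(gcp_z, dtype=float)
    return float(np.sqrt(np.mean(d * d)))

# Hypothetical DEM elevations at four check points vs. surveyed values (m).
err = dem_rmse([102.3, 98.7, 110.1, 95.0], [102.0, 99.0, 110.6, 94.8])
```

Evaluating this statistic on independent check points, not the GCPs used for georeferencing, is what makes the ±0.5 m vs ±20 m comparison meaningful.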
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu
2017-01-01
The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slice extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
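A single Iterative Closest Point iteration, the core of the registration step above, can be sketched with brute-force nearest neighbours and the SVD (Kabsch) best-fit rigid transform. This is a generic textbook ICP step, not the paper's multithread affine implementation, and the point clouds are synthetic.

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: nearest-neighbour correspondences (brute force),
    then the best-fit rigid transform via the SVD (Kabsch) solution.
    Returns `src` moved by that transform."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    nn = dst[d2.argmin(axis=1)]            # closest dst point for each src point
    mu_s, mu_d = src.mean(0), nn.mean(0)
    H = (src - mu_s).T @ (nn - mu_d)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t

rng = np.random.default_rng(1)
cloud = rng.standard_normal((40, 3))
angle = 0.05                               # small rigid perturbation
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
moved = cloud @ Rz.T + np.array([0.02, -0.01, 0.03])
aligned = icp_step(cloud, moved)
```

For a small misalignment one step already brings the source cloud much closer to the target; in practice the step is iterated until the correspondences stop changing.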
Unidirectional invisibility induced by parity-time symmetric circuit
NASA Astrophysics Data System (ADS)
Lv, Bo; Fu, Jiahui; Wu, Bian; Li, Rujiang; Zeng, Qingsheng; Yin, Xinhua; Wu, Qun; Gao, Lei; Chen, Wan; Wang, Zhefei; Liang, Zhiming; Li, Ao; Ma, Ruyu
2017-01-01
Parity-time (PT) symmetric structures present unidirectional invisibility at the spontaneous PT-symmetry breaking point. In this paper, we propose a PT-symmetric circuit consisting of a resistor and a microwave tunnel diode (TD), which represent attenuation and amplification, respectively. Based on the scattering matrix method, the circuit can exhibit ideal unidirectional performance at the spontaneous PT-symmetry breaking point by tuning the transmission lines between the lumped elements. Additionally, the resistance of the reactance component can flexibly alter the bandwidth of the unidirectional invisibility. Furthermore, electromagnetic simulation of the proposed circuit validates the unidirectional invisibility and its synchronization with the input energy. Our work not only provides a unidirectional invisible circuit based on PT symmetry, but also proposes a potential solution for highly selective filter or cloaking applications.
[The added value of information summaries supporting clinical decisions at the point-of-care].
Banzi, Rita; González-Lorenzo, Marien; Kwag, Koren Hyogene; Bonovas, Stefanos; Moja, Lorenzo
2016-11-01
Evidence-based healthcare requires the integration of the best research evidence with clinical expertise and patients' values. International publishers are developing evidence-based information services and resources designed to overcome the difficulties in retrieving, assessing and updating medical information, as well as to facilitate rapid access to valid clinical knowledge. Point-of-care information summaries are defined as web-based medical compendia that are specifically designed to deliver pre-digested, rapidly accessible, comprehensive, and periodically updated information to health care providers. Their validity must be assessed against marketing claims that they are evidence-based. We periodically evaluate the content development processes of several international point-of-care information summaries. The number of these products has increased along with their quality. The last analysis, done in 2014, identified 26 products and found that three of them (Best Practice, Dynamed and Uptodate) scored the highest across all evaluated dimensions (volume, quality of the editorial process and evidence-based methodology). Point-of-care information summaries, as stand-alone products or integrated with other systems, are gaining ground to support clinical decisions. The choice of one product over another depends both on the properties of the service and the preferences of users. However, even the most innovative information system must rely on transparent and valid contents. Individuals and institutions should regularly assess the value of point-of-care summaries, as their quality changes rapidly over time.
Does Delayed-Time-Point Imaging Improve 18F-FDG-PET in Patients With MALT Lymphoma?
Mayerhoefer, Marius E.; Giraudo, Chiara; Senn, Daniela; Hartenbach, Markus; Weber, Michael; Rausch, Ivo; Kiesewetter, Barbara; Herold, Christian J.; Hacker, Marcus; Pones, Matthias; Simonitsch-Klupp, Ingrid; Müllauer, Leonhard; Dolak, Werner; Lukas, Julius; Raderer, Markus
2016-01-01
Purpose To determine whether, in patients with extranodal marginal zone B-cell lymphoma of the mucosa-associated lymphoid tissue (MALT lymphoma), delayed-time-point 2-18F-fluoro-2-deoxy-d-glucose positron emission tomography (18F-FDG-PET) performs better than standard-time-point 18F-FDG-PET. Materials and Methods Patients with untreated, histologically verified MALT lymphoma, who were undergoing pretherapeutic 18F-FDG-PET/computed tomography (CT) and consecutive 18F-FDG-PET/magnetic resonance imaging (MRI), using a single 18F-FDG injection, in the course of a larger-scale prospective trial, were included. Region-based sensitivity and specificity, and patient-based sensitivity, of the respective 18F-FDG-PET scans at time points 1 (45–60 minutes after tracer injection, TP1) and 2 (100–150 minutes after tracer injection, TP2), relative to the reference standard, were calculated. Lesion-to-liver and lesion-to-blood SUVmax (maximum standardized uptake value) ratios were also assessed. Results 18F-FDG-PET at TP1 was true positive in 15 of 23 involved regions, and 18F-FDG-PET at TP2 was true positive in 20 of 23 involved regions; no false-positive regions were noted. Accordingly, region-based sensitivities and specificities were 65.2% (confidence interval [CI], 45.73%–84.67%) and 100% (CI, 100%–100%) for 18F-FDG-PET at TP1, and 87.0% (CI, 73.26%–100%) and 100% (CI, 100%–100%) for 18F-FDG-PET at TP2, respectively. 18F-FDG-PET at TP1 detected lymphoma in at least one nodal or extranodal region in 7 of 13 patients, and 18F-FDG-PET at TP2 in 10 of 13 patients; accordingly, patient-based sensitivity was 53.8% (CI, 26.7%–80.9%) for 18F-FDG-PET at TP1, and 76.9% (CI, 54.0%–99.8%) for 18F-FDG-PET at TP2. Lesion-to-liver and lesion-to-blood maximum standardized uptake value ratios were significantly lower at TP1 (ratios, 1.05 ± 0.40 and 1.52 ± 0.62) than at TP2 (ratios, 1.67 ± 0.74 and 2.56 ± 1.10; P = 0.003 and P = 0.001).
Conclusions Delayed–time-point imaging may improve 18F-FDG-PET in MALT lymphoma. PMID:26402137
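The region-based sensitivities quoted above follow directly from the true-positive counts. A minimal sketch, assuming a normal-approximation (Wald) confidence interval; the abstract does not state which CI method was actually used:

```python
from math import sqrt

def sensitivity_with_ci(true_pos, total_pos, z=1.96):
    """Sensitivity (in percent) with a normal-approximation (Wald) CI.

    The Wald interval is an assumption here; the paper does not state
    which CI method was used.
    """
    p = true_pos / total_pos
    se = sqrt(p * (1 - p) / total_pos)
    lo, hi = max(0.0, p - z * se), min(1.0, p + z * se)
    return 100 * p, (100 * lo, 100 * hi)

# Region-based counts reported in the abstract: 15/23 (TP1) and 20/23 (TP2)
sens_tp1, ci_tp1 = sensitivity_with_ci(15, 23)
sens_tp2, ci_tp2 = sensitivity_with_ci(20, 23)
print(round(sens_tp1, 1), round(sens_tp2, 1))  # 65.2 87.0
```

The point estimates reproduce the reported 65.2% and 87.0%; the Wald interval also lands close to the reported CI bounds, which is consistent with (but does not prove) that choice of method.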
Shim, Hyunju; Ailshire, Jennifer; Zelinski, Elizabeth; Crimmins, Eileen
2018-05-25
The use of the internet for health information among older people is receiving increasing attention, but how it is associated with chronic health conditions and health service use at concurrent and subsequent time points using nationally representative data is less known. This study aimed to determine whether the use of the internet for health information is associated with health service utilization and whether the association is affected by specific health conditions. The study used data collected in a technology module from a nationally representative sample of community-dwelling older Americans aged 52 years and above from the 2012 Health and Retirement Study (HRS; N=991). Negative binomial regressions were used to examine the association between use of Web-based health information and the reported health service uses in 2012 and 2014. Analyses included additional covariates adjusting for predisposing, enabling, and need factors. Interactions between the use of the internet for health information and chronic health conditions were also tested. A total of 48.0% (476/991) of Americans aged 52 years and above reported using Web-based health information. The use of Web-based health information was positively associated with the concurrent reports of doctor visits, but not over 2 years. However, an interaction of using Web-based health information with diabetes showed that users had significantly fewer doctor visits compared with nonusers with diabetes at both times. The use of the internet for health information was associated with higher health service use at the concurrent time, but not at the subsequent time. The interaction between the use of the internet for health information and diabetes was significant at both time points, which suggests that health-related internet use may be associated with fewer doctor visits for certain chronic health conditions. 
Results provide some insight into how Web-based health information may provide an alternative health care resource for managing chronic conditions. ©Hyunju Shim, Jennifer Ailshire, Elizabeth Zelinski, Eileen Crimmins. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 25.05.2018.
Theory of two-point correlations of jet noise
NASA Technical Reports Server (NTRS)
Ribner, H. S.
1976-01-01
A large body of careful experimental measurements of two-point correlations of far field jet noise was carried out. The model of jet-noise generation is an approximate version of an earlier work of Ribner, based on the foundations of Lighthill. The model incorporates isotropic turbulence superimposed on a specified mean shear flow, with assumed space-time velocity correlations, but with source convection neglected. The particular vehicle is the Proudman format, and the previous work (mean-square pressure) is extended to display the two-point space-time correlations of pressure. The shape of polar plots of correlation is found to derive from two main factors: (1) the noncompactness of the source region, which allows differences in travel times to the two microphones - the dominant effect; (2) the directivities of the constituent quadrupoles - a weak effect. The noncompactness effect causes the directional lobes in a polar plot to have pointed tips (cusps) and to be especially narrow in the plane of the jet axis. In these respects, and in the quantitative shapes of the normalized correlation curves, results of the theory show generally good agreement with Maestrello's experimental measurements.
Pi, Zhi-bing; Tan, Guan-xian; Wang, Jun-lu
2007-07-17
To observe the effect of hydroxyethyl starch (HES) 130/0.4 on S100B protein levels and cerebral oxygen metabolism in open cardiac surgery under cardiopulmonary bypass (CPB), and to explore whether 6% HES 130/0.4 used as priming solution protects against cerebral injury during CPB and, if so, the probable mechanism. Forty patients with atrial septal defect or ventricular septal defect scheduled for elective surgical repair under CPB with moderate hypothermia were randomly divided into two equal groups: the HES 130/0.4 group (HES group), in which HES 130/0.4 (voluven) was used as priming solution, and the gelatin group (GEL group), in which gelofusine (succinylated gelatin) was used as priming solution. ECG, heart rate (HR), blood pressure (BP), mean arterial pressure (MAP), central venous pressure (CVP), arterial partial pressure of oxygen (P(a)O(2)), end-tidal partial pressure of carbon dioxide (P(et)CO(2)) and body temperature (naso-pharyngeal and rectal) were continuously monitored during the operation. Blood samples were obtained from the central vein for determination of blood concentrations of S100B protein at the following time points: before CPB (T(0)), 20 minutes after the beginning of CPB (T(1)), immediately after the termination of CPB (T(2)), 60 minutes after the termination of CPB (T(3)), and 24 hours after the termination of CPB (T(4)). The serum S100B protein levels were measured by ELISA. At the same time points blood samples were obtained from the jugular vein and radial artery for blood gas analysis and measurement of blood glucose, based on which the cerebral oxygen metabolic rate/cerebral metabolic rate of glucose (CMRO(2)/CMR(GLU)) was calculated.
Compared with the time point immediately before CPB (T(0)), the S100B protein levels of the two groups began to increase from time point T(1), peaked at T(2), decreased gradually from T(3), and at T(4) were still significantly higher than before CPB (all P < 0.01); the S100B protein levels at all time points in the HES group were significantly lower than those in the GEL group (all P < 0.01). The S(jv)O(2) (jugular venous oxygen saturation) and CMRO(2)/CMR(GLU) levels of both groups increased at T(1), decreased at T(2) and T(3), and returned to normal at T(4). In the GEL group there were no significant differences between any two time points; however, in the HES group the S(jv)O(2) and CMRO(2)/CMR(GLU) levels at T(1) were significantly higher than those at the other time points (P < 0.05 or P < 0.01). S100B protein increases significantly in open cardiac surgery under CPB. HES 130/0.4 lowers S100B protein levels from the beginning of CPB until one hour after its termination, probably by improving cerebral oxygen metabolism. 6% HES 130/0.4 as priming solution may play a protective role in reducing cerebral injury during CPB and open cardiac surgery.
Emotion Regulation Profiles, Temperament, and Adjustment Problems in Preadolescents
Zalewski, Maureen; Lengua, Liliana J.; Trancik, Anika; Wilson, Anna C.; Bazinet, Alissa
2014-01-01
The longitudinal relations of emotion regulation profiles to temperament and adjustment in a community sample of preadolescents (N = 196, 8–11 years at Time 1) were investigated using person-oriented latent profile analysis (LPA). Temperament, emotion regulation, and adjustment were measured at 3 different time points, with each time point occurring 1 year apart. LPA identified 5 frustration and 4 anxiety regulation profiles based on children’s physiological, behavioral, and self-reported reactions to emotion-eliciting tasks. The relation of effortful control to conduct problems was mediated by frustration regulation profiles, as was the relation of effortful control to depression. Anxiety regulation profiles did not mediate relations between temperament and adjustment. PMID:21413935
[Development of the automatic dental X-ray film processor].
Bai, J; Chen, H
1999-07-01
This paper introduces a multiple-point detection technique for the density of dental X-ray films. With the infrared multiple-point detection technique, a single-chip microcomputer control system is used to analyze the effectiveness of film developing in real time in order to achieve a good image. Based on this technology, we designed an intelligent automatic dental X-ray film processor.
Efficient Analysis of Mass Spectrometry Data Using the Isotope Wavelet
NASA Astrophysics Data System (ADS)
Hussong, Rene; Tholey, Andreas; Hildebrandt, Andreas
2007-09-01
Mass spectrometry (MS) has become today's de facto standard for high-throughput analysis in proteomics research. Its applications range from toxicity analysis to MS-based diagnostics. Often, the time spent on the MS experiment itself is significantly less than the time necessary to interpret the measured signals, since the amount of data can easily exceed several gigabytes. In addition, automated analysis is hampered by baseline artifacts, chemical as well as electrical noise, and an irregular spacing of data points. Thus, filtering techniques originating from signal and image analysis are commonly employed to address these problems. Unfortunately, smoothing, baseline reduction, and in particular a resampling of data points can affect important characteristics of the experimental signal. To overcome these problems, we propose a new family of wavelet functions based on the isotope wavelet, which is hand-tailored for the analysis of mass spectrometry data. The resulting technique is theoretically well-founded and compares very well with standard peak picking tools, since it is highly robust against noise spoiling the data, but at the same time sufficiently sensitive to detect even low-abundant peptides.
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2003-01-01
Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
Micromixer-based time-resolved NMR: applications to ubiquitin protein conformation.
Kakuta, Masaya; Jayawickrama, Dimuthu A; Wolters, Andrew M; Manz, Andreas; Sweedler, Jonathan V
2003-02-15
Time-resolved NMR spectroscopy is used to study changes in protein conformation based on the elapsed time after a change in the solvent composition of a protein solution. The use of a micromixer and a continuous-flow method is described where the contents of two capillary flows are mixed rapidly, and then the NMR spectra of the combined flow are recorded at precise time points. The distance after mixing the two fluids and the flow rates define the solvent-protein interaction time; this method allows the measurement of NMR spectra at precise mixing time points independent of spectral acquisition time. Integration of a micromixer and a microcoil NMR probe enables low-microliter volumes to be used without losing significant sensitivity in the NMR measurement. Ubiquitin, the model compound, changes its conformation from native to A-state at low pH and in 40% or higher methanol/water solvents. Proton NMR resonances of the His-68 and the Tyr-59 of ubiquitin are used to probe the conformational changes. Mixing ubiquitin and methanol solutions under low pH at microliter-per-minute flow rates yields both native and A-states. As the flow rate decreases, yielding longer reaction times, the population of the A-state increases. The micromixer-NMR system can probe reaction kinetics on a time scale of seconds.
Investigating the Accuracy of Point Clouds Generated for Rock Surfaces
NASA Astrophysics Data System (ADS)
Seker, D. Z.; Incekara, A. H.
2016-12-01
Point clouds which are produced by means of different techniques are widely used to model rocks and to obtain properties of rock surfaces such as roughness, volume and area. These point clouds can be generated by applying laser scanning and close range photogrammetry techniques. Laser scanning is the most common method to produce a point cloud. In this method, the laser scanner produces a 3D point cloud at regular intervals. In close range photogrammetry, a point cloud can be produced with the help of photographs taken under appropriate conditions, depending on developing hardware and software technology. Many photogrammetric software packages, both open source and commercial, currently support point cloud generation. Both methods are close to each other in terms of accuracy. Sufficient accuracy in the mm and cm range can be obtained with the help of a qualified digital camera and laser scanner. In both methods, field work is completed in less time than with conventional techniques. In close range photogrammetry, any part of a rock surface can be completely represented owing to overlapping oblique photographs. Although the data are comparable, the two methods differ considerably in cost. In this study, whether point clouds produced from photographs can be used instead of point clouds produced by a laser scanner is investigated. For this purpose, rock surfaces with complex and irregular shapes, located on the İstanbul Technical University Ayazaga Campus, were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on a 30 m × 10 m part of the rock surface. 2D and 3D analyses were performed for several regions selected from the point clouds of the surface models. The 2D analysis is area-based and the 3D analysis is volume-based. The analyses showed that the point clouds from the two methods are similar and can be used as alternatives to each other.
This showed that point clouds produced from photographs, which are economical and faster to acquire, can be used in several studies instead of point clouds produced by a laser scanner.
Slow Noncollinear Coulomb Scattering in the Vicinity of the Dirac Point in Graphene.
König-Otto, J C; Mittendorff, M; Winzer, T; Kadi, F; Malic, E; Knorr, A; Berger, C; de Heer, W A; Pashkin, A; Schneider, H; Helm, M; Winnerl, S
2016-08-19
The Coulomb scattering dynamics in graphene in energetic proximity to the Dirac point is investigated by polarization resolved pump-probe spectroscopy and microscopic theory. Collinear Coulomb scattering rapidly thermalizes the carrier distribution in k directions pointing radially away from the Dirac point. Our study reveals, however, that, in almost intrinsic graphene, full thermalization in all directions relying on noncollinear scattering is much slower. For low photon energies, carrier-optical-phonon processes are strongly suppressed and Coulomb mediated noncollinear scattering is remarkably slow, namely on a ps time scale. This effect is very promising for infrared and THz devices based on hot carrier effects.
Improvement on Timing Accuracy of LIDAR for Remote Sensing
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.
2018-05-01
The traditional timing discrimination technique for laser rangefinding in remote sensing suffers from limited measurement performance and large timing error, and cannot meet the requirements of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement in timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold forward by amplifying the received signal several times. Then, the timing information is sampled and the timing points are fitted using algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the signal received by the lidar is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by multiple amplification of the received signal and parameter fitting, and a timing accuracy of 4.63 ps is achieved.
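The core idea of the leading-edge improvement, that amplifying the received signal moves a fixed-threshold timing point earlier on the rising edge, can be sketched as follows; the pulse shape, gain, and sample spacing are hypothetical, not the paper's experimental values:

```python
def threshold_crossing(samples, dt, threshold):
    """Return the linearly interpolated time of the first upward
    threshold crossing, or None if the signal never crosses."""
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1 + frac) * dt
    return None

# Hypothetical received pulse: a linear rising edge sampled every 1 ps
dt = 1.0  # ps
pulse = [0.01 * i for i in range(100)]    # original signal
amplified = [4.0 * s for s in pulse]      # after 4x amplification

t0 = threshold_crossing(pulse, dt, 0.5)
t1 = threshold_crossing(amplified, dt, 0.5)
print(t0, t1)  # the amplified edge crosses the same threshold earlier
```

Because the fixed threshold is reached at a smaller fraction of the amplified pulse's amplitude, the timing point shifts toward the true pulse onset, which is the effect the method exploits.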
Effect of Energy Drinks on Discoloration of Silorane and Dimethacrylate-Based Composite Resins.
Ahmadizenouz, Ghazaleh; Esmaeili, Behnaz; Ahangari, Zohreh; Khafri, Soraya; Rahmani, Aghil
2016-08-01
This study aimed to assess the effects of two energy drinks on the color change (ΔE) of two methacrylate-based and a silorane-based composite resin after one week and one month. Thirty cubic samples were fabricated from Filtek P90, Filtek Z250 and Filtek Z350XT composite resins. All the specimens were stored in distilled water at 37°C for 24 hours. Baseline color values (L*a*b*) of each specimen were measured using a spectrophotometer according to the CIEL*a*b* color system. Ten randomly selected specimens from each composite were then immersed in the two energy drinks (Hype, Red Bull) and artificial saliva (control) for one week and one month. Color was re-assessed after each storage period and ΔE values were calculated. The data were analyzed using the Kruskal-Wallis and Mann-Whitney U tests. Filtek Z250 composite showed the highest ΔE irrespective of the solutions at both time points. After seven days and one month, the lowest ΔE values were observed in Filtek Z350XT and Filtek P90 composites immersed in artificial saliva, respectively. The ΔE values of Filtek Z250 and Z350XT composites induced by Red Bull and Hype energy drinks were not significantly different. Discoloration of Filtek P90 was higher in Red Bull energy drink at both time points. Prolonged immersion time in all three solutions increased the ΔE values of all composites. However, the ΔE values were within the clinically acceptable range (<3.3) at both time points.
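The ΔE values above are distances in CIEL*a*b* space. A minimal sketch assuming the CIE76 formula (the study's exact ΔE variant is not stated; the specimen readings below are hypothetical):

```python
from math import sqrt

def delta_e(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIEL*a*b* space."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical baseline and post-immersion readings for one specimen
baseline = (72.0, 1.5, 18.0)
after_immersion = (70.5, 2.0, 19.5)

de = delta_e(baseline, after_immersion)
print(round(de, 2))  # 2.18
print(de < 3.3)      # within the clinically acceptable range cited above
```

The 3.3 cutoff is the clinical acceptability threshold quoted in the abstract, not a property of the formula itself.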
Breakdown of Benford's law for birth data
NASA Astrophysics Data System (ADS)
Ausloos, M.; Herteliu, C.; Ileanu, B.
2015-02-01
Long birth time series for Romania are investigated from the point of view of Benford's law, distinguishing between families with a religious (Orthodox and Non-Orthodox) affiliation. The data extend from Jan. 01, 1905 to Dec. 31, 2001, i.e. over 97 years or 35 429 days. The results point to a drastic breakdown of Benford's law. Some interpretation is proposed, based on statistical aspects due to population sizes, rather than on the human thought constraints under which a breakdown of the law is usually expected. The breakdown of Benford's law clearly points to natural causes.
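For reference, Benford's law predicts first-digit probabilities P(d) = log10(1 + 1/d). A short sketch for computing the expected distribution and the observed first-digit frequencies of a data set (the helper names are illustrative, and the simple string-based digit extraction assumes plain decimal notation):

```python
from math import log10

def benford_expected():
    """Expected first-digit probabilities under Benford's law:
    P(d) = log10(1 + 1/d), d = 1..9."""
    return {d: log10(1 + 1 / d) for d in range(1, 10)}

def first_digit_freqs(values):
    """Observed first-digit frequencies of nonzero values written in
    plain decimal notation (no scientific notation)."""
    digits = [int(str(abs(v)).lstrip('0.')[0]) for v in values if v]
    return {d: digits.count(d) / len(digits) for d in range(1, 10)}

expected = benford_expected()
print(round(expected[1], 3))  # 0.301
```

Comparing `expected` with the observed frequencies (e.g., by a chi-square statistic) is the usual way such a breakdown is quantified.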
Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.
Ji, Ming; Xiong, Chengjie; Grundman, Michael
2003-10-01
In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on the Akaike Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative, implementing our hypothesis testing method to analyze Mini-Mental State Examination (MMSE) scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our results show that, despite a large amount of missing data, accelerated decline did occur for MMSE among AD patients. Our finding supports the clinical belief that a change point exists during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
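The alternative-hypothesis model can be illustrated by a broken-stick least-squares fit with a grid search over candidate change points and an AIC comparison against the constant-rate null. This is a simplified fixed-effects sketch on synthetic data, not the paper's random-effects model or its parametric bootstrap:

```python
import numpy as np

def fit_bilinear(t, y, candidates):
    """Grid-search the change point of a bilinear (broken-stick) model
    y = b0 + b1*t + b2*max(0, t - tau) by least squares, and compare its
    AIC against the constant-rate (single-slope) null model."""
    n = len(y)
    best = None
    for tau in candidates:
        X = np.column_stack([np.ones(n), t, np.maximum(0.0, t - tau)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if best is None or sse < best[1]:
            best = (tau, sse)
    tau_hat, sse_alt = best
    # Null model: a single straight line (constant rate of decline)
    X0 = np.column_stack([np.ones(n), t])
    b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
    sse_null = float(np.sum((y - X0 @ b0) ** 2))
    aic_alt = n * np.log(sse_alt / n) + 2 * 4    # b0, b1, b2, tau
    aic_null = n * np.log(sse_null / n) + 2 * 2  # b0, b1
    return tau_hat, aic_alt, aic_null

# Synthetic decline accelerating at t = 5, plus a small deterministic wiggle
t = np.arange(0.0, 11.0)
y = 30 - 0.5 * t - 1.0 * np.maximum(0.0, t - 5) + 0.01 * np.sin(t)
tau_hat, aic_alt, aic_null = fit_bilinear(t, y, candidates=range(2, 9))
print(tau_hat, aic_alt < aic_null)
```

With data that truly accelerate at t = 5, the grid search recovers that location and the bilinear model's AIC beats the constant-rate null, mirroring the AIC-based location estimate described in the abstract.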
Student Mobility, Dosage, and Principal Stratification in School-Based RCTs
ERIC Educational Resources Information Center
Schochet, Peter Z.
2013-01-01
In school-based randomized control trials (RCTs), a common design is to follow student cohorts over time. For such designs, education researchers usually focus on the place-based (PB) impact parameter, which is estimated using data collected on all students enrolled in the study schools at each data collection point. A potential problem with this…
Flight Deck Surface Trajectory-Based Operations
NASA Technical Reports Server (NTRS)
Foyle, David C.; Hooey, Becky L.; Bakowski, Deborah L.
2017-01-01
Surface Trajectory-Based Operations (STBO) is a future concept for surface operations where time requirements are incorporated into taxi operations to support surface planning and coordination. Pilot-in-the-loop flight deck simulations have been conducted to study flight deck display algorithms to aid pilots in complying with the time requirements of time-based taxi operations (i.e., at discrete locations in 3½D operations or at all points along the route in 4DT operations). The results of these studies (conformance, time-of-arrival error, eye-tracking data, and safety ratings) are presented. Flight deck simulation work done in collaboration with DLR is described. Flight deck research issues in future auto-taxi operations are also introduced.
Hoefer, Sebastian H; Sterz, Jasmina; Bender, Bernd; Stefanescu, Maria-Christina; Theis, Marius; Walcher, Felix; Sader, Robert; Ruesseler, Miriam
2017-03-28
Ensuring that all medical students achieve adequate clinical skills remains a challenge, yet the correct performance of clinical skills is critical for all fields of medicine. This study analyzes the influence of receiving feedback from teaching associates on achieving and maintaining expertise in the complex head and skull examination. All third-year students at a German university who completed the obligatory surgical skills lab training and surgical clerkship participated in this study. The students were randomized into two groups. Control group: lessons by an instructor and peer-based practical skills training. Intervention group: training by teaching associates who serve as the simulated patients being examined and provide direct feedback on student performance. Short- and long-term competence in head and skull examination (directly after the intervention and 4 months after the training) was measured. Statistical analyses were performed using SPSS Statistics version 19 (IBM, Armonk, USA). Parametric and non-parametric test methods were applied. As measures of correlation, Pearson correlations and Kendall's tau-b were calculated, and Cohen's d effect size was computed. A total of 181 students were included (90 intervention, 91 control). Of those 181 students, 81 agreed to be videotaped (32 in the control group and 49 in the TA group) and examined at time point 1. At both time points, the intervention group performed the examination significantly better than the control group (time point 1, p < .001; time point 2, rater 1 p = .009, rater 2 p = .015). The effect size (Cohen's d) was up to 1.422. The use of teaching associates for teaching complex practical skills is effective for short- and long-term retention. We anticipate the method could be easily translated to nearly every patient-based clinical skill, particularly with regard to a competence-based education of future doctors.
Nucleic acid-based electrochemical nanobiosensors.
Abi, Alireza; Mohammadpour, Zahra; Zuo, Xiaolei; Safavi, Afsaneh
2018-04-15
The detection of biomarkers using sensitive and selective analytical devices is critically important for the early stage diagnosis and treatment of diseases. The synergy between the high specificity of nucleic acid recognition units and the great sensitivity of electrochemical signal transductions has already shown promise for the development of efficient biosensing platforms. Yet nucleic acid-based electrochemical biosensors often rely on target amplification strategies (e.g., polymerase chain reactions) to detect analytes at clinically relevant concentration ranges. The complexity and time-consuming nature of these amplification methods impede moving nucleic acid-based electrochemical biosensors from laboratory-based to point-of-care test settings. Fortunately, advancements in nanotechnology have provided growing evidence that the recruitment of nanoscaled materials and structures can enhance the biosensing performance (particularly in terms of sensitivity and response time) to the level suitable for use in point-of-care diagnostic tools. This Review highlights the significant progress in the field of nucleic acid-based electrochemical nanobiosensing with the focus on the works published during the last five years. Copyright © 2017. Published by Elsevier B.V.
Elting, L S; Rubenstein, E B; Rolston, K; Cantor, S B; Martin, C G; Kurtin, D; Rodriguez, S; Lam, T; Kanesan, K; Bodey, G
2000-11-01
To determine whether antibiotic regimens with similar rates of response differ significantly in the speed of response and to estimate the impact of this difference on the cost of febrile neutropenia. The time point of clinical response was defined by comparing the sensitivity, specificity, and predictive values of alternative objective and subjective definitions. Data from 488 episodes of febrile neutropenia, treated with either of two commonly used antibiotics (coded A or B) during six clinical trials, were pooled to compare the median time to clinical response, days of antibiotic therapy and hospitalization, and estimated costs. Response rates were similar; however, the median time to clinical response was significantly shorter with A-based regimens (5 days) compared with B-based regimens (7 days; P =.003). After 72 hours of therapy, 33% of patients who received A but only 18% of those who received B had responded (P =.01). These differences resulted in fewer days of antibiotic therapy and hospitalization with A-based regimens (7 and 9 days) compared with B-based regimens (9 and 12 days, respectively; P <.04) and in significantly lower estimated median costs ($8,491 vs. $11,133 per episode; P =.03). Early discharge at the time of clinical response should reduce the median cost from $10,752 to $8,162 (P <.001). Despite virtually identical rates of response, time to clinical response and estimated cost of care varied significantly among regimens. An early discharge strategy based on our definition of the time point of clinical response may further reduce the cost of treating non-low-risk patients with febrile neutropenia.
Prediction of the Critical Curvature for LX-17 with the Time of Arrival Data from DNS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Jin; Fried, Laurence E.; Moss, William C.
2017-01-10
We extract the detonation shock front velocity, curvature, and acceleration from time-of-arrival data measured at grid points in direct numerical simulations of a 50-mm rate stick initiated by a disk source, with the ignition and growth reaction model and a JWL equation of state calibrated for LX-17. We compute the quasi-steady (D, κ) relation based on the extracted properties and predict the critical curvature of LX-17. We also propose an explicit formula that contains the failure turning point, obtained by optimization of the (D, κ) relation for LX-17.
An embedded controller for a 7-degree of freedom prosthetic arm.
Tenore, Francesco; Armiger, Robert S; Vogelstein, R Jacob; Wenstrand, Douglas S; Harshbarger, Stuart D; Englehart, Kevin
2008-01-01
We present results from an embedded real-time hardware system capable of decoding surface myoelectric signals (sMES) to control a seven-degree-of-freedom upper limb prosthesis. This is one of the first hardware implementations of sMES decoding algorithms and the most advanced controller to date. We compare decoding results from the device to simulation results from a real-time PC-based operating system. Performance of both systems is shown to be similar, with decoding accuracy greater than 90% for the floating-point software simulation and 80% for the fixed-point hardware and software implementations.
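The gap between floating-point and fixed-point decoding accuracy reported above stems from quantization error. An illustrative sketch of Q-format rounding and its effect on a weighted sum; the Q-format, weights, and features are hypothetical, not the paper's decoder:

```python
def quantize(x, frac_bits):
    """Round x to the nearest representable fixed-point value with
    frac_bits fractional bits (the usual Q-format rounding)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

# Hypothetical decoder weights and sMES features
weights = [0.123, -0.456, 0.789]
features = [1.5, 2.25, -0.875]

exact = sum(w * f for w, f in zip(weights, features))
fixed = sum(quantize(w, 8) * quantize(f, 8) for w, f in zip(weights, features))
print(abs(exact - fixed) < 0.01)  # small but nonzero quantization error
```

Each quantized operand carries up to half a least-significant-bit of error, and these errors accumulate through the decoder's arithmetic, which is one plausible source of the accuracy drop on fixed-point hardware.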
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation time, graphics processing unit (GPU) multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
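The simplification the method rests on, that the convolution of two Gaussians is again a Gaussian whose variances add, can be checked numerically in one dimension (the widths below are arbitrary illustration values):

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Normalized 1D Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

dx = 0.01
x = np.arange(-10.0, 10.0, dx)
g1 = gaussian(x, 0.0, 1.0)   # e.g. an image basis function
g2 = gaussian(x, 0.0, 2.0)   # e.g. a Gaussian point spread function

conv = np.convolve(g1, g2) * dx             # discrete convolution
xc = np.arange(len(conv)) * dx + 2 * x[0]   # support of the result

w = conv / conv.sum()                       # normalize to compute moments
mean = np.sum(xc * w)
std = np.sqrt(np.sum((xc - mean) ** 2 * w))
print(std)  # ~ 2.236 = sqrt(1**2 + 2**2)
```

Because the convolution of the two GRBF components stays Gaussian with a known width, the degraded image remains in the same continuous model, which is what reduces deconvolution to solving for the control-point coefficients.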
Mauk, Michael G.; Song, Jinzhao; Liu, Changchun; Bau, Haim H.
2018-01-01
Designs and applications of microfluidics-based devices for molecular diagnostics (Nucleic Acid Amplification Tests, NAATs) in infectious disease testing are reviewed, with emphasis on minimally instrumented, point-of-care (POC) tests for resource-limited settings. Microfluidic cartridges (‘chips’) that combine solid-phase nucleic acid extraction; isothermal enzymatic nucleic acid amplification; pre-stored, paraffin-encapsulated lyophilized reagents; and real-time or endpoint optical detection are described. These chips can be used with a companion module for separating plasma from blood through a combined sedimentation-filtration effect. Three reporter types (fluorescence, colorimetric dyes, and bioluminescence) and a new paradigm for end-point detection based on a diffusion-reaction column are compared. Multiplexing (parallel amplification and detection of multiple targets) is demonstrated. Low-cost detection and added functionality (data analysis, control, communication) can be realized using a cellphone platform with the chip. Some related and similar-purposed approaches by others are surveyed. PMID:29495424
Wardlaw, Bruce R.; Ellwood, Brooks B.; Lambert, Lance L.; Tomkin, Jonathan H.; Bell, Gordon L.; Nestell, Galina P.
2012-01-01
Here we establish a magnetostratigraphy susceptibility zonation for the three Middle Permian Global boundary Stratotype Sections and Points (GSSPs) that have recently been defined, located in Guadalupe Mountains National Park, West Texas, USA. These GSSPs, all within the Middle Permian Guadalupian Series, define (1) the base of the Roadian Stage (base of the Guadalupian Series), (2) the base of the Wordian Stage and (3) the base of the Capitanian Stage. Data from two additional stratigraphic successions in the region, equivalent in age to the Kungurian–Roadian and Wordian–Capitanian boundary intervals, are also reported. Based on low-field, mass specific magnetic susceptibility (χ) measurements of 706 closely spaced samples from these stratigraphic sections and time-series analysis of one of these sections, we (1) define the magnetostratigraphy susceptibility zonation for the three Guadalupian Series Global boundary Stratotype Sections and Points; (2) demonstrate that χ datasets provide a proxy for climate cyclicity; (3) give quantitative estimates of the time it took for some of these sediments to accumulate; (4) give the rates at which sediments were accumulated; (5) allow more precise correlation to equivalent sections in the region; (6) identify anomalous stratigraphic horizons; and (7) give estimates for timing and duration of geological events within sections.
Garcia-Vicente, Ana María; Pérez-Beteta, Julián; Pérez-García, Víctor Manuel; Molina, David; Jiménez-Londoño, German Andrés; Soriano-Castrejón, Angel; Martínez-González, Alicia
2017-08-01
The aim of the study was to investigate the influence of dual time point 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) positron emission tomography/x-ray computed tomography (PET/CT) on the standard uptake value (SUV) and volume-based metabolic variables of breast lesions and their relation with biological characteristics and molecular phenotypes. Retrospective analysis including 67 patients with locally advanced breast cancer (LABC). All patients underwent a dual time point [18F]FDG PET/CT, 1 h (PET-1) and 3 h (PET-2) after [18F]FDG administration. Tumors were segmented following a three-dimensional methodology. Semiquantitative metabolic variables (SUVmax, SUVmean, and SUVpeak) and volume-based variables (metabolic tumor volume, MTV, and total lesion glycolysis, TLG) were obtained. Biologic prognostic parameters, such as hormone receptor status, p53, HER2 expression, proliferation rate (Ki-67), and grading, were obtained. Molecular phenotypes and risk classification [low: luminal A; intermediate: luminal B HER2(-) or luminal B HER2(+); high: HER2 pure or triple negative] were established. Relations of clinical and biological variables with the metabolic parameters were studied. The relevance of each metabolic variable in the prediction of phenotype risk was assessed using a multivariate analysis. SUV-based variables and TLG obtained in PET-1 and PET-2 showed high and significant correlations between them. MTV and SUV variables (SUVmax, SUVmean, and SUVpeak) were only marginally correlated. Significant differences were found between mean SUV variables and TLG obtained in PET-1 and PET-2. High and significant associations were found between metabolic variables obtained in PET-1 and their counterparts in PET-2. Based on that, only relations of PET-1 variables with biological tumor characteristics were explored.
SUV variables showed associations with hormone receptor status (p < 0.001 and p = 0.001 for estrogen and progesterone receptors, respectively) and risk classification according to phenotype (SUVmax, p = 0.003; SUVmean, p = 0.004; SUVpeak, p = 0.003). As to volume-based variables, only TLG showed association with hormone receptor status (estrogen, p < 0.001; progesterone, p = 0.031), risk classification (p = 0.007), and grade (p = 0.036). Hormone receptor negative tumors, high-grade tumors, and high-risk phenotypes showed higher TLG values. No association was found between the metabolic variables and Ki-67, HER2, or p53 expression. Statistical differences were found between mean SUV-based variables and TLG obtained in the dual time point PET/CT. Most of the PET-derived parameters showed high association with molecular factors of breast cancer. However, dual time point PET/CT did not offer any added value over the single PET acquisition with respect to the relations with biological variables: PET-1 SUV and volume-based variables were predictors of those obtained in PET-2.
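The volume-based variables relate to the SUV measures as MTV = (number of segmented voxels) × voxel volume and TLG = MTV × SUVmean. A minimal sketch, assuming a fixed-fraction-of-SUVmax segmentation (41% is a common convention; the paper's exact three-dimensional method may differ):

```python
import numpy as np

def metabolic_variables(suv, voxel_volume_ml, threshold=0.41):
    """Compute SUVmax, SUVmean, MTV (ml) and TLG for a lesion, segmenting
    voxels above a fixed fraction of SUVmax (an assumed convention)."""
    suv = np.asarray(suv, dtype=float)
    suv_max = suv.max()
    mask = suv >= threshold * suv_max       # segmented lesion voxels
    suv_mean = suv[mask].mean()
    mtv_ml = mask.sum() * voxel_volume_ml   # metabolic tumor volume
    tlg = mtv_ml * suv_mean                 # total lesion glycolysis
    return suv_max, suv_mean, mtv_ml, tlg
```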
Time as a dimension of the sample design in national-scale forest inventories
Francis Roesch; Paul Van Deusen
2013-01-01
Historically, the goal of forest inventories has been to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The associated sample design was generally viewed as having selection probabilities based on land area observed at a discrete point in time. Time was not...
Coastal geology and recent origins for Sand Point, Lake Superior
Fisher, Timothy G.; Krantz, David E.; Castaneda, Mario R.; Loope, Walter L.; Jol, Harry M.; Goble, Ronald J.; Higley, Melinda C.; DeWald, Samantha; Hansen, Paul
2014-01-01
Sand Point is a small cuspate foreland located along the southeastern shore of Lake Superior within Pictured Rocks National Lakeshore near Munising, Michigan. Park managers’ concerns for the integrity of historic buildings at the northern periphery of the point during the rising lake levels in the mid-1980s greatly elevated the priority of research into the geomorphic history and age of Sand Point. To pursue this priority, we recovered sediment cores from four ponds on Sand Point, assessed subsurface stratigraphy onshore and offshore using geophysical techniques, and interpreted the chronology of events using radiocarbon and luminescence dating. Sand Point formed at the southwest edge of a subaqueous platform whose base is probably constructed of glacial diamicton and outwash. During the post-glacial Nipissing Transgression, the base was mantled with sand derived from erosion of adjacent sandstone cliffs. An aerial photograph time sequence, 1939–present, shows that the periphery of the platform has evolved considerably during historical time, influenced by transport of sediment into adjacent South Bay. Shallow seismic reflections suggest slump blocks along the leading edge of the platform. Light detection and ranging (LiDAR) and shallow seismic reflections to the northwest of the platform reveal large sand waves within a deep (12 m) channel produced by currents flowing episodically to the northeast into Lake Superior. Ground-penetrating radar profiles show transport and deposition of sand across the upper surface of the platform. Basal radiocarbon dates from ponds between subaerial beach ridges range in age from 540 to 910 cal yr B.P., suggesting that Sand Point became emergent during the last ~1000 years, upon the separation of Lake Superior from Lakes Huron and Michigan.
However, optically stimulated luminescence (OSL) ages from the beach ridges were two to three times as old as the radiocarbon ages, implying that emergence of Sand Point may have begun earlier, ~2000 years ago. The age discrepancy appears to be the result of incomplete bleaching of the quartz grains and an exceptionally low paleodose rate for the OSL samples. Given the available data, the younger ages from the radiocarbon analyses are preferred, but further work is necessary to test the two age models.
Motion data classification on the basis of dynamic time warping with a cloud point distance measure
NASA Astrophysics Data System (ADS)
Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad
2016-06-01
The paper deals with the problem of classification of model-free motion data. A nearest-neighbors classifier is proposed, based on comparisons performed with the Dynamic Time Warping transform using a cloud point distance measure. The classification utilizes both specific gait features, reflected by the movements of successive skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification challenge problem is considered. The motion capture database containing data of 30 different humans, collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory: the obtained accuracy of human recognition exceeds 90%. Moreover, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which results in reliable validation.
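The classifier's core comparison can be sketched as standard DTW in which the per-frame cost is a distance between two point clouds; the symmetric Chamfer-style measure below is one plausible choice, not necessarily the paper's exact definition:

```python
import numpy as np

def cloud_distance(a, b):
    """Symmetric mean nearest-neighbour distance between point clouds
    given as (n, 3) and (m, 3) arrays (an assumed cloud-point measure)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def dtw(seq_a, seq_b, dist=cloud_distance):
    """Classic dynamic-time-warping cost between two sequences of frames,
    each frame being a point cloud of skeleton-joint positions."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the cloud distance compares raw joint positions directly, no per-system calibration is needed, which matches the remark above.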
A 4DCT imaging-based breathing lung model with relative hysteresis
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long
2016-01-01
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811
Automated time activity classification based on global positioning system (GPS) tracking data
2011-01-01
Background Air pollution epidemiological studies are increasingly using global positioning system (GPS) to collect time-location data because they offer continuous tracking, high temporal resolution, and minimum reporting burden for participants. However, substantial uncertainties in the processing and classifying of raw GPS data create challenges for reliably characterizing time activity patterns. We developed and evaluated models to classify people's major time activity patterns from continuous GPS tracking data. Methods We developed and evaluated two automated models to classify major time activity patterns (i.e., indoor, outdoor static, outdoor walking, and in-vehicle travel) based on GPS time activity data collected under free living conditions for 47 participants (N = 131 person-days) from the Harbor Communities Time Location Study (HCTLS) in 2008 and supplemental GPS data collected from three UC-Irvine research staff (N = 21 person-days) in 2010. Time activity patterns used for model development were manually classified by research staff using information from participant GPS recordings, activity logs, and follow-up interviews. We evaluated two models: (a) a rule-based model that developed user-defined rules based on time, speed, and spatial location, and (b) a random forest decision tree model. Results Indoor, outdoor static, outdoor walking and in-vehicle travel activities accounted for 82.7%, 6.1%, 3.2% and 7.2% of manually-classified time activities in the HCTLS dataset, respectively. The rule-based model classified indoor and in-vehicle travel periods reasonably well (Indoor: sensitivity > 91%, specificity > 80%, and precision > 96%; in-vehicle travel: sensitivity > 71%, specificity > 99%, and precision > 88%), but the performance was moderate for outdoor static and outdoor walking predictions. No striking differences in performance were observed between the rule-based and the random forest models. 
The random forest model was fast and easy to execute, but was likely less robust than the rule-based model under the condition of biased or poor quality training data. Conclusions Our models can successfully identify indoor and in-vehicle travel points from the raw GPS data, but challenges remain in developing models to distinguish outdoor static points and walking. Accurate training data are essential in developing reliable models in classifying time-activity patterns. PMID:22082316
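A rule-based classifier of the kind evaluated above can be sketched as follows; the speed thresholds and the indoor heuristic are illustrative assumptions, not the study's calibrated rules, which also used spatial location:

```python
def classify_gps_point(speed_kmh, indoors_likely):
    """Assign one of the four time-activity classes to a single GPS point
    from its instantaneous speed and a crude indoor flag (e.g. degraded
    satellite signal at a known building).  Thresholds are illustrative."""
    if indoors_likely:
        return "indoor"
    if speed_kmh < 1.0:
        return "outdoor static"
    if speed_kmh < 6.0:
        return "outdoor walking"
    return "in-vehicle travel"
```

The hard part noted in the conclusions — separating outdoor static from walking — shows up here as the sensitivity of the 1 km/h boundary to GPS speed noise.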
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
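The random-frailty structure of the point-process intensity can be illustrated by simulation; a minimal sketch with a mean-one gamma frailty and a constant baseline rate (the time-varying covariates are omitted):

```python
import numpy as np

def simulate_frailty_poisson(base_rate, t_max, shape, rng):
    """Simulate one subject's event times from a mixed Poisson process:
    draw a gamma frailty with mean 1 (shape = rate = `shape`), then
    generate homogeneous Poisson events at intensity frailty * base_rate."""
    frailty = rng.gamma(shape, 1.0 / shape)
    rate = frailty * base_rate
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)   # i.i.d. inter-event waits
        if t > t_max:
            return frailty, np.array(times)
        times.append(t)
```

Larger `shape` means less between-subject variation in baseline event rates, which is what the proposed frailty-variance estimator quantifies.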
Automated planning of ablation targets in atrial fibrillation treatment
NASA Astrophysics Data System (ADS)
Keustermans, Johannes; De Buck, Stijn; Heidbüchel, Hein; Suetens, Paul
2011-03-01
Catheter based radio-frequency ablation is used as an invasive treatment of atrial fibrillation. This procedure is often guided by the use of 3D anatomical models obtained from CT, MRI or rotational angiography. During the intervention the operator accurately guides the catheter to prespecified target ablation lines. The planning stage, however, can be time consuming and operator dependent which is suboptimal both from a cost and health perspective. Therefore, we present a novel statistical model-based algorithm for locating ablation targets from 3D rotational angiography images. Based on a training data set of 20 patients, consisting of 3D rotational angiography images with 30 manually indicated ablation points, a statistical local appearance and shape model is built. The local appearance model is based on local image descriptors to capture the intensity patterns around each ablation point. The local shape model is constructed by embedding the ablation points in an undirected graph and imposing that each ablation point only interacts with its neighbors. Identifying the ablation points on a new 3D rotational angiography image is performed by proposing a set of possible candidate locations for each ablation point, as such, converting the problem into a labeling problem. The algorithm is validated using a leave-one-out-approach on the training data set, by computing the distance between the ablation lines obtained by the algorithm and the manually identified ablation points. The distance error is equal to 3.8+/-2.9 mm. As ablation lesion size is around 5-7 mm, automated planning of ablation targets by the presented approach is sufficiently accurate.
Data center thermal management
Hamann, Hendrik F.; Li, Hongfei
2016-02-09
Historical high-spatial-resolution temperature data and dynamic temperature sensor measurement data may be used to predict temperature. A first formulation may be derived based on the historical high-spatial-resolution temperature data for determining a temperature at any point in 3-dimensional space. The dynamic temperature sensor measurement data may be calibrated based on the historical high-spatial-resolution temperature data at a corresponding historical time. Sensor temperature data at a plurality of sensor locations may be predicted for a future time based on the calibrated dynamic temperature sensor measurement data. A three-dimensional temperature spatial distribution associated with the future time may be generated based on the forecasted sensor temperature data and the first formulation. The three-dimensional temperature spatial distribution associated with the future time may be projected to a two-dimensional temperature distribution, and temperature in the future time for a selected space location may be forecasted dynamically based on said two-dimensional temperature distribution.
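Evaluating a temperature at an arbitrary 3D point from a handful of (forecasted) sensor values requires some spatial formulation; the inverse-distance weighting below is a stand-in assumption, since the abstract does not specify the formulation derived from the historical data:

```python
import numpy as np

def idw_temperature(point, sensor_xyz, sensor_temps, p=2.0):
    """Estimate temperature at a 3D point by inverse-distance weighting
    of sensor temperatures (a simple stand-in spatial formulation)."""
    d = np.linalg.norm(sensor_xyz - point, axis=1)
    if np.any(d == 0.0):
        return float(sensor_temps[np.argmin(d)])   # exactly at a sensor
    w = 1.0 / d ** p
    return float(np.sum(w * sensor_temps) / np.sum(w))
```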
Suh, Sooyeon; Kim, Hyun; Yang, Hae-Chung; Cho, Eo Rin; Lee, Seung Ku; Shin, Chol
2013-01-01
Study Objective: This is a population-based longitudinal study that followed insomnia symptoms over a 6-year period in non-depressed individuals. The purpose of the study was to (1) investigate the longitudinal course of depression based on number of insomnia episodes; and (2) describe longitudinal associations between insomnia and depression, and insomnia and suicidal ideation. Design: Population-based longitudinal study. Setting: Community-based sample from the Korean Genome and Epidemiology Study (KoGES). Participants: 1,282 non-depressed individuals (44% male, mean age 52.3 ± 7.14 years). Measurements and Results: This study prospectively assessed insomnia, depression, and suicidal ideation at 4 time points. Individuals were classified into no insomnia (NI), single episode insomnia (SEI), and persistent insomnia (PI; insomnia at 2 or more time points) groups based on the number of times insomnia was indicated. Mixed effects modeling indicated that depression scores increased significantly faster in the PI group compared to the NI (P < 0.001) and SEI (P = 0.02) groups. Additionally, the PI group had significantly increased odds of depression compared to the NI or SEI (OR 2.44, P = 0.001) groups, with 18.7% meeting criteria for depression compared to the NI (5.3%) and SEI (11.6%) groups at end point. The PI group also had significantly increased odds of suicidal ideation compared to the NI or SEI (OR 1.86, P = 0.002) groups. Conclusions: Persistent insomnia significantly increases the rate at which depression occurs over time in non-depressed individuals, which ultimately leads to higher risk for depression. Additionally, having persistent insomnia also increased the risk of suicidal ideation. Citation: Suh S; Kim H; Yang HC; Cho ER; Lee SK; Shin C. Longitudinal course of depression scores with and without insomnia in non-depressed individuals: a 6-year follow-up longitudinal study in a Korean cohort. SLEEP 2013;36(3):369-376. PMID:23449814
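The NI/SEI/PI grouping is a simple count over the four assessments; a minimal sketch assuming per-time-point insomnia indicators:

```python
def insomnia_group(insomnia_flags):
    """Classify a participant as NI, SEI, or PI from per-time-point
    insomnia indicators (PI = insomnia at 2 or more time points)."""
    n = sum(bool(f) for f in insomnia_flags)
    if n == 0:
        return "NI"
    if n == 1:
        return "SEI"
    return "PI"
```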
Liu, Weihua; Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng
2014-01-01
In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, benefits its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis of a specific example. Results show that the order completion time of the LSSC can be delayed or moved ahead of schedule, but cannot be infinitely advanced or infinitely delayed. Obtaining the optimal comprehensive performance can be effective if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The relative degree of concern of the LSI for cost and service delivery punctuality leads not only to changes in the CODP but also to changes in the scheduling performance of the LSSC.
Liu, Zitao; Hauskrecht, Milos
2017-11-01
Building an accurate predictive model of clinical time series for a patient is critical for understanding the patient's condition, its dynamics, and optimal patient management. Unfortunately, this process is not straightforward. First, patient-specific variations are typically large, and population-based models derived or learned from many different patients are often unable to support accurate predictions for each individual patient. Moreover, the time series observed for one patient at any point in time may be too short and insufficient to learn a high-quality patient-specific model just from the patient's own data. To address these problems we propose, develop, and experiment with a new adaptive forecasting framework for building multivariate clinical time series models for a patient and for supporting patient-specific predictions. The framework relies on an adaptive model switching approach that at any point in time selects the most promising time series model out of a pool of many possible models and, consequently, combines the advantages of population, patient-specific, and short-term individualized predictive models. We demonstrate that the adaptive model switching framework is a very promising approach to supporting personalized time series prediction, and that it is able to outperform predictions based on pure population and patient-specific models, as well as other patient-specific model adaptation strategies.
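The switching idea can be sketched as follows; picking the model with the smallest mean absolute error over a recent window is one plausible selection criterion, not necessarily the paper's:

```python
def switch_model(models, history_x, history_y, predict_x, window=10):
    """Adaptive model switching: at prediction time, choose from the pool
    the model with the smallest mean absolute error over the last `window`
    observations, and use it for the next prediction."""
    def recent_mae(model):
        xs = history_x[-window:]
        ys = history_y[-window:]
        return sum(abs(model(x) - y) for x, y in zip(xs, ys)) / len(xs)
    best = min(models, key=recent_mae)
    return best(predict_x)
```

Early in a patient's record the pool would be dominated by population models; as patient data accumulates, patient-specific models start winning the selection, which is the blending behavior described above.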
Barak-Shinar, Deganit; Green, Lawrence J
2018-01-01
Objective: The aim of this study was to evaluate the safety and efficacy of an herbal and zinc pyrithione shampoo and a scalp lotion (Kamedis Derma-Scalp Dandruff Therapy, Kamedis Ltd., Tel Aviv, Israel) for the treatment of scalp seborrheic dermatitis and dandruff. Design: This was an interventional, open-label, safety and efficacy study. Setting: This open-label study was conducted at Consumer Product Testing Company Inc. in Fairfield, New Jersey. At the baseline visit (Day 0), an examination of the scalp was conducted by a board-certified dermatologist. The entire scalp was evaluated for evidence of seborrheic dermatitis using the Adherent Scalp Flaking Score with a 10-point scale. Only subjects with evidence of moderate-to-greater seborrheic dermatitis or moderate-to-greater dandruff were deemed qualified for inclusion in the study. Participants: Fifty subjects were recruited and included in the study. Measurements: Study subjects were evaluated by the same dermatologist for erythema and flaking at Days 0, 14, 28, and 42 using a five-point scale for each parameter. At each time point, a total severity score was calculated based on the findings of the evaluations. Following the scalp evaluation, each subject had a standardized digital photograph taken of his or her scalp. Each subject was also asked to answer a satisfaction questionnaire regarding the product treatment enhancement and characteristics. Results: A reduction in both parameters evaluated was seen at all time points. Statistical significance was achieved at each time point when compared with the baseline visit. In addition, the subjects expressed a high degree of satisfaction with the treatment. No adverse events were reported during this study. Conclusion: The study showed that the herbal zinc pyrithione shampoo and scalp lotion provided improvement in the main symptoms of seborrheic dermatitis.
Robust curb detection with fusion of 3D-Lidar and camera data.
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-05-21
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
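Linking per-row curb candidates into one optimal path by dynamic programming can be sketched as a Viterbi-style recursion; the linear jump penalty below is a simple stand-in for the paper's Markov-chain consistency model:

```python
import numpy as np

def best_curb_path(candidates, scores, jump_penalty=1.0):
    """Pick one candidate column per image row so that summed detection
    scores minus penalties on column jumps between neighbouring rows is
    maximized.  candidates[r] and scores[r] list the r-th row's options."""
    cost = [np.array(scores[0], dtype=float)]
    back = []
    for r in range(1, len(candidates)):
        prev = cost[-1]
        cur_cols = np.array(candidates[r], dtype=float)
        prev_cols = np.array(candidates[r - 1], dtype=float)
        # trans[i, j]: best value of ending at prev option j, moving to cur option i
        trans = prev[None, :] - jump_penalty * np.abs(cur_cols[:, None] - prev_cols[None, :])
        back.append(trans.argmax(axis=1))
        cost.append(np.array(scores[r], dtype=float) + trans.max(axis=1))
    path = [int(np.argmax(cost[-1]))]
    for b in reversed(back):          # backtrack the optimal choices
        path.append(int(b[path[-1]]))
    path.reverse()
    return [candidates[r][i] for r, i in enumerate(path)]
```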
Brownian dynamics of confined rigid bodies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delong, Steven; Balboa Usabiaga, Florencio; Donev, Aleksandar, E-mail: donev@courant.nyu.edu
2015-10-14
We introduce numerical methods for simulating the diffusive motion of rigid bodies of arbitrary shape immersed in a viscous fluid. We parameterize the orientation of the bodies using normalized quaternions, which are numerically robust, space efficient, and easy to accumulate. We construct a system of overdamped Langevin equations in the quaternion representation that accounts for hydrodynamic effects, preserves the unit-norm constraint on the quaternion, and is time reversible with respect to the Gibbs-Boltzmann distribution at equilibrium. We introduce two schemes for temporal integration of the overdamped Langevin equations of motion, one based on the Fixman midpoint method and the other based on a random finite difference approach, both of which ensure that the correct stochastic drift term is captured in a computationally efficient way. We study several examples of rigid colloidal particles diffusing near a no-slip boundary and demonstrate the importance of the choice of tracking point on the measured translational mean square displacement (MSD). We examine the average short-time as well as the long-time quasi-two-dimensional diffusion coefficient of a rigid particle sedimented near a bottom wall due to gravity. For several particle shapes, we find a choice of tracking point that makes the MSD essentially linear with time, allowing us to estimate the long-time diffusion coefficient efficiently using a Monte Carlo method. However, in general, such a special choice of tracking point does not exist, and numerical techniques for simulating long trajectories, such as the ones we introduce here, are necessary to study diffusion on long time scales.
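The unit-norm-preserving quaternion update can be sketched as follows; this shows only the deterministic rotation step with renormalization, not the stochastic drift terms of the Fixman-midpoint or random-finite-difference schemes described above:

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def rotate_quaternion(q, omega, dt):
    """Advance orientation quaternion q by angular velocity omega over dt,
    then renormalize so the unit-norm constraint survives round-off."""
    theta = np.linalg.norm(omega) * dt
    if theta == 0.0:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))
    out = quat_multiply(dq, q)
    return out / np.linalg.norm(out)
```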
Proton radiography and proton computed tomography based on time-resolved dose measurements
NASA Astrophysics Data System (ADS)
Testa, Mauro; Verburg, Joost M.; Rose, Mark; Min, Chul Hee; Tang, Shikui; Hassane Bentefour, El; Paganetti, Harald; Lu, Hsiao-Ming
2013-11-01
We present a proof-of-principle study of proton radiography and proton computed tomography (pCT) based on time-resolved dose measurements. We used a prototype two-dimensional diode-array detector capable of fast dose rate measurements to acquire proton radiographic images expressed directly in water equivalent path length (WEPL). The technique is based on the time dependence of the dose distribution delivered by a proton beam traversing a range modulator wheel in passive scattering proton therapy systems. The dose rate produced in the medium by such a system is periodic and has a unique pattern in time at each point along the beam path, and thus encodes the WEPL. By measuring the time-dose pattern at a point of interest, the WEPL to this point can be decoded. If one measures the time-dose patterns at points on a plane behind the patient for a beam with sufficient energy to penetrate the patient, the obtained 2D distribution of the WEPL forms an image. The technique requires only a 2D dosimeter array, and it uses only the clinical beam for a fraction of a second with negligible dose to the patient. We first evaluated the accuracy of the technique in determining the WEPL for static phantoms, aiming at beam range verification of the brain fields of medulloblastoma patients. Accurate beam ranges for these fields can significantly reduce the dose to the cranial skin of the patient and thus the risk of permanent alopecia. Second, we investigated the potential of the technique for real-time imaging of a moving phantom. Real-time tumor tracking by proton radiography could provide more accurate validation of tumor motion models, owing to the more sensitive dependence of the proton beam on tissue density compared with x-rays. Our radiographic technique is rapid (˜100 ms) and simultaneous over the whole field; it can image mobile tumors without the interplay effects that inherently challenge methods based on pencil beams.
Third, we present the reconstructed pCT images of a cylindrical phantom containing inserts of different materials. As with all pCT systems, the method illustrated in this work produces tomographic images that are potentially more accurate than x-ray CT in providing maps of proton relative stopping power (RSP) in the patient, without the need to convert x-ray Hounsfield units to proton RSP. All phantom tests produced reasonable results, given the currently limited spatial and time resolution of the prototype detector. The dose required to produce one radiographic image, with the current settings, is ˜0.7 cGy. Finally, we discuss a series of techniques to improve the resolution and accuracy of radiographic and tomographic images for the future development of a full-scale detector.
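The decoding idea described above can be sketched in a few lines. This is a hedged illustration only: the synthetic phase-shifted periodic traces and the WEPL-to-phase mapping below are stand-ins for real range-modulator calibration data, and the function names are hypothetical.

```python
import numpy as np

# Sketch of WEPL decoding: the periodic time-dose pattern at a point depends
# on the water-equivalent path length (WEPL) to that point, so a measured
# pattern can be decoded by matching it against a calibration library of
# patterns recorded at known WEPLs.

def decode_wepl(measured, library):
    """library: dict mapping WEPL (arbitrary units) -> reference time-dose
    trace. Returns the WEPL whose reference trace best matches `measured`
    by normalized correlation at zero lag."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))
    return max(library, key=lambda w: ncc(measured, library[w]))
```

Applying the decoder independently at every pixel of a 2D dosimeter array would yield the WEPL image described in the abstract.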
2014-01-01
Background: Plasmodium falciparum transmission has decreased significantly in Zambia in the last decade. Malaria transmission is influenced by environmental variables, and incorporating environmental variables in models of malaria transmission likely improves model fit and predicts probable trends in malaria disease. This work is based on the hypothesis that remotely sensed environmental factors, including nocturnal dew point, are associated with malaria transmission and sustain foci of transmission during the low-transmission season in the Southern Province of Zambia. Methods: Thirty-eight rural health centres in Southern Province, Zambia, were divided into three zones based on transmission patterns. Correlations between weekly malaria cases and remotely sensed nocturnal dew point, nocturnal land surface temperature, vegetation indices and rainfall were evaluated in time-series analyses from 2012 week 19 to 2013 week 36. Zonal as well as clinic-based multivariate autoregressive integrated moving average (ARIMAX) models incorporating environmental variables were developed to model transmission from 2011 week 19 to 2012 week 18 and to forecast transmission from 2013 week 37 to week 41. Results: During the dry, low-transmission season, significantly higher vegetation indices, nocturnal land surface temperature and nocturnal dew point were associated with the areas of higher transmission. Environmental variables improved the ARIMAX models. Dew point and normalized difference vegetation index were significant predictors and improved all zonal transmission models; in the high-transmission zone, this was also seen for land surface temperature. Clinic models were improved by adding dew point and land surface temperature as well as normalized difference vegetation index. The mean average error of prediction for the ARIMAX models ranged from 0.7 to 33.5%.
Forecasts of malaria incidence were valid for three out of five rural health centres, but results were poor at the zonal level. Conclusions: In this study, the fit of ARIMAX models improved when environmental variables were included. There is a significant association of remotely sensed nocturnal dew point with malaria transmission. Interestingly, dew point might be one of the factors sustaining malaria transmission in areas of general aridity during the dry season. PMID:24927747
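The modelling idea, an autoregressive case series augmented with an exogenous environmental regressor, can be sketched minimally. This is not the study's ARIMAX fit: a production analysis would use a full ARIMAX routine (for example, statsmodels' SARIMAX with differencing and MA terms); the toy below fits an AR(1)-with-exogenous-input model by ordinary least squares, and all names are illustrative.

```python
import numpy as np

# Minimal ARX(1) sketch: weekly cases regressed on last week's cases plus an
# exogenous environmental covariate (e.g. nocturnal dew point).

def fit_arx1(cases, exog):
    """Fit cases[t] = c + a*cases[t-1] + b*exog[t] + noise by OLS.
    Returns (c, a, b)."""
    y = cases[1:]
    X = np.column_stack([np.ones(len(y)), cases[:-1], exog[1:]])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return tuple(coef)

def forecast_arx1(coef, last_case, exog_future):
    """Iterated one-step-ahead forecasts given future exogenous values."""
    c, a, b = coef
    preds, prev = [], last_case
    for x in exog_future:
        prev = c + a * prev + b * x
        preds.append(prev)
    return preds
```

Forecasting with this model requires future values of the exogenous series, which is why remotely sensed covariates with short latency are attractive in practice.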
Nygren, David; Stoyanov, Cristina; Lewold, Clemens; Månsson, Fredrik; Miller, John; Kamanga, Aniset; Shiff, Clive J
2014-06-13
NASA Astrophysics Data System (ADS)
Ould Bachir, Tarek
The real-time simulation of electrical networks has attracted strong industrial interest in recent years, motivated by the substantial development cost reduction that such a prototyping approach can offer. Real-time simulation allows the progressive inclusion of real hardware during its development, allowing it to be tested under realistic conditions. However, CPU-based simulations suffer from certain limitations, such as the difficulty of reaching time-steps of a few microseconds, an important challenge posed by modern power converters. Hence, industrial practitioners have adopted the FPGA as a platform of choice for implementing calculation engines dedicated to the rapid real-time simulation of electrical networks. The reconfigurable technology broke the 5 kHz switching-frequency barrier characteristic of CPU-based simulations. Moreover, FPGA-based real-time simulation offers many advantages, including the reduced latency of the simulation loop obtained thanks to direct access to sensors and actuators. The fixed-point format is paradigmatic in FPGA-based digital signal processing. However, the format imposes a time penalty on the development process, since the designer has to assess the required precision of all model variables. This fact has prompted an important research effort on the use of the floating-point format for the simulation of electrical networks. One of the main challenges in the use of the floating-point format is the long latency of the elementary arithmetic operators, particularly when an adder is used as an accumulator, an important building block for the implementation of integration rules such as the trapezoidal method. Hence, single-cycle floating-point accumulation forms the core of this research work. Our results help build such operators as accumulators, multiply-accumulators (MACs), and dot-product (DP) operators. These operators play a key role in the implementation of the proposed calculation engines.
This thesis therefore contributes to the realm of FPGA-based real-time simulation in several ways. The research work proposes a new summation algorithm, which is a generalization of the so-called self-alignment technique; the new formulation is broader and simpler in both its expression and its hardware implementation. Our research helps formulate criteria that guarantee good accuracy, the criteria being established on theoretical as well as empirical grounds. Moreover, the thesis offers a comprehensive analysis of the use of the redundant high-radix carry-save (HRCS) format, which is used to perform rapid additions of large mantissas. Two new HRCS operators are also proposed, namely an endomorphic adder and an HRCS-to-conventional converter. Once the means to single-cycle accumulation is defined as a combination of the self-alignment technique and the HRCS format, the research focuses on the FPGA implementation of SIMD calculation engines using parallel floating-point MACs or DPs. The proposed operators are characterized by low latencies, allowing the engines to reach very low time-steps. The document finally discusses the modelling of power electronic circuits, and concludes with the presentation of a versatile calculation engine capable of simulating power converters with arbitrary topologies and up to 24 switches, while achieving time-steps below 1 μs and allowing switching frequencies in the range of tens of kilohertz. The latter realization has led to the commercialization of a product by our industrial partner.
Partitioning of functional gene expression data using principal points.
Kim, Jaehee; Kim, Haseong
2017-10-12
DNA microarrays offer motivation and hope for the simultaneous study of variations in multiple genes. Gene expression is a temporal process that allows variations in expression levels with a characterized gene function over a period of time. Temporal gene expression curves can be treated as functional data, since they can be considered independent realizations of a stochastic process. This process requires appropriate models to identify patterns of gene functions. Partitioning the functional data can find homogeneous subgroups among the massive numbers of genes within the inherent biological networks, and can therefore be a useful technique for the analysis of time-course gene expression data. We propose a new self-consistent partitioning method for the functional coefficients of individual expression profiles based on an orthonormal basis system. A principal-points-based functional partitioning method is proposed for time-course gene expression data. The method explores the relationships between genes using Legendre coefficients as principal points to extract the features of gene functions. Our proposed method provides high connectedness after clustering for simulated data and finds significant subsets of genes with increased connectivity. Our approach has the comparative advantages that fewer coefficients are used from the functional data and that the principal points are self-consistent for partitioning. As real data applications, we find partitioned genes in budding yeast and Escherichia coli gene expression data. The proposed method benefits from the use of principal points, dimension reduction, and the choice of an orthogonal basis system, and provides appropriately connected genes in the resulting subsets. We illustrate our method by applying it to cell-cycle-regulated time-course yeast genes and E. coli genes.
The proposed method is able to identify highly connected genes and to explore the complex dynamics of biological systems in functional genomics.
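The feature-extraction step can be sketched as follows. This is a hedged illustration of the pipeline only: each gene's time course is reduced to a few Legendre coefficients (the orthogonal-basis step the abstract describes), and a plain k-means with deterministic farthest-point seeding stands in for the paper's self-consistent principal-points partitioning; all function names are illustrative.

```python
import numpy as np

# Reduce each time course to Legendre coefficients, then cluster genes in
# the low-dimensional coefficient space.

def legendre_features(times, curves, degree=3):
    """Project each row of `curves` (genes x time points) onto Legendre
    polynomials, with `times` rescaled to [-1, 1]."""
    x = 2 * (times - times.min()) / (times.max() - times.min()) - 1
    return np.array([np.polynomial.legendre.legfit(x, c, degree) for c in curves])

def kmeans(features, k, iters=50):
    # deterministic farthest-point initialization
    centers = [features[0]]
    for _ in range(k - 1):
        d = np.min([((features - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(features[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(0)
    return labels
```

Using only degree+1 coefficients per gene reflects the abstract's point that fewer coefficients are needed than raw time points.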
Quasicrystals and Quantum Computing
NASA Astrophysics Data System (ADS)
Berezin, Alexander A.
1997-03-01
In quantum (Q) computing, qubits form Q-superpositions for macroscopic times. One scheme for ultra-fast Q-computing can be based on quasicrystals. Ultrafast processing in Q-coherent structures (and the very existence of durable Q-superpositions) may be a 'consequence' of the presence of the entire manifold of integer arithmetic (A0, the aleph-naught of Georg Cantor) at any 4-point of space-time and, furthermore, at any point of any multidimensional phase space of (any) N-particle Q-system. The latter, apart from quasicrystals, can include dispersed and/or diluted systems (Berezin, 1994). In such systems, such alleged centrepieces of Q-computing as the ability for fast factorization of long integers can be processed by sheer virtue of the fact that the entire infinite pattern of prime numbers is instantaneously available as a 'free lunch' at any instant/point. The infinitely rich pattern of A0 (including the pattern of primes and almost-primes) acts as an 'independent' physical effect which directly generates Q-dynamics (and the physical world) 'out of nothing'. Thus Q-nonlocality can be ultimately based on instantaneous interconnectedness through the ever-the-same structure of A0 (the 'Platonic field' of integers).
Capturing rogue waves by multi-point statistics
NASA Astrophysics Data System (ADS)
Hadjihosseini, A.; Wächter, Matthias; Hoffmann, N. P.; Peinke, J.
2016-01-01
As an example of a complex system with extreme events, we investigate ocean wave states exhibiting rogue waves. We present a statistical method of data analysis based on multi-point statistics which, for the first time, allows extreme rogue wave events to be captured in a statistically satisfactory manner. The key to the success of the approach is mapping the complexity of multi-point data onto the statistics of hierarchically ordered height increments for different time scales, for which we can show that a stochastic cascade process with Markov properties is governed by a Fokker-Planck equation. Conditional probabilities, as well as the Fokker-Planck equation itself, can be estimated directly from the available observational data. With this stochastic description, surrogate data sets can in turn be generated, which makes it possible to work out arbitrary statistical features of the complex sea state in general, and of extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting the occurrence of individual rogue waves based on precursory dynamics.
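The estimation idea, obtaining the drift and diffusion terms of a Fokker-Planck description directly from data as conditional moments of increments (Kramers-Moyal coefficients), can be sketched compactly. This is a hedged illustration: it is demonstrated on a synthetic Ornstein-Uhlenbeck series rather than on wave-height increments across scales as in the multi-point analysis, and the binning scheme is a simplification.

```python
import numpy as np

# Estimate D1(x) = <dx | x>/dt and D2(x) = <dx^2 | x>/(2 dt) by binning the
# series in x and averaging increments within each bin.

def kramers_moyal(x, dt, nbins=20, min_count=100):
    dx = np.diff(x)
    xs = x[:-1]
    edges = np.linspace(xs.min(), xs.max(), nbins + 1)
    idx = np.digitize(xs, edges) - 1
    centers, d1, d2 = [], [], []
    for b in range(nbins):
        m = idx == b
        if m.sum() < min_count:
            continue  # skip sparsely populated bins
        centers.append(0.5 * (edges[b] + edges[b + 1]))
        d1.append(dx[m].mean() / dt)
        d2.append((dx[m] ** 2).mean() / (2 * dt))
    return np.array(centers), np.array(d1), np.array(d2)
```

For an Ornstein-Uhlenbeck process dx = -x dt + dW, the recovered drift should be approximately -x and the diffusion approximately 1/2, which the test below checks.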
Feature-based registration of historical aerial images by Area Minimization
NASA Astrophysics Data System (ADS)
Nagarajan, Sudhagar; Schenk, Toni
2016-06-01
The registration of historical images plays a significant role in assessing changes in land topography over time. By comparing historical aerial images with recent data, geometric changes that have taken place over the years can be quantified. However, the lack of ground control information and precise camera parameters has limited scientists' ability to reliably incorporate historical images into change detection studies. Another limitation is the determination of identical points between recent and historical images, which has proven to be a cumbersome task due to continuous land cover changes. Our research demonstrates a method of registering historical images using Time Invariant Line (TIL) features. TIL features are different representations of the same line features in multi-temporal data without explicit point-to-point or straight line-to-straight line correspondence. We successfully determined the exterior orientation of historical images by minimizing the area formed between corresponding TIL features in recent and historical images. We then tested the feasibility of the approach with synthetic and real data and analyzed the results. Based on our analysis, this method shows promise for long-term 3D change detection studies.
Performance analysis of a dual-tree algorithm for computing spatial distance histograms
Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni
2011-01-01
Many scientific and engineering fields produce large volumes of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges on database system design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytic, the spatial distance histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, and thus require less time than the brute-force approach in which all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
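The batching idea that the analysis studies can be sketched with a single grid level rather than the recursive dual-tree traversal. This is an illustrative simplification, not the analyzed algorithm: if every possible distance between two cells falls inside one histogram bucket, all |A|*|B| pairs are resolved at once without computing individual distances; otherwise that cell pair falls back to brute force.

```python
import numpy as np

def sdh_bruteforce(pts, width, nbuckets):
    """Reference SDH: histogram of all pairwise distances."""
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(pts), k=1)
    return np.histogram(d[iu], bins=nbuckets, range=(0, width * nbuckets))[0]

def sdh_grid(pts, width, nbuckets, g=4):
    """One-level cell batching over [0, 1)^2."""
    hist = np.zeros(nbuckets, dtype=np.int64)
    cell_of = np.minimum((pts * g).astype(int), g - 1)
    cells = {}
    for p, c in zip(pts, map(tuple, cell_of)):
        cells.setdefault(c, []).append(p)
    keys = sorted(cells)
    for i, a in enumerate(keys):
        for b in keys[i:]:
            pa, pb = np.array(cells[a]), np.array(cells[b])
            if a == b:
                if len(pa) > 1:  # intra-cell pairs: brute force
                    hist += sdh_bruteforce(pa, width, nbuckets)
                continue
            # distance bounds between the two cells' bounding boxes
            delta = np.abs(np.array(a) - np.array(b))
            dmin = np.linalg.norm(np.maximum(delta - 1, 0) / g)
            dmax = np.linalg.norm((delta + 1) / g)
            if int(dmin / width) == int(dmax / width):
                hist[int(dmin / width)] += len(pa) * len(pb)  # whole batch
            else:
                d = np.sqrt(((pa[:, None] - pb[None, :]) ** 2).sum(-1))
                hist += np.histogram(d, bins=nbuckets,
                                     range=(0, width * nbuckets))[0]
    return hist
```

The recursive algorithms analyzed in the paper apply the same resolve-or-recurse test at every tree level, which is what drives the exponential decay in unresolved distances.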
Rojo-Manaute, Jose Manuel; Capa-Grasa, Alberto; Del Cerro-Gutiérrez, Miguel; Martínez, Manuel Villanueva; Chana-Rodríguez, Francisco; Martín, Javier Vaquero
2012-03-01
Trigger digit surgery can be performed by an open approach using classic open surgery, by a wide-awake approach, or by sonographically guided first annular pulley release in day surgery and office-based ambulatory settings. Our goal was to perform a turnover and economic analysis of 3 surgical models. Two studies were conducted. The first was a turnover analysis of 57 patients allocated 4:4:1 into the surgical models: sonographically guided-office-based, classic open-day surgery, and wide-awake-office-based. Regression analysis for the turnover time was monitored for assessing stability (R(2) < .26). Second, on the basis of turnover times and hospital tariff revenues, we calculated the total costs, income to cost ratio, opportunity cost, true cost, true net income (primary variable), break-even points for sonographically guided fixed costs, and 1-way analysis for identifying thresholds among alternatives. Thirteen sonographically guided-office-based patients were withdrawn because of a learning curve influence. The wide-awake (n = 6) and classic (n = 26) models were compared to the last 25% of the sonographically guided group (n = 12), which showed significantly less mean turnover times, income to cost ratios 2.52 and 10.9 times larger, and true costs 75.48 and 20.92 times lower, respectively. A true net income break-even point happened after 19.78 sonographically guided-office-based procedures. Sensitivity analysis showed a threshold between wide-awake and last 25% sonographically guided true costs if the last 25% sonographically guided turnover times reached 65.23 and 27.81 minutes, respectively. However, this trial was underpowered. This trial comparing surgical models was underpowered and is inconclusive on turnover times; however, the sonographically guided-office-based approach showed shorter turnover times and better economic results with a quick recoup of the costs of sonographically assisted surgery.
Zhou, Jian-Li; Xing, Jun; Liu, Cong-Hui; Wen, Jie; Zhao, Nan-Nan; Kang, Yuan-Yuan; Shao, Ting
2018-05-01
With the improvement of living standards, the incidence of gestational diabetes mellitus (GDM) is increasing every year. We observed the effects of an abnormal 75 g oral glucose tolerance test (OGTT) at different time points on neonatal complications and neurobehavioral development in GDM. A total of 144 newborns whose mothers were diagnosed with GDM and received prenatal examination and childbirth in our hospital from October 2015 to April 2016 were observed in this study. Pregnant women underwent the 75 g OGTT, and the blood glucose level was recorded on an empty stomach as well as 1 and 2 hours postprandially. Based on the number of abnormal 75 g OGTT time points, the pregnant women were divided into group 1 (OGTT abnormality at 1 time point), group 2 (OGTT abnormality at 2 time points), and group 3 (OGTT abnormality at 3 time points). Neonatal behavioral neurological assessment (NBNA) was performed on the 3 groups, respectively. In the total score of the NBNA, there was a significant difference among the 3 groups (F = 17.120, P = .000), and there were significant differences between the 3 groups (all P < .05). The incidence of neonatal hypoglycemia was significantly lower in groups 1 and 2 than in group 3, and the incidence of macrosomia was significantly lower in group 1 than in groups 2 and 3 (all P < .05). In the 144 newborns, NBNA scores were significantly lower in the newborns with hypoglycemia than in those with a normal blood glucose level, and in those with macrosomia than in those with normal body weight (all P < .01). With the increase of abnormal OGTT time points in pregnant women with GDM, the incidences of neonatal hypoglycemia and macrosomia rise and the neonatal NBNA score decreases. Therefore, reasonable measures should be adopted as early as possible to prevent a poor prognosis in pregnant women with GDM.
ERIC Educational Resources Information Center
Bohanon, Hank; Fenning, Pamela; Hicks, Kira; Weber, Stacey; Thier, Kimberly; Aikins, Brigit; Morrissey, Kelly; Briggs, Alissa; Bartucci, Gina; McArdle, Lauren; Hoeper, Lisa; Irvin, Larry
2012-01-01
The purpose of this case study was to expand the literature base regarding the application of high school schoolwide positive behavior support in an urban setting for practitioners and policymakers to address behavior issues. In addition, the study describes the use of the Change Point Test as a method for analyzing time series data that are…
Advanced Mobility Handover for Mobile IPv6 Based Wireless Networks
Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime
2014-01-01
We propose an Advanced Mobility Handover (AMH) scheme in this paper for seamless mobility in MIPv6-based wireless networks. In the proposed scheme, the mobile node (MN) utilizes a unique home IPv6 address, developed to maintain communication with other corresponding nodes without a care-of address during the roaming process. During the first round of the AMH process, the home agent (HA) uniquely identifies the IPv6 address of each MN using the developed MN-ID field as a global permanent identifier. Moreover, a temporary MN-ID is generated by the access point (AP) each time an MN associates with a particular AP, and is temporarily saved in a developed table inside the AP. When the AMH scheme is employed, the handover process in the network layer is performed prior to its default time; that is, the network-layer mobility handover is initiated by sending the developed AMH trigger message to the next access point. Thus, a mobile node keeps communicating with the current access point while the network-layer handover is executed by the next access point. The mathematical analyses and simulation results show that the proposed scheme performs better than the existing approaches. PMID:25614890
An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base
NASA Astrophysics Data System (ADS)
Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi
The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method has a truncation error, approximated solutions have been guaranteed by error bounds. However, the numerical computation of such bounds is very time-consuming compared with solving the HB equation itself. This paper proposes an algebraic representation of the error bound using a Gröbner base. The algebraic representation makes it possible to decrease the computational cost of the error bound considerably. Moreover, using singular points of the algebraic representation, we can obtain accurate break points of the error bound from their collisions.
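The algebraic machinery named in the abstract can be exercised on a toy example. The two-polynomial system below is a hypothetical stand-in for a truncated HB system in two Fourier coefficients, not the paper's equations; the point is only that a Gröbner basis puts such a polynomial system into a triangular form amenable to exact manipulation.

```python
import sympy as sp

# Toy stand-in for HB equations in Fourier coefficients a, b.
a, b = sp.symbols('a b')
hb_system = [a**2 + b**2 - 4, a*b - 1]

# Lexicographic Groebner basis: eliminates a, leaving a univariate
# polynomial in b whose roots give the candidate periodic solutions.
basis = sp.groebner(hb_system, a, b, order='lex')
```

Once the basis is in hand, error-bound polynomials like those in the paper can be reduced against it exactly, instead of being evaluated numerically.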
Performance Evaluation of Remote Memory Access (RMA) Programming on Shared Memory Parallel Computers
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele; Biegel, Bryan A. (Technical Monitor)
2002-01-01
The purpose of this study is to evaluate the feasibility of remote memory access (RMA) programming on shared memory parallel computers. We discuss different RMA based implementations of selected CFD application benchmark kernels and compare them to corresponding message passing based codes. For the message-passing implementation we use MPI point-to-point and global communication routines. For the RMA based approach we consider two different libraries supporting this programming model. One is a shared memory parallelization library (SMPlib) developed at NASA Ames, the other is the MPI-2 extensions to the MPI Standard. We give timing comparisons for the different implementation strategies and discuss the performance.
Goldfield, Eugene C; Buonomo, Carlo; Fletcher, Kara; Perez, Jennifer; Margetts, Stacey; Hansen, Anne; Smith, Vincent; Ringer, Steven; Richardson, Michael J; Wolff, Peter H
2010-04-01
Coordination between movements of individual tongue points, and between soft palate elevation and tongue movements, were examined in 12 prematurely born infants referred from hospital NICUs for videofluoroscopic swallow study (VFSS) due to poor oral feeding and suspicion of aspiration. Detailed post-evaluation kinematic analysis was conducted by digitizing images of a lateral view of digitally superimposed points on the tongue and soft palate. The primary measure of coordination was continuous relative phase of the time series created by movements of points on the tongue and soft palate over successive frames. Three points on the tongue (anterior, medial, and posterior) were organized around a stable in-phase pattern, with a phase lag that implied an anterior to posterior direction of motion. Coordination between a tongue point and a point on the soft palate during lowering and elevation was close to anti-phase at initiation of the pharyngeal swallow. These findings suggest that anti-phase coordination between tongue and soft palate may reflect the process by which the tongue is timed to pump liquid by moving it into an enclosed space, compressing it, and allowing it to leave by a specific route through the pharynx. Copyright 2009 Elsevier Inc. All rights reserved.
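The coordination measure described above, continuous relative phase, admits a short sketch. This is a generic illustration rather than the study's exact processing chain: each movement time series is converted to an analytic signal (here via an FFT-based Hilbert transform), instantaneous phases are extracted, and their difference gives the relative phase, with values near 0 rad indicating in-phase and values near pi rad anti-phase coordination.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero out negative frequencies."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(X * h)

def relative_phase(x, y):
    """Continuous relative phase (radians, wrapped to [-pi, pi])."""
    px = np.angle(analytic_signal(x))
    py = np.angle(analytic_signal(y))
    return np.angle(np.exp(1j * (px - py)))
```

In practice the kinematic series would be detrended and band-limited first; that preprocessing is omitted here.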
A Semiparametric Change-Point Regression Model for Longitudinal Observations.
Xing, Haipeng; Ying, Zhiliang
2012-12-01
Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when the experimental environment undergoes abrupt changes, or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in the regression coefficients. In this connection, we propose a semiparametric change-point regression model in which the error process (stochastic component) is nonparametric, the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of the change-points are unknown and need to be estimated. We further develop an estimation procedure which combines recent advances in semiparametric analysis based on counting process arguments with multiple change-point inference, and discuss its large-sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.
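The core idea, a regression coefficient that jumps at an unknown change-point, can be shown in a deliberately simplified form. This toy handles one change-point with a fully parametric least-squares fit, whereas the paper's model is semiparametric with multiple unknown change-points; the scan-and-refit structure is the shared ingredient.

```python
import numpy as np

# Scan candidate change times, fit separate slopes before and after,
# and pick the split minimizing total squared error.

def fit_one_changepoint(x, y, min_seg=5):
    """Model: y = b1*x before the change, y = b2*x after.
    Returns (change_index, b1, b2)."""
    best = None
    for k in range(min_seg, len(x) - min_seg):
        b1 = (x[:k] @ y[:k]) / (x[:k] @ x[:k])
        b2 = (x[k:] @ y[k:]) / (x[k:] @ x[k:])
        sse = ((y[:k] - b1 * x[:k]) ** 2).sum() + ((y[k:] - b2 * x[k:]) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, k, b1, b2)
    return best[1], best[2], best[3]
```

Multiple change-points are typically handled by dynamic programming or binary segmentation over this same single-split criterion.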
Höhne, Marlene; Jahanbekam, Amirhossein; Bauckhage, Christian; Axmacher, Nikolai; Fell, Juergen
2016-10-01
Mediotemporal EEG characteristics are closely related to long-term memory formation. It has been reported that rhinal and hippocampal EEG measures reflecting the stability of phases across trials are better suited to distinguish subsequently remembered from forgotten trials than event-related potentials or amplitude-based measures. Theoretical models suggest that the phase of EEG oscillations reflects neural excitability and influences cellular plasticity. However, while previous studies have shown that the stability of phase values across trials is indeed a relevant predictor of subsequent memory performance, the effect of absolute single-trial phase values has been little explored. Here, we reanalyzed intracranial EEG recordings from the mediotemporal lobe of 27 epilepsy patients performing a continuous word recognition paradigm. Two-class classification using a support vector machine was performed to predict subsequently remembered vs. forgotten trials based on individually selected frequencies and time points. We demonstrate that it is possible to successfully predict single-trial memory formation in the majority of patients (23 out of 27) based on only three single-trial phase values given by a rhinal phase, a hippocampal phase, and a rhinal-hippocampal phase difference. Overall classification accuracy across all subjects was 69.2% choosing frequencies from the range between 0.5 and 50Hz and time points from the interval between -0.5s and 2s. For 19 patients, above chance prediction of subsequent memory was possible even when choosing only time points from the prestimulus interval (overall accuracy: 65.2%). Furthermore, prediction accuracies based on single-trial phase surpassed those based on single-trial power. Our results confirm the functional relevance of mediotemporal EEG phase for long-term memory operations and suggest that phase information may be utilized for memory enhancement applications based on deep brain stimulation. 
Toward an integrated ice core chronology using relative and orbital tie-points
NASA Astrophysics Data System (ADS)
Bazin, L.; Landais, A.; Lemieux-Dudon, B.; Toyé Mahamadou Kele, H.; Blunier, T.; Capron, E.; Chappellaz, J.; Fischer, H.; Leuenberger, M.; Lipenkov, V.; Loutre, M.-F.; Martinerie, P.; Parrenin, F.; Prié, F.; Raynaud, D.; Veres, D.; Wolff, E.
2012-04-01
Precise ice core chronologies are essential to better understand the mechanisms linking climate change to orbital and greenhouse gas concentration forcing. A tool for ice core dating (DATICE, developed by Lemieux-Dudon et al., 2010) makes it possible to generate a common time scale integrating relative and absolute dating constraints on different ice cores, using an inverse method. Nevertheless, this method has only been applied to a four-ice-core scenario and to the 0-50 kyr time period. Here, we present the basis for an extension of this work back to 800 ka using (1) a compilation of published and new relative and orbital tie-points obtained from measurements of air trapped in ice cores and (2) an adaptation of the DATICE inputs to 5 ice cores for the last 800 ka. We first present new measurements of δ18Oatm and δO2/N2 on the Talos Dome and EPICA Dome C (EDC) ice cores with a particular focus on Marine Isotopic Stages (MIS) 5 and 11. Then, we show two tie-point compilations. The first is based on new and published CH4 and δ18Oatm measurements on 5 ice cores (NorthGRIP, EPICA Dronning Maud Land, EDC, Talos Dome and Vostok) and produces a table of relative gas tie-points over the last 400 ka. The second is based on new and published records of δO2/N2, δ18Oatm and air content and provides a table of orbital tie-points over the last 800 ka. Finally, we integrate the different dating constraints presented above in the DATICE tool, adapted to 5 ice cores to cover the last 800 ka, and show how these constraints compare with the established gas chronologies of each ice core.
Monitoring the Level of Students' GPAs over Time
ERIC Educational Resources Information Center
Bakir, Saad T.; McNeal, Bob
2010-01-01
A nonparametric (or distribution-free) statistical quality control chart is used to monitor the cumulative grade point averages (GPAs) of students over time. The chart is designed to detect any statistically significant positive or negative shifts in student GPAs from a desired target level. This nonparametric control chart is based on the…
Dating Violence, Bullying, and Sexual Harassment: Longitudinal Profiles and Transitions over Time
ERIC Educational Resources Information Center
Miller, Shari; Williams, Jason; Cutbush, Stacey; Gibbs, Deborah; Clinton-Sherrod, Monique; Jones, Sarah
2013-01-01
Although there is growing recognition of the problem of dating violence, little is known about how it unfolds among young adolescents who are just beginning to date. This study examined classes (subgroups) and transitions between classes over three time points based on dating violence, bullying, and sexual harassment perpetration and victimization…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-18
... labor-intensive events that extend over a period of time, such as facilitating a series of conference... information, and implementing activities that are supported by scientifically based research. Priority: In... for resolving, at the earliest point in time, disputes with those who provide services to children...
Gendered Pathways in School Burnout among Adolescents
ERIC Educational Resources Information Center
Salmela-Aro, Katariina; Tynkkynen, Lotta
2012-01-01
The aim of this study is to examine differences in student burnout by gender, time status with two time points before and after an educational transition, and educational track (academic vs. vocational). The definition of burnout is based on three components: exhaustion due to school demands, a disengaged and cynical attitude toward school, and…
Vehicle Routing Problem Using Genetic Algorithm with Multi Compartment on Vegetable Distribution
NASA Astrophysics Data System (ADS)
Kurnia, Hari; Gustri Wahyuni, Elyza; Cergas Pembrani, Elang; Gardini, Syifa Tri; Kurnia Aditya, Silfa
2018-03-01
A recurring problem for industries that manage and distribute vegetables is how to distribute them so that their quality is maintained properly. The issues encountered include optimal route selection and minimal travel time, the so-called Traveling Salesman Problem (TSP). These problems can be modeled as a Vehicle Routing Problem (VRP) solved with a genetic algorithm using ranking-based selection, order-based crossover, and order-based mutation on the selected chromosomes. This study is limited to 20 market points, 2 warehouse points (multi-compartment), and 5 vehicles. For one distribution run, a vehicle can only deliver to 4 market points from 1 particular warehouse, and can only accommodate a capacity of 100 kg.
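The order-based crossover used on routing chromosomes can be illustrated with a short sketch. The abstract does not give the exact operator, so this is the standard order crossover (OX) with hypothetical cut points and city labels, not the authors' implementation:

```python
def order_crossover(parent1, parent2, cut1, cut2):
    """Order crossover (OX): the child inherits the slice parent1[cut1:cut2],
    then the remaining positions are filled with parent2's cities in the
    order they appear in parent2."""
    size = len(parent1)
    child = [None] * size
    child[cut1:cut2] = parent1[cut1:cut2]
    kept = set(child[cut1:cut2])
    fill = [city for city in parent2 if city not in kept]
    idx = 0
    for i in range(size):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child

# Hypothetical routes over 8 market points
p1 = [0, 1, 2, 3, 4, 5, 6, 7]
p2 = [3, 7, 0, 6, 2, 5, 1, 4]
child = order_crossover(p1, p2, 2, 5)  # → [7, 0, 2, 3, 4, 6, 5, 1]
```

Offspring remain valid permutations of the market points, which is why order-based operators are favored for route chromosomes over naive bit-level crossover.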
Evaluation of the leap motion controller as a new contact-free pointing device.
Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard
2014-12-24
This paper presents a Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. Up to this point, there has hardly been any systematic evaluation of this new system available. With an error rate of 7.8% for the LMC and 2.8% for the mouse device, movement times twice as large as for a mouse device and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC.
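A Fitts' law analysis of this kind reduces each pointing condition to an index of difficulty and a throughput. As a hedged illustration (the target distance, width, and movement times below are hypothetical, not the study's data), using the Shannon formulation:

```python
import math

def index_of_difficulty(distance, width):
    # Shannon formulation of Fitts' index of difficulty, in bits
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    # Bits per second; movement_time in seconds
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical task: a 512 px reach to a 32 px wide target
id_bits = index_of_difficulty(512, 32)   # log2(17) ≈ 4.09 bits
tp_mouse = throughput(512, 32, 0.8)
tp_lmc = throughput(512, 32, 1.6)        # movement times roughly twice as long
```

With movement times twice as long at the same index of difficulty, the contact-free device's throughput is exactly half that of the mouse, which is the sense in which the paper reports limited pointing performance.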
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
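The online interpolation step can be caricatured with a minimal sketch. The paper interpolates pre-computed reduced operators on matrix manifolds after consistency-enforcing transformations; the sketch below replaces that machinery with plain linear interpolation between two stored operators, purely to show the offline/online split (all matrices and parameter values are invented):

```python
import numpy as np

# Offline phase: reduced operators pre-computed at sampled parameter points.
# In the paper's framework these would first be transformed into a consistent
# set of generalized coordinates; here we assume that has already been done.
mu_samples = [0.0, 1.0]
A_samples = [np.array([[-1.0, 0.2], [0.0, -2.0]]),
             np.array([[-1.5, 0.4], [0.1, -2.5]])]

def interpolate_rom(mu):
    """Online phase: build a reduced operator at an unsampled point mu by
    interpolating the stored operators (linear interpolation as a stand-in
    for the manifold-based interpolation used in the paper)."""
    t = (mu - mu_samples[0]) / (mu_samples[1] - mu_samples[0])
    return (1 - t) * A_samples[0] + t * A_samples[1]

A_query = interpolate_rom(0.5)
```

The point of the design is that the online cost depends only on the reduced dimensions, never on the underlying meshes, which is what makes real-time and mobile deployment feasible.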
Two cloud-based cues for estimating scene structure and camera calibration.
Jacobs, Nathan; Abrams, Austin; Pless, Robert
2013-10-01
We describe algorithms that use cloud shadows as a form of stochastically structured light to support 3D scene geometry estimation. Taking video captured from a static outdoor camera as input, we use the relationship of the time series of intensity values between pairs of pixels as the primary input to our algorithms. We describe two cues that relate the 3D distance between a pair of points to the pair of intensity time series. The first cue results from the fact that two pixels that are nearby in the world are more likely to be under a cloud at the same time than two distant points. We describe methods for using this cue to estimate focal length and scene structure. The second cue is based on the motion of cloud shadows across the scene; this cue results in a set of linear constraints on scene structure. These constraints have an inherent ambiguity, which we show how to overcome by combining the cloud motion cue with the spatial cue. We evaluate our method on several time lapses of real outdoor scenes.
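The first cue, that nearby points are shadowed by the same cloud more often than distant points, can be reproduced with synthetic time series. This is a hedged toy model (shadow probability, attenuation, and noise levels are invented), not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000

# Synthetic cloud cover: two nearby pixels share the same shadow state,
# a distant pixel follows an independent shadow pattern.
cloud = rng.random(T) < 0.3
near_a = np.where(cloud, 0.4, 1.0) + 0.05 * rng.standard_normal(T)
near_b = np.where(cloud, 0.4, 1.0) + 0.05 * rng.standard_normal(T)

cloud_far = rng.random(T) < 0.3
far = np.where(cloud_far, 0.4, 1.0) + 0.05 * rng.standard_normal(T)

# The correlation between intensity time series acts as a proximity cue
corr_near = np.corrcoef(near_a, near_b)[0, 1]
corr_far = np.corrcoef(near_a, far)[0, 1]
```

Inverting this cue over all pixel pairs, together with the motion-based linear constraints, is what lets the authors recover scene structure and focal length from a static camera.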
NASA Astrophysics Data System (ADS)
Wright, D. J.; Raad, M.; Hoel, E.; Park, M.; Mollenkopf, A.; Trujillo, R.
2016-12-01
Introduced is a new approach for processing spatiotemporal big data by leveraging distributed analytics and storage. A suite of temporally-aware analysis tools summarizes data nearby or within variable windows, aggregates points (e.g., for various sensor observations or vessel positions), reconstructs time-enabled points into tracks (e.g., for mapping and visualizing storm tracks), joins features (e.g., to find associations between features based on attributes, spatial relationships, temporal relationships or all three simultaneously), calculates point densities, finds hot spots (e.g., in species distributions), and creates space-time slices and cubes (e.g., in microweather applications with temperature, humidity, and pressure, or within human mobility studies). These "feature geo analytics" tools run in both batch and streaming spatial analysis mode as distributed computations across a cluster of servers on typical "big" data sets, where static data exist in traditional geospatial formats (e.g., shapefile) locally on a disk or file share, attached as static spatiotemporal big data stores, or streamed in near-real-time. In other words, the approach registers large datasets or data stores with ArcGIS Server, then distributes analysis across a cluster of machines for parallel processing. Several brief use cases will be highlighted based on a 16-node server cluster at 14 Gb RAM per node, allowing, for example, the buffering of over 8 million points or thousands of polygons in 1 minute. The approach is "hybrid" in that ArcGIS Server integrates open-source big data frameworks such as Apache Hadoop and Apache Spark on the cluster in order to run the analytics. In addition, the user may devise and connect custom open-source interfaces and tools developed in Python or Python Notebooks; the common denominator being the familiar REST API.
Alcohol consumption during adolescence is associated with reduced grey matter volumes.
Heikkinen, Noora; Niskanen, Eini; Könönen, Mervi; Tolmunen, Tommi; Kekkonen, Virve; Kivimäki, Petri; Tanila, Heikki; Laukkanen, Eila; Vanninen, Ritva
2017-04-01
Cognitive impairment has been associated with excessive alcohol use, but its neural basis is poorly understood. Chronic excessive alcohol use in adolescence may lead to neuronal loss and volumetric changes in the brain. Our objective was to compare the grey matter volumes of heavy- and light-drinking adolescents. This was a longitudinal study: heavy-drinking adolescents without an alcohol use disorder and their light-drinking controls were followed up for 10 years using questionnaires at three time-points, and magnetic resonance imaging was conducted at the last time-point. The study was conducted in the area near Kuopio University Hospital, Finland. The 62 participants were aged 22-28 years and included 35 alcohol users and 27 controls who had been followed up for approximately 10 years. Alcohol use was measured by the Alcohol Use Disorders Identification Test (AUDIT)-C at three time-points during the 10 years, and participants were selected based on their AUDIT-C scores. Grey matter volume was determined and compared between the heavy- and light-drinking groups using voxel-based morphometry on three-dimensional T1-weighted magnetic resonance images, with predefined regions of interest and a threshold of P < 0.05 with small volume correction applied at cluster level. Grey matter volumes were significantly smaller among heavy-drinking participants in the bilateral anterior cingulate cortex, right orbitofrontal and frontopolar cortex, right superior temporal gyrus and right insular cortex compared to the control group (P < 0.05, family-wise error-corrected cluster level). Excessive alcohol use during adolescence appears to be associated with abnormal development of the brain grey matter. Moreover, the structural changes detected in the insula of alcohol users may reflect a reduced sensitivity to alcohol's negative subjective effects.
Distributed optical fiber vibration sensor based on Sagnac interference in conjunction with OTDR.
Pan, Chao; Liu, Xiaorui; Zhu, Hui; Shan, Xuekang; Sun, Xiaohan
2017-08-21
A real-time distributed optical fiber vibration sensing prototype based on Sagnac interference in conjunction with optical time domain reflectometry (OTDR) was developed. The sensing mechanism for single- and multi-point vibrations along the sensing fiber was analyzed theoretically and demonstrated experimentally, and the experimental results show excellent agreement with the theoretical models. It is verified that a single-point vibration induces a significantly abrupt and monotonous power change at the corresponding position of the OTDR trace. For multi-point vibrations, the detection of each subsequent vibration is influenced by all previous ones; however, if the distance between two adjacent vibrations is larger than half of the input optical pulse width, the abrupt power changes they induce remain separate and monotonous. A time-shifting differential module was developed to convert vibration-induced power changes into pulses, so vibrations can be located accurately by measuring the peak or valley positions of the vibration-induced pulses. It is demonstrated that when the width and peak power of the input optical pulse are set to 1 μs and 35 mW, respectively, the position error is less than ±0.5 m over a sensing range of more than 16 km, with a spatial resolution of ~110 m.
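The time-shifting differential module described above turns a step-like power change into a localized pulse. A minimal numeric sketch (the trace shape, shift, and step size are hypothetical, not the prototype's parameters):

```python
import numpy as np

# Synthetic OTDR trace: a vibration at sample 600 induces an abrupt,
# monotonous power change from that position onward (arbitrary units).
n = 1000
trace = np.ones(n)
vib_pos = 600
trace[vib_pos:] -= 0.2            # step-like power drop past the vibration

shift = 25                         # time shift in samples (hypothetical)
diff = trace[shift:] - trace[:-shift]   # time-shifting differential

# The step becomes a pulse of width `shift`; its valley marks the vibration.
located = int(np.argmin(diff))     # first sample of the pulse
```

Locating the pulse's peak or valley, rather than hunting for a step edge in the raw trace, is what makes the position measurement robust in the presence of noise.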
Speech recognition for embedded automatic positioner for laparoscope
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Yin, Qingyun; Wang, Yi; Yu, Daoyin
2014-07-01
In this paper a novel speech recognition methodology based on the Hidden Markov Model (HMM) is proposed for an embedded Automatic Positioner for Laparoscope (APL), which is built around a fixed-point ARM processor. The APL system is designed to assist the doctor in laparoscopic surgery by implementing the specific doctor's vocal control of the laparoscope. Real-time response to voice commands demands an efficient speech recognition algorithm for the APL. To reduce computation cost without significant loss in recognition accuracy, both arithmetic and algorithmic optimizations are applied. First, relying chiefly on arithmetic optimizations, a fixed-point front end for speech feature analysis is built to suit the ARM processor's characteristics. Then a fast likelihood computation algorithm is used to reduce the computational complexity of the HMM-based recognition. The experimental results show that the method keeps recognition time under 0.5 s while maintaining accuracy above 99%, demonstrating its ability to achieve real-time vocal control of the APL.
NASA Astrophysics Data System (ADS)
Cassan, Arnaud
2017-07-01
The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method for computing the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source Python codes.
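The fast quadrupole and hexadecapole routines themselves are not reproduced here, but the quantity they approximate, the finite-source magnification, can be computed by brute force: average the standard point-source point-lens magnification over a uniform source disk. A sketch (grid resolution and parameter values are arbitrary):

```python
import math

def A_point(u):
    # Standard point-source point-lens magnification
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

def A_finite(u0, rho, n_r=80, n_phi=160):
    """Uniform-disk finite-source magnification by brute-force averaging of
    A_point over a source disk of radius rho centred at impact parameter u0."""
    total = 0.0
    weight = 0.0
    for i in range(n_r):
        r = (i + 0.5) / n_r * rho
        for j in range(n_phi):
            phi = 2 * math.pi * j / n_phi
            u = math.hypot(u0 + r * math.cos(phi), r * math.sin(phi))
            total += A_point(u) * r   # area element proportional to r
            weight += r
    return total / weight

a_ps = A_point(0.3)
a_fs = A_finite(0.3, 0.05)
```

This brute-force average is exactly what the quadrupole and hexadecapole expansions replace with a handful of point-source evaluations, which is where the reported factor-of-several speed-ups come from.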
Liu, Ying; Zeng, Donglin; Wang, Yuanjia
2014-01-01
Dynamic treatment regimens (DTRs) are sequential decision rules tailored to each point where a clinical decision is made, based on each patient's time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual's response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
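One of the standard estimation methods for DTRs, Q-learning, works backward from the last stage: fit a regression for the final outcome, take the treatment maximizing the fitted Q-function as the estimated rule, and regress a pseudo-outcome at earlier stages. A minimal sketch of the last-stage step on simulated data (the generative model is invented for illustration; a real SMART analysis covers all stages and richer covariates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000

# Simulated SMART-like data: stage-2 treatment a2 (randomized, coded ±1)
# is beneficial only when the intermediate outcome s is positive.
s = rng.standard_normal(n)                        # intermediate outcome
a2 = rng.choice([-1, 1], n)                       # randomized treatment
y = 1.0 + a2 * s + 0.5 * rng.standard_normal(n)   # final outcome

# Q-learning, stage 2: regress y on (1, s, a2, s*a2)
X2 = np.column_stack([np.ones(n), s, a2, s * a2])
beta2, *_ = np.linalg.lstsq(X2, y, rcond=None)

def optimal_a2(s_val):
    """Estimated stage-2 rule: pick the treatment maximizing the Q-function,
    i.e., the sign of the treatment effect beta2[2] + beta2[3]*s."""
    return 1 if beta2[2] + beta2[3] * s_val > 0 else -1
```

At stage 1, the fitted maximum of this Q-function would serve as the pseudo-outcome, so the earlier decision accounts for optimal behavior later, which is the backward-induction idea behind Q-learning for DTRs.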
Improving Small Signal Stability through Operating Point Adjustment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.
2010-09-30
ModeMeter techniques for real-time small signal stability monitoring continue to mature, and more and more phasor measurements are available in power systems. It has come to the stage to bring modal information into real-time power system operation. This paper proposes to establish a procedure for Modal Analysis for Grid Operations (MANGO). Complementary to PSSs and other traditional modulation-based control, MANGO aims to provide suggestions, such as increasing generation or decreasing load, for operators to mitigate low-frequency oscillations. Different from modulation-based control, the MANGO procedure proactively maintains adequate damping at all times, instead of reacting to disturbances when they occur. The effect of operating points on small signal stability is presented in this paper. Implementation with existing operating procedures is discussed. Several approaches for modal sensitivity estimation are investigated to associate modal damping and operating parameters. The effectiveness of the MANGO procedure is confirmed through simulation studies of several test systems.
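The quantities such a procedure manipulates, modal damping and its sensitivity to an operating parameter, can be sketched on a toy system. The two-state matrix below is invented purely to illustrate eigenvalue-based damping ratios and finite-difference modal sensitivity; it is not a power-system model from the paper:

```python
import numpy as np

def damping_ratio(eig):
    """Damping ratio of an oscillatory mode eigenvalue sigma + j*omega."""
    return -eig.real / abs(eig)

def system_matrix(p):
    # Toy 2-state oscillator whose damping depends on an operating
    # parameter p (standing in for, e.g., a dispatch adjustment).
    return np.array([[0.0, 1.0], [-25.0, -(0.5 + p)]])

def mode_damping(p):
    eigs = np.linalg.eigvals(system_matrix(p))
    osc = eigs[np.abs(eigs.imag) > 1e-9][0]   # pick an oscillatory mode
    return damping_ratio(osc)

# Finite-difference modal sensitivity: how damping changes with p.
dp = 1e-4
sensitivity = (mode_damping(0.5 + dp) - mode_damping(0.5)) / dp
```

A positive sensitivity tells the operator that increasing this parameter improves damping, which is exactly the kind of operating-point adjustment suggestion the procedure is meant to produce.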
NASA Astrophysics Data System (ADS)
Pan, Yongping; Huang, Daoping
2011-03-01
In this comment, we point out the inappropriateness of Theorem 1 in the article [Tsung-Chih Lin, Mehdi Roopaei. Based on interval type-2 adaptive fuzzy H∞ tracking controller for SISO time-delay nonlinear systems. Commun Nonlinear Sci Numer Simulat 2010;15:4065-75]. To solve this problem, some formula mistakes are corrected and novel parameter adaptive laws for the interval type-2 fuzzy neural network system are given.
MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.
Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram
2015-11-01
We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25 Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼1.5 cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.
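The volumetric fusion step ("akin to KinectFusion") maintains, per voxel, a truncated signed distance and a weight, updated by a running weighted average over depth observations. A deliberately simplified 1-D sketch along a single camera ray (grid size, truncation distance, and depth values are made up):

```python
import numpy as np

# Voxel centres along one ray (metres), each storing a truncated signed
# distance (tsdf) and an accumulated observation weight.
voxels = np.linspace(0.0, 2.0, 201)
tsdf = np.zeros_like(voxels)
weight = np.zeros_like(voxels)
trunc = 0.1   # truncation distance

def integrate(tsdf, weight, depth, obs_weight=1.0):
    """Fold one depth observation into the volume (running weighted average)."""
    sd = np.clip(depth - voxels, -trunc, trunc)   # truncated signed distance
    tsdf[:] = (tsdf * weight + sd * obs_weight) / (weight + obs_weight)
    weight[:] = weight + obs_weight

# Fuse several noisy depth measurements of a surface at ~1.0 m
for d in (0.98, 1.01, 1.00, 1.02):
    integrate(tsdf, weight, d)

# The implicit surface sits at the tsdf zero crossing
surface_idx = int(np.argmin(np.abs(tsdf)))
surface_depth = voxels[surface_idx]
```

Averaging the truncated distances denoises the per-frame stereo depth maps, and extracting the zero crossing of the fused field yields the connected surface used for live feedback and pose estimation.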
Online coupled camera pose estimation and dense reconstruction from video
Medioni, Gerard; Kang, Zhuoliang
2016-11-01
A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to it. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image, and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capture. The product may display the updated 3D model after each update.
Attard, Samantha M; Howard, Annie-Green; Herring, Amy H; Zhang, Bing; Du, Shufa; Aiello, Allison E; Popkin, Barry M; Gordon-Larsen, Penny
2015-12-12
High urbanicity and income are risk factors for cardiovascular-related chronic diseases in low- and middle-income countries, perhaps due to low physical activity (PA) in urban, high income areas. Few studies have examined differences in PA over time according to income and urbanicity in a country experiencing rapid urbanization. We used data from the China Health and Nutrition Survey, a population-based cohort of Chinese adults (n = 20,083; ages 18-75y) seen a maximum of 7 times from 1991-2009. We used sex-stratified, zero-inflated negative binomial regression models to examine occupational, domestic, leisure, travel, and total PA in Chinese adults according to year, urbanicity, income, and the interactions among urbanicity, income, and year, controlling for age and region of China. We showed larger mean temporal PA declines for individuals living in relatively low urbanicity areas (1991: 500 MET-hours/week; 2009: 300 MET-hours/week) compared to high urbanicity areas (1991: 200 MET-hours/week; 2009: 125 MET-hours/week). In low urbanicity areas, the association between income and total PA went from negative in 1991 (p < 0.05) to positive by 2000 (p < 0.05). In relatively high urbanicity areas, the income-PA relationship was positive at all time points and was statistically significant at most time points after 1997 (p < 0.05). Leisure PA was the only domain of PA that increased over time, but >95% of individuals in low urbanicity areas reported zero leisure PA at each time point. Our findings show changing associations for income and urbanicity with PA over 18 years of urbanization. Total PA was lower for individuals living in more versus less urban areas at all time points. However, these differences narrowed over time, which may relate to increases in individual-level income in less urban areas of China with urbanization. Low-income individuals in higher urbanicity areas are a particularly critical group to target to increase PA in China.
ICASE Semiannual Report, 1 April 1990 - 30 September 1990
1990-11-01
underlies parallel simulation protocols that synchronize based on logical time (all known approaches). This framework describes a sufficient set of...conducted primarily by visiting scientists from universities and from industry, who have resident appointments for limited periods of time, and by consultants...wave equation with point sources and semireflecting impedance boundary conditions. For sources that are piecewise polynomial in time we get a finite
Labeling research in support of through-the-season area estimation
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator); Hay, C. M.; Sheffner, E. J.
1982-01-01
The development of LANDSAT-based through-the-season labeling procedures for corn and soybeans is discussed. A model for predicting labeling accuracy within key time periods throughout the growing season is outlined. Two methods for establishing the starting point of one key time period, viz., early season, are described. In addition, spectral-temporal characteristics for separating crops in the early season time period are discussed.
Detection of Subtle Cognitive Changes after mTBI Using a Novel Tablet-Based Task.
Fischer, Tara D; Red, Stuart D; Chuang, Alice Z; Jones, Elizabeth B; McCarthy, James J; Patel, Saumil S; Sereno, Anne B
2016-07-01
This study examined the potential for novel tablet-based tasks, modeled after eye tracking techniques, to detect subtle sensorimotor and cognitive deficits after mild traumatic brain injury (mTBI). Specifically, we examined whether performance on these tablet-based tasks (Pro-point and Anti-point) was able to correctly categorize concussed versus non-concussed participants, compared with performance on other standardized tests for concussion. Patients admitted to the emergency department with mTBI were tested on the Pro-point and Anti-point tasks, a current standard cognitive screening test (i.e., the Standard Assessment of Concussion [SAC]), and another eye movement-based tablet test, the King-Devick(®) (KD). Within hours after injury, mTBI patients showed significant slowing in response times, compared with both orthopedic and age-matched control groups, in the Pro-point task, demonstrating deficits in sensorimotor function. Mild TBI patients also showed significant slowing, compared with both control groups, on the Anti-point task, even when controlling for sensorimotor slowing, indicating deficits in cognitive function. Performance on the SAC test revealed similar deficits of cognitive function in the mTBI group, compared with the age-matched control group; however, the KD test showed no evidence of cognitive slowing in mTBI patients, compared with either control group. Further, measuring the sensitivity and specificity of these tasks to accurately predict mTBI with receiver operating characteristic analysis indicated that the Anti-point and Pro-point tasks reached excellent levels of accuracy and fared better than current standardized tools for assessment of concussion. Our findings suggest that these rapid tablet-based tasks are able to reliably detect and measure functional impairment in cognitive and sensorimotor control within hours after mTBI. 
These tasks may provide a more sensitive diagnostic measure for functional deficits that could prove key to earlier detection of concussion, evaluation of interventions, or even prediction of persistent symptoms.
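Receiver operating characteristic analysis of a single response-time measure reduces to the Mann-Whitney statistic: the AUC is the probability that a randomly chosen patient is slower than a randomly chosen control. A self-contained sketch with hypothetical response times (not the study's data):

```python
def auc(patient_times, control_times):
    """AUC as the Mann-Whitney win fraction: P(patient time > control time),
    counting ties as half a win."""
    wins = 0.0
    for p in patient_times:
        for c in control_times:
            if p > c:
                wins += 1.0
            elif p == c:
                wins += 0.5
    return wins / (len(patient_times) * len(control_times))

# Hypothetical response times (seconds): mTBI patients respond more slowly
patients = [0.62, 0.71, 0.66, 0.80, 0.58]
controls = [0.45, 0.52, 0.50, 0.60, 0.48]

score = auc(patients, controls)   # → 0.96
```

An AUC near 1.0 corresponds to the "excellent levels of accuracy" reported for the Pro-point and Anti-point tasks, while an AUC near 0.5 would indicate chance-level discrimination.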
GIS Based System for Post-Earthquake Crisis Management Using Cellular Network
NASA Astrophysics Data System (ADS)
Raeesi, M.; Sadeghi-Niaraki, A.
2013-09-01
Earthquakes are among the most destructive natural disasters. They happen mainly near the edges of tectonic plates, but they may happen just about anywhere, and they cannot be predicted. Quick response after disasters such as earthquakes decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed an area, several teams are sent to find the locations of the destroyed zones. The search and rescue phase usually continues for many days, so reducing the time needed to reach survivors is very important. A Geographical Information System (GIS) can be used to decrease response time and support management in critical situations, and position estimation within a short period of time is essential. This paper proposes a GIS-based system for post-earthquake disaster management. The system relies on several mobile positioning methods, such as the cell-ID and TA method, the signal strength method, the angle of arrival method, the time of arrival method, and the time difference of arrival method. For quick positioning, the system can be helped by any person who has a mobile device. After positioning and identifying the critical points, the points are sent to a central site for managing the quick-response rescue procedure. This solution establishes a quick way to manage the post-earthquake crisis.
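Of the positioning methods listed, time-of-arrival ranging has a particularly compact least-squares form: subtracting one base station's range equation from the others linearizes the problem in the unknown position. A noise-free sketch (station layout and units are invented):

```python
import numpy as np

# Base-station positions (km) and time-of-arrival ranges to a mobile device
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
ranges = np.linalg.norm(stations - true_pos, axis=1)   # noise-free for clarity

# Subtracting station 0's equation |x - s_i|^2 = r_i^2 from the others
# cancels the quadratic term and leaves a linear system A x = b.
r0 = ranges[0]
A = 2 * (stations[1:] - stations[0])
b = (r0**2 - ranges[1:]**2
     + np.sum(stations[1:]**2, axis=1) - np.sum(stations[0]**2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With noisy ranges from real cellular measurements the same least-squares solve yields the best-fit position, which can then be pushed to the central GIS site as a critical point.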
A vision-based end-point control for a two-link flexible manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Obergfell, Klaus
1991-01-01
The measurement and control of the end-effector position of a large two-link flexible manipulator are investigated. The system implementation is described and an initial algorithm for static end-point positioning is discussed. Most existing robots are controlled through independent joint controllers, while the end-effector position is estimated from the joint positions using a kinematic relation. End-point position feedback can be used to compensate for uncertainty and structural deflections. Such feedback is especially important for flexible robots. Computer vision is utilized to obtain end-point position measurements. A look-and-move control structure alleviates the disadvantages of the slow and variable computer vision sampling frequency. This control structure consists of an inner joint-based loop and an outer vision-based loop. A static positioning algorithm was implemented and experimentally verified. This algorithm utilizes the manipulator Jacobian to transform a tip position error to a joint error. The joint error is then used to give a new reference input to the joint controller. The convergence of the algorithm is demonstrated experimentally under payload variation. A Landmark Tracking System (Dickerson et al., 1990) is used for vision-based end-point measurements. This system was modified and tested. A real-time control system was implemented on a PC and interfaced with the vision system and the robot.
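The Jacobian-based static positioning loop described above can be sketched for a rigid planar two-link arm: a measured tip error is mapped through the inverse Jacobian to a joint correction, which becomes the new joint-controller reference. The link lengths, angles, and target are hypothetical, and the real system measured the tip with vision rather than forward kinematics.

```python
import math

# Hedged sketch of the static end-point positioning algorithm: each
# look-and-move iteration computes dq = J^-1 * (target - tip) and updates
# the joint reference. All numeric values are illustrative assumptions.

L1, L2 = 1.0, 0.8  # link lengths (m), assumed

def tip(q1, q2):
    """Forward kinematics of a planar two-link arm (stands in for the
    vision measurement here)."""
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

def jacobian(q1, q2):
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def position_step(q, target):
    """One iteration: map the tip error to a joint error via J^-1."""
    tx, ty = tip(*q)
    ex, ey = target[0] - tx, target[1] - ty
    J = jacobian(*q)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    dq1 = ( J[1][1] * ex - J[0][1] * ey) / det
    dq2 = (-J[1][0] * ex + J[0][0] * ey) / det
    return (q[0] + dq1, q[1] + dq2)

q = (0.3, 0.9)        # current joint angles (rad), assumed
target = (1.2, 0.9)   # desired tip position (m), assumed
for _ in range(5):    # a few iterations suffice for small errors
    q = position_step(q, target)
x, y = tip(*q)
```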
Liu, Xiao-Na; Zheng, Qiu-Sheng; Che, Xiao-Qing; Wu, Zhi-Sheng; Qiao, Yan-Jiang
2017-03-01
The blending end-point determination of Angong Niuhuang Wan (AGNH) is a key technological problem. A control strategy based on the quality by design (QbD) concept proposes a whole-blending end-point determination method and provides a methodology for blending Chinese materia medica containing mineral substances. Based on the QbD concept, laser-induced breakdown spectroscopy (LIBS) was used to assess the cinnabar, realgar and pearl powder blending of AGNH in a pilot-scale experiment, in particular the whole-blending end-point. The blending variability of the three mineral medicines, cinnabar, realgar and pearl powder, was measured by the moving window relative standard deviation (MWRSD) based on LIBS. The time profiles of realgar and pearl powder were not completely consistent, but all three components reached even blending at the last blending stage, so the whole-blending end-point could be determined. LIBS is a promising process analytical technology (PAT) for process control. Unlike other elemental determination technologies such as ICP-OES, LIBS does not need an elaborate digestion procedure, making it a promising and rapid technique for understanding the blending process of Chinese materia medica (CMM) containing cinnabar, realgar and other mineral traditional Chinese medicines. This study proposes a novel method for the research of large varieties of traditional Chinese medicines. Copyright© by the Chinese Pharmaceutical Association.
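The moving window relative standard deviation criterion can be sketched as follows: compute the RSD of the element signal over a sliding window and declare the end-point once it falls below a threshold. The intensity series, window length, and 2% threshold are hypothetical, not the paper's values.

```python
import statistics

# Hedged sketch of MWRSD as a blending end-point criterion. The LIBS
# intensity series and the window/threshold choices are illustrative only.

def mwrsd(series, window):
    """Relative standard deviation (std/mean, in %) over a sliding window."""
    out = []
    for i in range(window, len(series) + 1):
        w = series[i - window:i]
        out.append(100.0 * statistics.stdev(w) / statistics.mean(w))
    return out

# Hypothetical element-line intensities converging as the powder blends evenly.
intensity = [5.0, 8.0, 3.0, 6.5, 6.9, 7.1, 7.0, 7.05, 6.98, 7.02, 7.01]
rsd = mwrsd(intensity, window=4)
# End-point: first window whose RSD drops below a (hypothetical) 2% threshold.
end_idx = next(i for i, v in enumerate(rsd) if v < 2.0)
```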
Space-time measurements of oceanic sea states
NASA Astrophysics Data System (ADS)
Fedele, Francesco; Benetazzo, Alvise; Gallego, Guillermo; Shih, Ping-Chang; Yezzi, Anthony; Barbariol, Francesco; Ardhuin, Fabrice
2013-10-01
Stereo video techniques are effective for estimating the space-time wave dynamics over an area of the ocean. Indeed, a stereo camera view allows retrieval of both spatial and temporal data whose statistical content is richer than that of time series data retrieved from point wave probes. We present an application of the Wave Acquisition Stereo System (WASS) for the analysis of offshore video measurements of gravity waves in the Northern Adriatic Sea and near the southern seashore of the Crimean peninsula, in the Black Sea. We use classical epipolar techniques to reconstruct the sea surface from the stereo pairs sequentially in time, viz. a sequence of spatial snapshots. We also present a variational approach that exploits the entire data image set, providing a global space-time imaging of the sea surface, viz. simultaneous reconstruction of several spatial snapshots of the surface in order to guarantee continuity of the sea surface both in space and time. Analysis of the WASS measurements shows that the sea surface can be accurately estimated in space and time together, yielding associated directional spectra and wave statistics that agree well with probabilistic models. In particular, WASS stereo imaging is able to capture typical features of the wave surface, especially the crest-to-trough asymmetry due to second-order nonlinearities, and the observed shapes of large waves are fairly well described by theoretical models based on the theory of quasi-determinism (Boccotti, 2000). Further, we investigate space-time extremes of the observed stationary sea states, viz. the largest surface wave heights expected over a given area during the sea state duration. The WASS analysis provides the first experimental proof that a space-time extreme is generally larger than that observed in time via point measurements, in agreement with the predictions based on stochastic theories for global maxima of Gaussian fields.
Biomarkers and biometric measures of adherence to use of ARV-based vaginal rings.
Stalter, Randy M; Moench, Thomas R; MacQueen, Kathleen M; Tolley, Elizabeth E; Owen, Derek H
2016-01-01
Poor adherence to product use has been observed in recent trials of antiretroviral (ARV)-based oral and vaginal gel HIV prevention products, resulting in an inability to determine product efficacy. The delivery of microbicides through vaginal rings is widely perceived as a way to achieve better adherence, but vaginal rings do not eliminate the adherence challenges exhibited in clinical trials. Improved objective measures of adherence are needed as new ARV-based vaginal ring products enter the clinical trial stage. To identify technologies that have potential future application for vaginal ring adherence measurement, a comprehensive literature search was conducted that covered a number of biomedical and public health databases, including PubMed, Embase, POPLINE and the Web of Science. Published patents and patent applications were also searched. Technical experts were consulted to gather more information and to help evaluate the identified technologies. Approaches were evaluated for feasibility of development and clinical trial implementation, cost, and technical strength. Numerous approaches were identified through our landscape analysis and classified as either point measures or cumulative measures of vaginal ring adherence. Point measurements are those that give a measure of adherence at a particular point in time. Cumulative measures attempt to measure ring adherence over a period of time. Approaches that require modifications to an existing ring product are at a significant disadvantage, as this will likely introduce additional regulatory barriers to the development process and increase manufacturing costs. From the point of view of clinical trial implementation, desirable attributes would be high acceptance by trial participants, and little or no additional time or training requirements on the part of participants or clinic staff. 
We have identified four promising approaches as being high priority for further development based on the following measurements: intracellular drug levels, drug levels in hair, the accumulation of a vaginal analyte that diffuses into the ring, and the depletion of an intrinsic ring constituent. While some approaches show significant promise over others, it is recommended that a strategy of using complementary biometric and behavioural approaches be adopted to best understand participants' adherence to ARV-based ring products in clinical trials.
Ultrasound based mitral valve annulus tracking for off-pump beating heart mitral valve repair
NASA Astrophysics Data System (ADS)
Li, Feng P.; Rajchl, Martin; Moore, John; Peters, Terry M.
2014-03-01
Mitral regurgitation (MR) occurs when the mitral valve cannot close properly during systole. The NeoChord tool aims to repair MR by implanting artificial chordae tendineae on flail leaflets inside the beating heart, without a cardiopulmonary bypass. Image guidance is crucial for such a procedure due to the lack of direct vision of the targets or instruments. While this procedure is currently guided solely by transesophageal echocardiography (TEE), our previous work has demonstrated that guidance safety and efficiency can be significantly improved by employing augmented virtuality to provide a virtual presentation of the mitral valve annulus (MVA) and tools integrated with real-time ultrasound image data. However, real-time mitral annulus tracking remains a challenge. In this paper, we describe an image-based approach to rapidly track MVA points on 2D/biplane TEE images. This approach is composed of two components: an image-based phasing component identifying images at optimal cardiac phases for tracking, and a registration component updating the coordinates of MVA points. Preliminary validation has been performed on porcine data with an average difference between manually and automatically identified MVA points of 2.5 mm. Using a parallelized implementation, this approach is able to track the mitral valve at up to 10 images per second.
Paper-based sample-to-answer molecular diagnostic platform for point-of-care diagnostics.
Choi, Jane Ru; Tang, Ruihua; Wang, ShuQi; Wan Abas, Wan Abu Bakar; Pingguan-Murphy, Belinda; Xu, Feng
2015-12-15
Nucleic acid testing (NAT), as a molecular diagnostic technique, including nucleic acid extraction, amplification and detection, plays a fundamental role in medical diagnosis for timely medical treatment. However, current NAT technologies require relatively high-end instrumentation, skilled personnel, and are time-consuming. These drawbacks mean conventional NAT becomes impractical in many resource-limited disease-endemic settings, leading to an urgent need to develop a fast and portable NAT diagnostic tool. Paper-based devices are typically robust, cost-effective and user-friendly, holding a great potential for NAT at the point of care. In view of the escalating demand for the low cost diagnostic devices, we highlight the beneficial use of paper as a platform for NAT, the current state of its development, and the existing challenges preventing its widespread use. We suggest a strategy involving integrating all three steps of NAT into one single paper-based sample-to-answer diagnostic device for rapid medical diagnostics in the near future. Copyright © 2015 Elsevier B.V. All rights reserved.
Thermalization of Wightman functions in AdS/CFT and quasinormal modes
NASA Astrophysics Data System (ADS)
Keränen, Ville; Kleinert, Philipp
2016-07-01
We study the time evolution of Wightman two-point functions of scalar fields in AdS3-Vaidya, a spacetime undergoing gravitational collapse. In the boundary field theory, the collapse corresponds to a quench process where the dual 1+1-dimensional CFT is taken out of equilibrium and subsequently thermalizes. From the two-point function, we extract an effective occupation number in the boundary theory and study how it approaches the thermal Bose-Einstein distribution. We find that the Wightman functions, as well as the effective occupation numbers, thermalize with a rate set by the lowest quasinormal mode of the scalar field in the BTZ black hole background. We give a heuristic argument for the quasinormal decay, which is expected to apply to more general Vaidya spacetimes also in higher dimensions. This suggests a unified picture in which thermalization times of one- and two-point functions are determined by the lowest quasinormal mode. Finally, we study how these results compare to previous calculations of two-point functions based on the geodesic approximation.
Deep-Focusing Time-Distance Helioseismology
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.; Jensen, J. M.; Kosovichev, A. G.; Birch, A. C.; Fisher, Richard R. (Technical Monitor)
2001-01-01
Much progress has been made by measuring the travel times of solar acoustic waves from a central surface location to points at equal arc distance away. Depth information is obtained from the range of arc distances examined, with the larger distances revealing the deeper layers. This method we will call surface-focusing, as the common point, or focus, is at the surface. To obtain a clearer picture of the subsurface region, it would, no doubt, be better to focus on points below the surface. Our first attempt to do this used the ray theory to pick surface location pairs that would focus on a particular subsurface point. This is not the ideal procedure, as Born approximation kernels suggest that this focus should have zero sensitivity to sound speed inhomogeneities. However, the sensitivity is concentrated below the surface in a much better way than the old surface-focusing method, and so we expect the deep-focusing method to be more sensitive. A large sunspot group was studied by both methods. Inversions based on both methods will be compared.
Dynamic Analysis of a Reaction-Diffusion Rumor Propagation Model
NASA Astrophysics Data System (ADS)
Zhao, Hongyong; Zhu, Linhe
2016-06-01
The rapid development of the Internet, especially the emergence of the social networks, leads rumor propagation into a new media era. Rumor propagation in social networks has brought new challenges to network security and social stability. This paper, based on partial differential equations (PDEs), proposes a new SIS rumor propagation model by considering the effect of the communication between the different rumor infected users on rumor propagation. The stabilities of a nonrumor equilibrium point and a rumor-spreading equilibrium point are discussed by linearization technique and the upper and lower solutions method, and the existence of a traveling wave solution is established by the cross-iteration scheme accompanied by the technique of upper and lower solutions and Schauder’s fixed point theorem. Furthermore, we add the time delay to rumor propagation and deduce the conditions of Hopf bifurcation and stability switches for the rumor-spreading equilibrium point by taking the time delay as the bifurcation parameter. Finally, numerical simulations are performed to illustrate the theoretical results.
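A minimal numerical sketch of a reaction-diffusion spreading model in the spirit of the paper is given below. The paper's specific SIS terms, parameters, and network structure are not reproduced; this uses the generic SIS reactions beta*S*I (spreading) and gamma*I (forgetting) with 1D diffusion, all values being illustrative assumptions.

```python
# Hedged sketch: explicit finite-difference simulation of a 1D SIS
# reaction-diffusion system, illustrating rumor spread as a traveling front.
# Not the paper's model; all parameters are hypothetical.

def step(S, I, beta=0.5, gamma=0.1, d=0.01, dt=0.01, dx=0.1):
    n = len(S)
    # Zero-flux (Neumann) boundaries via index clamping.
    lap = lambda u, j: (u[max(j - 1, 0)] - 2 * u[j] + u[min(j + 1, n - 1)]) / dx**2
    newS, newI = [], []
    for j in range(n):
        r = beta * S[j] * I[j] - gamma * I[j]   # spreading minus forgetting
        newS.append(S[j] + dt * (d * lap(S, j) - r))
        newI.append(I[j] + dt * (d * lap(I, j) + r))
    return newS, newI

cells = 50
S = [0.99] * cells                  # susceptible (unaware) users
I = [0.0] * cells
I[0] = 0.01                         # rumor seeded at one end
for _ in range(2000):               # integrate to t = 20
    S, I = step(S, I)
# The rumor front advances from the seeded end; S + I is conserved.
```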
Role of Grain Boundaries under Long-Time Radiation
NASA Astrophysics Data System (ADS)
Zhu, Yichao; Luo, Jing; Guo, Xu; Xiang, Yang; Chapman, Stephen Jonathan
2018-06-01
Materials containing a high proportion of grain boundaries offer significant potential for the development of radiation-resistant structural materials. However, a proper understanding of the connection between the radiation-induced microstructural behavior of a grain boundary and its impact at long natural time scales is still missing. In this Letter, point defect absorption at interfaces is summarized by a jump Robin-type condition at a coarse-grained level, wherein the role of interface microstructure is effectively taken into account. Then a concise formula linking the sink strength of a polycrystalline aggregate with its grain size is introduced and compares well with experimental observations. Based on the derived model, a coarse-grained formulation incorporating the coupled evolution of grain boundaries and point defects is proposed, so as to underpin the study of long-time morphological evolution of grains induced by irradiation. Our simulation results suggest that the presence of point defect sources within a grain further accelerates its shrinking process, and radiation tends to trigger the extension of twin boundary sections.
Self-Regulated Learning in Younger and Older Adults: Does Aging Affect Metacognitive Control?
Price, Jodi; Hertzog, Christopher; Dunlosky, John
2011-01-01
Two experiments examined whether younger and older adults’ self-regulated study (item selection and study time) conformed to the region of proximal learning (RPL) model when studying normatively easy, medium, and difficult vocabulary pairs. Experiment 2 manipulated the value of recalling different pairs and provided learning goals for words recalled and points earned. Younger and older adults in both experiments selected items for study in an easy-to-difficult order, indicating the RPL model applies to older adults’ self-regulated study. Individuals allocated more time to difficult items, but prioritized easier items when given less time or point values favoring difficult items. Older adults studied more items for longer but realized lower recall than did younger adults. Older adults’ lower memory self-efficacy and perceived control correlated with their greater item restudy and avoidance of difficult items with high point values. Results are discussed in terms of RPL and agenda-based regulation models. PMID:19866382
Radiologists' preferences for just-in-time learning.
Kahn, Charles E; Ehlers, Kevin C; Wood, Beverly P
2006-09-01
Effective learning can occur at the point of care, when opportunities arise to acquire information and apply it to a clinical problem. To assess interest in point-of-care learning, we conducted a survey to explore radiologists' attitudes and preferences regarding the use of just-in-time learning (JITL) in radiology. Following Institutional Review Board approval, we invited 104 current radiology residents and 86 radiologists in practice to participate in a 12-item Internet-based survey of their attitudes toward just-in-time learning. Voluntary participation in the survey was solicited by e-mail; respondents completed the survey on a web-based form. Seventy-nine physicians completed the questionnaire, including 47 radiology residents and 32 radiologists in practice; the overall response rate was 42%. Respondents generally expressed a strong interest in JITL: 96% indicated a willingness to try such a system, and 38% indicated that they definitely would use a JITL system. They expressed a preference for learning interventions of 5-10 min in length. Current and recent radiology trainees have expressed a strong interest in just-in-time learning. The information from this survey should be useful in pursuing the design of learning interventions and systems for delivering just-in-time learning to radiologists.
Ge, Lei; Yan, Jixian; Song, Xianrang; Yan, Mei; Ge, Shenguang; Yu, Jinghua
2012-02-01
In this work, electrochemiluminescence (ECL) immunoassay was introduced into the recently proposed microfluidic paper-based analytical device (μPAD) based on electrodes screen-printed directly on paper for the very first time. Compared with commercial electrodes, the screen-printed paper-electrodes are more suitable for the further development of this paper-based ECL device into simple, low-cost and disposable applications. To further perform high-performance, high-throughput, simple and inexpensive ECL immunoassay on a μPAD for point-of-care testing, a wax-patterned three-dimensional (3D) paper-based ECL device was demonstrated for the first time. In this 3D paper-based ECL device, eight carbon working electrodes including their conductive pads were screen-printed on a piece of square paper and, after stacking, shared the same Ag/AgCl reference and carbon counter electrodes on another piece of square paper. Using the typical tris-(bipyridine)-ruthenium(II)/tri-n-propylamine ECL system, the application of this 3D paper-based ECL device was tested through the diagnosis of four tumor markers in real clinical serum samples. With the aid of a facile device-holder and a section-switch assembled on the analyzer, the eight working electrodes were sequentially placed into the circuit to trigger the ECL reaction in the sweep range from 0.5 to 1.1 V at room temperature. In addition, this 3D paper-based ECL device can be easily integrated and combined with the recently emerging paper electronics to further develop simple, sensitive, low-cost, disposable and portable μPADs for point-of-care testing, public health and environmental monitoring in remote regions and in developing or developed countries. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.
2016-06-01
We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10⁷ or 10⁸ 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
Time-frequency approach to underdetermined blind source separation.
Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong
2012-02-01
This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, where the negative values of the auto WVD of the sources are fully considered. Then, after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be determined exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M-1. Further discussion about the extraction of auto-term TF points is given, and finally numerical simulation results are presented to show the superiority of the proposed algorithm by comparing it with existing ones.
NMR diffusion simulation based on conditional random walk.
Gudbjartsson, H; Patz, S
1995-01-01
The authors introduce here a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p. 10). In earlier NMR diffusion simulation methods, such as the finite difference (FD) method, the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
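The conventional Monte Carlo baseline the authors improve upon can be sketched as follows: spins random-walk in a constant gradient, accumulate phase, and the averaged signal reproduces the textbook free-diffusion attenuation exp(-γ²G²DT³/3). All parameter values are illustrative, and (as the abstract notes for such methods) the result retains a weak dependence on the chosen time step.

```python
import math
import random

# Hedged sketch: conventional Monte Carlo simulation of NMR free diffusion
# in a constant linear gradient. Units are normalized (gamma*G = D = T = 1).

random.seed(0)  # reproducible illustration

def mc_free_diffusion_signal(n_spins=5000, n_steps=200, T=1.0, D=1.0, gG=1.0):
    """Return <cos(phase)> for spins random-walking in gradient gG over time T."""
    dt = T / n_steps
    sigma = math.sqrt(2.0 * D * dt)        # rms step of the 1D random walk
    total = 0.0
    for _ in range(n_spins):
        x, phase = 0.0, 0.0
        for _ in range(n_steps):
            x += random.gauss(0.0, sigma)  # diffusion step
            phase += gG * x * dt           # dephasing in the gradient
        total += math.cos(phase)
    return total / n_spins

signal = mc_free_diffusion_signal()
analytic = math.exp(-1.0 / 3.0)  # exp(-gamma^2 G^2 D T^3 / 3) with all = 1
```

The Monte Carlo estimate matches the analytic attenuation only up to statistical and time-step discretization error, which is precisely the dependence the authors' conditional-random-walk method removes.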
NASA Astrophysics Data System (ADS)
Zheng, Mingwen; Li, Lixiang; Peng, Haipeng; Xiao, Jinghua; Yang, Yixian; Zhang, Yanping; Zhao, Hui
2018-06-01
This paper mainly studies the finite-time stability and synchronization problems of memristor-based fractional-order fuzzy cellular neural network (MFFCNN). Firstly, we discuss the existence and uniqueness of the Filippov solution of the MFFCNN according to the Banach fixed point theorem and give a sufficient condition for the existence and uniqueness of the solution. Secondly, a sufficient condition to ensure the finite-time stability of the MFFCNN is obtained based on the definition of finite-time stability of the MFFCNN and Gronwall-Bellman inequality. Thirdly, by designing a simple linear feedback controller, the finite-time synchronization criterion for drive-response MFFCNN systems is derived according to the definition of finite-time synchronization. These sufficient conditions are easy to verify. Finally, two examples are given to show the effectiveness of the proposed results.
Performance Based Logistics... What’s Stopping Us
2016-03-01
performance-based life cycle product support, where outcomes are acquired through performance-based arrangements that deliver Warfighter requirements and ... correlates to the acquisition life cycle framework: spend the time and effort to identify and lock in the PBL requirements; conduct an analysis to ... PDASD[L&MR]) on PBL strategies. The study, Project Proof Point: A Study to Determine the Impact of Performance Based Logistics (PBL) on Life Cycle
NASA Astrophysics Data System (ADS)
Jorgenson, J. C.; Jorgenson, M. T.; Boldenow, M.; Orndahl, K. M.
2016-12-01
We documented landscape change over a 60 year period in the Arctic National Wildlife Refuge in northeastern Alaska using aerial photographs and satellite images. We used a stratified random sample to allow inference to the whole refuge (78,050 km2), with five random sites in each of seven ecoregions. Each site (2 km2) had a systematic grid of 100 points for a total of 3500 points. We chose study sites in the overlap area covered by acceptable imagery in three time periods: aerial photographs from 1947 - 1955 and 1978 - 1988, and QuickBird and IKONOS satellite images from 2000 - 2007. At each point, a 10 meter radius circle was visually evaluated in ArcMap for each time period for vegetation type, disturbance, presence of ice wedge polygon microtopography, and surface water. A landscape change category was assigned to each point based on differences detected between the three periods. Change types were assigned for time interval 1, interval 2, and overall. Additional explanatory variables included elevation, slope, aspect, geology, physiography and temperature. Overall, 23% of points changed over the study period. Fire was the most common change agent, affecting 28% of the Boreal Forest points. The next most common change was degradation of soil ice wedges (thermokarst), detected at 12% of the points on the North Slope Tundra. The other most common changes included increase in cover of trees or shrubs (7% of Boreal Forest and Brooks Range points) and erosion or deposition on river floodplains and at the Beaufort Sea coast. Changes on the North Slope Tundra tended to be related to landscape wetting, mainly thermokarst. Changes in the Boreal Forest tended to involve landscape drying, including fire, reduced area of lakes and tree increase on wet sites. The second time interval coincided with a shift towards a warmer climate and had greater change in several categories including thermokarst, lake changes and tree and shrub increase.
An ultrahigh-accuracy Miniature Dew Point Sensor based on an Integrated Photonics Platform.
Tao, Jifang; Luo, Yu; Wang, Li; Cai, Hong; Sun, Tao; Song, Junfeng; Liu, Hui; Gu, Yuandong
2016-07-15
The dew point is the temperature at which vapour begins to condense out of the gaseous phase. The deterministic relationship between the dew point and humidity is the basis for the industry-standard "chilled-mirror" dew point hygrometers used for highly accurate humidity measurements, which are essential for a broad range of industrial and metrological applications. However, these instruments have several limitations, such as high cost, large size and slow response. In this report, we demonstrate a compact, integrated photonic dew point sensor (DPS) that features high accuracy, a small footprint, and fast response. The fundamental component of this DPS is a partially exposed photonic micro-ring resonator, which serves two functions simultaneously: 1) sensing the condensed water droplets via evanescent fields and 2) functioning as a highly accurate, in situ temperature sensor based on the thermo-optic effect (TOE). This device virtually eliminates most of the temperature-related errors that affect conventional "chilled-mirror" hygrometers. Moreover, this DPS outperforms conventional "chilled-mirror" hygrometers with respect to size, cost and response time, paving the way for on-chip dew point detection and extension to applications for which the conventional technology is unsuitable because of size, cost, and other constraints.
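The deterministic dew point / humidity relationship the abstract relies on can be illustrated with the standard Magnus approximation. The constants below (a = 17.62, b = 243.12 °C) are the commonly used Magnus coefficients, not values from this paper; the sensor itself measures the dew point directly rather than computing it.

```python
import math

# Hedged illustration of the dew point / relative humidity relationship via
# the Magnus approximation. Not part of the sensor described in the abstract.

def dew_point_c(temp_c, rh_percent):
    """Dew point (deg C) from air temperature and relative humidity."""
    a, b = 17.62, 243.12  # Magnus coefficients over water
    alpha = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * alpha / (a - alpha)

td = dew_point_c(25.0, 60.0)  # roughly 16.7 deg C
```

At 100% relative humidity the dew point equals the air temperature, a useful sanity check on any such formula.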
Automatic extraction of the mid-sagittal plane using an ICP variant
NASA Astrophysics Data System (ADS)
Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus
2008-03-01
Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used for the definition of standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach for mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a tedious, time-consuming and error-prone task, which requires great care. In order to overcome this drawback, it was previously suggested to use the iterative closest point (ICP) algorithm: after an initial mirroring of the data points on a default mirror plane, the mirrored data points are registered iteratively to the model points using rigid transforms. Finally, a reflection transform approximating the cumulative transform can be extracted. In this work, we present an ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets, our method showed a better ability to match homologous areas.
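The landmark-based starting point the abstract describes can be sketched simply: given landmark pairs assumed symmetric about the mid-sagittal plane, a least-squares plane has its normal along the mean of the pair differences and passes through the mean of the pair midpoints. This is only the symmetric-landmark idea; the paper's closed-form reflection ICP refinement is not reproduced here, and the landmark coordinates are synthetic.

```python
# Hedged sketch: fit a mirror plane to landmark pairs assumed symmetric
# about the mid-sagittal plane. Synthetic data; not the paper's ICP variant.

def fit_mirror_plane(pairs):
    """Return (n, c): unit normal and a point on the plane dot(n, x - c) = 0."""
    n = [0.0, 0.0, 0.0]   # accumulated pair-difference direction
    c = [0.0, 0.0, 0.0]   # accumulated pair midpoints
    for p, q in pairs:
        for k in range(3):
            n[k] += p[k] - q[k]
            c[k] += 0.5 * (p[k] + q[k])
    m = len(pairs)
    norm = sum(v * v for v in n) ** 0.5
    return [v / norm for v in n], [v / m for v in c]

def reflect(point, n, c):
    """Mirror a point across the fitted plane."""
    dist = sum(n[k] * (point[k] - c[k]) for k in range(3))
    return [point[k] - 2.0 * dist * n[k] for k in range(3)]

# Synthetic left/right landmark pairs, symmetric about the plane x = 10.
pairs = [([14.0, 2.0, 1.0], [6.0, 2.0, 1.0]),
         ([12.0, 5.0, 3.0], [8.0, 5.0, 3.0]),
         ([11.0, 1.0, 7.0], [9.0, 1.0, 7.0])]
n, c = fit_mirror_plane(pairs)
```

Applying `reflect` with the fitted plane maps each left landmark onto its right partner, which is the consistency property the ICP variant optimizes over noisy, unmatched surface points.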
NASA Astrophysics Data System (ADS)
Li, Na; Gong, Xingyu; Li, Hongan; Jia, Pengtao
2018-01-01
For faded relics such as the Terracotta Army, 2D-3D registration between an optical camera and a point cloud model is an important step in color texture reconstruction and further applications. This paper proposes a nonuniform multiview color texture mapping for an image sequence and a three-dimensional (3D) point cloud model collected with a Handyscan3D scanner. We first introduce nonuniform multiview calibration, including an explanation of its algorithm principle and an analysis of its advantages. We then establish transformation equations based on SIFT feature points for the multiview image sequence; the selection of nonuniform multiview SIFT feature points is described in detail. Finally, the solution of the collinear equations based on multiview perspective projection is given in three steps with a flowchart. In the experiments, the method is applied to the color reconstruction of the kneeling figurine, a Tangsancai lady, and a general figurine. The results demonstrate that the proposed method provides effective support for the color reconstruction of faded cultural relics and improves the accuracy of 2D-3D registration between the image sequence and the point cloud model.
Luo, Xiaoteng; Hsing, I-Ming
2009-10-01
Nucleic acid based analysis provides accurate differentiation among closely affiliated species, and this species- and sequence-specific detection technique would be particularly useful for point-of-care (POC) testing for the prevention and early detection of highly infectious and damaging diseases. Electrochemical (EC) detection and the polymerase chain reaction (PCR) are, in our view, two indispensable steps in a nucleic acid based point-of-care testing device: the former, in comparison with its fluorescence counterpart, provides inherent advantages in detection sensitivity, device miniaturization and operational simplicity, and the latter offers an effective way to boost the amount of target to a detectable quantity. In this mini-review, we highlight some of the interesting investigations using combined EC detection and PCR amplification approaches for end-point detection and real-time monitoring. The promise of current approaches and directions for future investigation are discussed. It is our view that the synergistic effect of the combined EC-PCR steps in a portable device provides a promising detection technology platform that will be ready for point-of-care applications in the near future.
Wang, Ce; Bi, Jun; Zhang, Xu-Xiang; Fang, Qiang; Qi, Yi
2018-05-25
An influent river carrying the cumulative watershed load plays a significant role in promoting nuisance algal blooms in a river-fed lake. It is therefore important to detect in-stream water quality exceedances and evaluate the spatial relationship between the risk location and potential pollution sources. However, no comprehensive grid-based source-tracking studies have been conducted at the watershed scale for refined water quality management, particularly for plain terrain with a complex river network. In this study, field investigations were carried out during 2014 in the Taige Canal watershed of Taihu Lake Basin. A Geographical Information System (GIS)-based spatial relationship model was established to characterize the spatial relationships of "point (point-source location and monitoring site)-line (river segment)-plane (catchment)." As a practical example, in-time source tracking was triggered on April 15, 2015 at Huangnianqiao station, where TN and TP concentrations violated the water quality standard (TN 4.0 mg/L, TP 0.15 mg/L). Of the target grid cells, 53 and 46 were identified as crucial areas with high pollution intensity for TN and TP, respectively. The estimated non-point source load in each grid cell could be apportioned among source types based on spatially pollution-related entity objects. We found that the non-point source load derived from rural sewage and from livestock and poultry breeding accounted for more than 80% of the total TN or TP load, far exceeding the remaining source type, crop farming. The approach in this study should be of great benefit to local authorities in identifying seriously polluted regions and efficiently formulating environmental policies to reduce the watershed load.
Individuation in Quantum Mechanics and Space-Time
NASA Astrophysics Data System (ADS)
Jaeger, Gregg
2010-10-01
Two physical approaches—as distinct, under the classification of Mittelstaedt, from formal approaches—to the problem of individuation of quantum objects are considered, one formulated in spatiotemporal terms and one in quantum mechanical terms. The spatiotemporal approach itself has two forms: one attributed to Einstein and based on the ontology of space-time points, and the other proposed by Howard and based on intersections of world lines. The quantum mechanical approach is also provided here in two forms, one based on interference and another based on a new Quantum Principle of Individuation (QPI). It is argued that the space-time approach to individuation fails and that the quantum approach offers several advantages over it, including consistency with Leibniz’s Principle of Identity of Indiscernibles.
EOS-AM precision pointing verification
NASA Technical Reports Server (NTRS)
Throckmorton, A.; Braknis, E.; Bolek, J.
1993-01-01
The Earth Observing System (EOS) AM mission requires tight pointing knowledge to meet scientific objectives, in a spacecraft with low-frequency flexible appendage modes. As the spacecraft controller reacts to various disturbance sources and as the inherent appendage modes are excited by this control action, verification of precision pointing knowledge becomes particularly challenging for the EOS-AM mission. As presently conceived, this verification includes a complementary set of multi-disciplinary analyses, hardware tests and real-time computer-in-the-loop simulations, followed by the collection and analysis of hardware test and flight data, supported by a comprehensive database repository of validated program values.
NASA Astrophysics Data System (ADS)
Moritz, Katharina; Kleinrahm, Reiner; McLinden, Mark O.; Richter, Markus
2017-12-01
For the determination of dew-point densities and pressures of fluid mixtures, a new densimeter has been developed. The new apparatus is based on the well-established two-sinker density measurement principle with the additional capability of quantifying sorption effects. In the vicinity of the dew line, such effects cause a change in composition of the gas mixture under study, which can significantly distort accurate density measurements. The new experimental technique enables the accurate measurement of dew-point densities and pressures and the quantification of sorption effects at the same time.
Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models
NASA Astrophysics Data System (ADS)
Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.
2017-12-01
While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism group (ngdc.noaa.gov/geomag/) develops and distributes models that describe all of these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms, including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load-balancing, real-time monitoring, and instance cloning.
We will also briefly talk about the progress we achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop API interface to our Enhanced Magnetic Model (EMM).
Effect of Energy Drinks on Discoloration of Silorane and Dimethacrylate-Based Composite Resins
Ahmadizenouz, Ghazaleh; Esmaeili, Behnaz; Ahangari, Zohreh; Khafri, Soraya; Rahmani, Aghil
2016-01-01
Objectives: This study aimed to assess the effects of two energy drinks on the color change (ΔE) of two methacrylate-based composite resins and a silorane-based composite resin after one week and one month. Materials and Methods: Thirty cubic samples were fabricated from Filtek P90, Filtek Z250 and Filtek Z350XT composite resins. All specimens were stored in distilled water at 37°C for 24 hours. Baseline color values (L*a*b*) of each specimen were measured using a spectrophotometer according to the CIE L*a*b* color system. Ten randomly selected specimens from each composite were then immersed in one of the two energy drinks (Hype, Red Bull) or artificial saliva (control) for one week and one month. Color was re-assessed after each storage period and ΔE values were calculated. The data were analyzed using the Kruskal-Wallis and Mann-Whitney U tests. Results: Filtek Z250 showed the highest ΔE irrespective of solution at both time points. After seven days and one month, the lowest ΔE values were observed in the Filtek Z350XT and Filtek P90 composites immersed in artificial saliva, respectively. The ΔE values of the Filtek Z250 and Z350XT composites induced by the Red Bull and Hype energy drinks were not significantly different. Discoloration of Filtek P90 was higher in Red Bull at both time points. Conclusions: Prolonged immersion in all three solutions increased the ΔE values of all composites. However, the ΔE values remained within the clinically acceptable range (<3.3) at both time points. PMID:28127318
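The ΔE color difference used above is, in its simple CIE76 form, the Euclidean distance between two measured (L*, a*, b*) triplets; the study's spectrophotometer software may well use a newer formula such as ΔE2000, so treat this as an illustrative sketch. The 3.3 acceptability threshold is the one cited in the abstract:

```python
import math

def delta_e(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def clinically_acceptable(lab_before, lab_after, threshold=3.3):
    """True if the color change stays within the cited acceptability limit."""
    return delta_e(lab_before, lab_after) < threshold
```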
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roper, J; Bradshaw, B; Godette, K
Purpose: To create a knowledge-based algorithm for prostate LDR brachytherapy treatment planning that standardizes plan quality using seed arrangements tailored to individual physician preferences while being fast enough for real-time planning. Methods: A dataset of 130 prior cases was compiled for a physician with an active prostate seed implant practice. Ten cases were randomly selected to test the algorithm. Contours from the 120 library cases were registered to a common reference frame. Contour variations were characterized on a point-by-point basis using principal component analysis (PCA). A test case was converted to PCA vectors using the same process and then compared with each library case using a Mahalanobis distance to evaluate similarity. Rank-ordered PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Computational time was recorded, along with any subsequent modifications that required input from a treatment planner to achieve an acceptable plan. Results: The computational time required to register contours from a test case and evaluate PCA similarity across the library was approximately 10 s. Five of the ten test cases did not require any seed additions, deletions, or moves to obtain an acceptable plan. The remaining five test cases required on average 4.2 seed modifications. The time to complete manual plan modifications was less than 30 s in all cases. Conclusion: A knowledge-based treatment planning algorithm was developed for prostate LDR brachytherapy based on principal component analysis. Initial results suggest that this approach can be used to quickly create treatment plans that require few if any modifications by the treatment planner. In general, test case plans have seed arrangements very similar to those of prior cases, and thus are inherently tailored to physician preferences.
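The PCA-plus-Mahalanobis matching step described above can be sketched as follows, with each registered contour flattened into one feature vector per case. The number of retained components and the exact distance weighting are assumptions, not details from the abstract:

```python
import numpy as np

def build_library(X, k):
    """X: (n_cases, n_features) flattened, registered contour coordinates.
    Returns the PCA mean, components, per-component variances and scores."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:k]                       # top-k principal directions
    var = (S[:k] ** 2) / (len(X) - 1)    # variance along each direction
    scores = Xc @ comps.T                # library cases in PCA space
    return mean, comps, var, scores

def best_match(x, mean, comps, var, scores):
    """Index of the library case closest to test case x in Mahalanobis
    distance, computed in the reduced PCA space."""
    z = (x - mean) @ comps.T
    d = np.sqrt((((scores - z) ** 2) / var).sum(axis=1))
    return int(np.argmin(d))
```

A library case queried against the library matches itself at distance zero, which is a convenient sanity check before using the matcher on new cases.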
Real-time global illumination on mobile device
NASA Astrophysics Data System (ADS)
Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.
2014-02-01
We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates the local illumination with the shadow map on the GPU. The second stage uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. and add the indirect illumination to the local illumination on the GPU. With the limited computing resources of mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method over 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy, which collaboratively combines the CPUs and GPUs available in a mobile SoC, due to the limited computing resources of mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.
NASA Astrophysics Data System (ADS)
Kim, Joon Hyun; Kwon, Woo Jin; Shin, Yong-Il
2016-05-01
In a recent experiment, it was found that the dissipative evolution of a corotating vortex pair in a trapped Bose-Einstein condensate is well described by a point vortex model with longitudinal friction on the vortex motion, and the thermal friction coefficient was determined as a function of sample temperature. In this poster, we present a numerical study on the relaxation of 2D superfluid turbulence based on the dissipative point vortex model. We consider a homogeneous system in a cylindrical trap with randomly distributed vortices and implement vortex-antivortex pair annihilation by removing a pair when its separation becomes smaller than a certain threshold value. We characterize the relaxation of the turbulent vortex states by the decay time required for the vortex number to be reduced to a quarter of its initial value. We find the vortex decay time is inversely proportional to the thermal friction coefficient. In particular, the decay times obtained in this work show good quantitative agreement with the experimental results, indicating that, in spite of its simplicity, the point vortex model reasonably captures the physics of the relaxation dynamics of the real system.
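The dissipative point-vortex dynamics with pair annihilation can be sketched as below. The sign convention of the friction term, the parameter values, and the explicit Euler integrator are assumptions for illustration, and the image vortices needed to enforce the cylindrical-trap boundary are omitted for brevity:

```python
import numpy as np

def step(pos, sign, alpha, dt):
    """One Euler step of a dissipative point-vortex model.
    pos: (n, 2) vortex positions; sign: (n,) circulation signs (+/-1)."""
    n = len(pos)
    vel = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            r2 = d @ d
            # induced velocity: (Gamma_j / 2 pi) * z_hat x d / r^2
            vel[i] += sign[j] / (2 * np.pi) * np.array([-d[1], d[0]]) / r2
    # longitudinal friction: -alpha * s_i * (z_hat x v_i) draws dipoles together
    fric = -alpha * sign[:, None] * np.stack([-vel[:, 1], vel[:, 0]], axis=1)
    return pos + dt * (vel + fric)

def annihilate(pos, sign, rcut):
    """Remove one vortex-antivortex pair whose separation is below rcut."""
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            if sign[i] != sign[j] and np.linalg.norm(pos[i] - pos[j]) < rcut:
                keep = [k for k in range(n) if k not in (i, j)]
                return pos[keep], sign[keep]
    return pos, sign
```

With these conventions a single vortex-antivortex dipole spirals together and self-annihilates, and the closing rate scales with the friction coefficient alpha, consistent with the decay time being inversely proportional to it.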
NASA Astrophysics Data System (ADS)
Hammond, Emily; Dilger, Samantha K. N.; Stoyles, Nicholas; Judisch, Alexandra; Morgan, John; Sieren, Jessica C.
2015-03-01
Recent growth of genetic disease models in swine has presented the opportunity to advance translation of developed imaging protocols, while characterizing the genotype to phenotype relationship. Repeated imaging with multiple clinical modalities provides non-invasive detection, diagnosis, and monitoring of disease to accomplish these goals; however, longitudinal scanning requires repeatable and reproducible positioning of the animals. A modular positioning unit was designed to provide a fixed, stable base for the anesthetized animal through transit and imaging. Post ventilation and sedation, animals were placed supine in the unit and monitored for consistent vitals. Comprehensive imaging was performed with a computed tomography (CT) chest-abdomen-pelvis scan at each screening time point. Longitudinal images were rigidly registered, accounting for rotation, translation, and anisotropic scaling, and the skeleton was isolated using a basic thresholding algorithm. Assessment of alignment was quantified via eleven pairs of corresponding points on the skeleton with the first time point as the reference. Results were obtained with five animals over five screening time points. The developed unit aided in skeletal alignment within an average of 13.13 +/- 6.7 mm for all five subjects providing a strong foundation for developing qualitative and quantitative methods of disease tracking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, R; Zhu, X; Li, S
Purpose: High Dose Rate (HDR) brachytherapy forward planning is principally an iterative process; hence, plan quality is affected by planners' experience and limited planning time, which may lead to sporadic errors and inconsistencies in planning. A statistical tool based on previously approved clinical treatment plans would help to maintain the consistency of planning quality and improve the efficiency of second checking. Methods: An independent dose calculation tool was developed from commercial software. Thirty-three previously approved cervical HDR plans with the same prescription dose (550 cGy), applicator type, and treatment protocol were examined, and ICRU-defined reference point doses (bladder, vaginal mucosa, rectum, and points A/B) along with dwell times were collected. The dose calculation tool then computed an appropriate range with a 95% confidence interval for each parameter obtained, to be used as the benchmark for evaluating those parameters in future HDR treatment plans. Model quality was verified using five randomly selected approved plans from the same dataset. Results: Dose variations appear to be larger at the bladder and mucosa reference points than at the rectum. Most reference point doses from the verification plans fell within the predicted range, except the doses at two rectum points and two points at reference position A (owing to rectal anatomical variations and clinical adjustment of prescription points, respectively). Similar results were obtained for tandem and ring dwell times, despite relatively larger uncertainties. Conclusion: This statistical tool provides insight into the clinically acceptable range of cervical HDR plans, which could be useful in plan checking and in identifying potential planning errors, thus improving the consistency of plan quality.
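The benchmark described above amounts to a per-parameter reference interval estimated from the approved-plan library. A minimal sketch, assuming approximately normally distributed parameters (the abstract does not specify the exact interval construction):

```python
import numpy as np

def reference_range(samples, z=1.96):
    """Approximate 95% range for one planning parameter, estimated from
    the values observed in previously approved plans."""
    m, s = np.mean(samples), np.std(samples, ddof=1)
    return m - z * s, m + z * s

def check_plan(params, ranges):
    """Return the names of parameters that fall outside their benchmark range."""
    return [name for name, v in params.items()
            if not ranges[name][0] <= v <= ranges[name][1]]
```

A second checker would build `ranges` once from the 33-plan library and then flag any new plan whose reference point doses or dwell times land outside them.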
A quantitative evaluation of two methods for preserving hair samples
Roon, David A.; Waits, L.P.; Kendall, K.C.
2003-01-01
Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and freezing at -20°C. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.
Testing single point incremental forming molds for thermoforming operations
NASA Astrophysics Data System (ADS)
Afonso, Daniel; de Sousa, Ricardo Alves; Torcato, Ricardo
2016-10-01
Low-pressure polymer processing operations such as thermoforming or rotational molding use much simpler molds than high-pressure processes like injection molding. However, despite the low forces involved, mold manufacturing for these operations is still a very material-, energy- and time-consuming activity. The goal of this research is to develop and validate a method for manufacturing plastically formed sheet metal molds by the single point incremental forming (SPIF) operation for thermoforming. Stewart-platform-based SPIF machines allow the forming of thick metal sheets, granting the required structural stiffness for the mold surface while keeping the short manufacturing lead time and low thermal inertia.
Craciun, Catrinel; Schüz, Natalie; Lippke, Sonia; Schwarzer, Ralf
2012-03-01
This study compares a motivational skin cancer prevention approach with a volitional planning and self-efficacy intervention to enhance regular sunscreen use. A randomized controlled trial (RCT) was conducted with 205 women (mean age 25 years) in three groups: motivational; volitional; and control. Sunscreen use, action planning, coping planning and coping self-efficacy were assessed at three points in time. The volitional intervention improved sunscreen use. Coping planning emerged as the only mediator between the intervention and sunscreen use at Time 3. Findings point to the role played by coping planning as an ingredient of sun protection interventions.
Shi, Y; Qi, F; Xue, Z; Chen, L; Ito, K; Matsuo, H; Shen, D
2008-04-01
This paper presents a new deformable model using both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. The proposed model has two novelties. First, a modified scale-invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features in the vicinity of each pixel. Second, the deformable contour is constrained by both population-based and patient-specific shape statistics, yielding more robust and accurate segmentation of lung fields for serial chest radiographs. In particular, for segmenting the initial time-point images, the population-based shape statistics is used to constrain the deformable contour; as more subsequent images of the same patient are acquired, the patient-specific shape statistics collected online from the previous segmentation results gradually takes a greater role. This patient-specific shape statistics is updated each time a new segmentation result is obtained, and it is further used to refine the segmentation results of all the available time-point images. Experimental results show that the proposed method is more robust and accurate than other active shape models in segmenting lung fields from serial chest radiographs.
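The hand-over from population to patient-specific statistics can be sketched as a running estimate whose weight grows with the number of segmented time points. The linear weighting scheme and the `saturation` parameter below are hypothetical illustrations, not the paper's actual formulation:

```python
import numpy as np

class ShapePrior:
    """Blend population and patient-specific shape statistics.
    As segmentations of the same patient accumulate, the patient-specific
    mean gradually dominates the prior used to constrain the contour."""

    def __init__(self, population_mean, saturation=5):
        self.pop_mean = np.asarray(population_mean, float)
        self.sum = np.zeros_like(self.pop_mean)
        self.n = 0
        self.saturation = saturation  # hypothetical hand-over rate

    def update(self, segmented_shape):
        """Fold one new segmentation result into the patient statistics."""
        self.sum += segmented_shape
        self.n += 1

    def mean(self):
        """Current blended shape prior."""
        if self.n == 0:
            return self.pop_mean
        w = min(1.0, self.n / self.saturation)  # patient weight grows with data
        return (1 - w) * self.pop_mean + w * (self.sum / self.n)
```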
NASA Astrophysics Data System (ADS)
Lokoshchenko, A.; Teraud, W.
2018-04-01
This work describes an experimental study of the creep of cylindrical tensile test specimens made of aluminum alloy D16T at a constant temperature of 400°C. The issue examined was necking at different values of the initial tensile stress. A purpose-developed noncontacting measuring system allowed us to observe variations in specimen shape and to estimate the true stress at various times. Based on the experimental data obtained, several criteria were proposed for identifying the time at which necking occurs (the necking point). Calculations were carried out for various values of the parameters in these criteria. The relative interval of deformation time over which the test specimen stretches uniformly was also determined.
Fast maximum likelihood estimation using continuous-time neural point process models.
Lepage, Kyle Q; MacDonald, Christopher J
2015-06-01
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
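The quadrature idea can be illustrated for an inhomogeneous Poisson process, whose log-likelihood is a sum over spike times minus an integral of the intensity; replacing an n-bin Riemann sum for that integral with q Gauss-Legendre nodes is what drives the O(np^2) to O(qp^2) savings. The log-linear intensity below is an illustrative choice, not the paper's model:

```python
import numpy as np

def log_likelihood(theta, spike_times, T, q=60):
    """Continuous-time Poisson-process log-likelihood with a log-linear
    intensity lambda(t) = exp(theta0 + theta1 * t); the integral term is
    evaluated with q-node Gauss-Legendre quadrature instead of n time bins."""
    lam = lambda t: np.exp(theta[0] + theta[1] * t)
    # map quadrature nodes from [-1, 1] onto the observation window [0, T]
    x, w = np.polynomial.legendre.leggauss(q)
    t = 0.5 * T * (x + 1.0)
    integral = 0.5 * T * np.sum(w * lam(t))
    return np.sum(np.log(lam(spike_times))) - integral
```

For smooth intensities the quadrature is essentially exact at modest q (the abstract's q of 60), so a maximizer of this function matches the discrete-time estimate without the per-bin cost.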
A physical anthropomorphic phantom of a one year old child with real-time dosimetry
NASA Astrophysics Data System (ADS)
Bower, Mark William
A physical heterogeneous phantom has been created with epoxy resin based tissue substitutes. The phantom is based on the Cristy and Eckerman mathematical phantom, which in turn is a modification of the Medical Internal Radiation Dose (MIRD) model of a one-year-old child as presented by the Society of Nuclear Medicine. The Cristy and Eckerman mathematical phantom, and the physical phantom, are comprised of three different tissue types: bone, lung tissue and soft tissue. The bone tissue substitute is a homogenous mixture of bone tissues: active marrow, inactive marrow, trabecular bone, and cortical bone. Soft tissue organs are represented by a homogeneous soft tissue substitute at a particular location. Point doses were measured within the phantom with a Metal Oxide Semiconductor Field Effect Transistor (MOSFET)-based Patient Dose Verification System modified from the original radiotherapy application. The system features multiple dosimeters that are used to monitor entrance or exit skin doses and intracavity doses in the phantom in real time. Two different MOSFET devices were evaluated: the typical therapy MOSFET and a developmental MOSFET device that has an oxide layer twice as thick as the therapy MOSFET, making it of higher sensitivity. The average sensitivity (free-in-air, including backscatter) of the 'high-sensitivity' MOSFET dosimeters ranged from 1.15×10^5 mV per C kg^-1 (29.7 mV/R) to 1.38×10^5 mV per C kg^-1 (35.7 mV/R), depending on the energy of the x-ray field. The integrated physical phantom was utilized to obtain point measurements of the absorbed dose from diagnostic x-ray examinations. Organ doses were calculated based on these point dose measurements. The phantom dosimetry system functioned well, providing real-time measurement of the dose to particular organs. The system was less reliable at low doses where the main contribution to the dose was from scattered radiation.
The system also was of limited utility for determining the absorbed dose in larger systems such as the skeleton: the point dose method of estimating the organ dose to large, disperse organs is of questionable accuracy, since only a limited number of points are measured in a field with potentially large exposure variations. The MOSFET system was simple to use and considerably faster than traditional thermoluminescent dosimetry. The one-year-old phantom with the real-time MOSFET dosimeters provides a method to easily evaluate the risk to a previously understudied population from diagnostic radiographic procedures.
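The paired sensitivity figures quoted above are the same quantity in SI exposure units and in roentgens; the standard conversion 1 R = 2.58×10^-4 C/kg reproduces the quoted ~29.7 and ~35.7 mV/R values to within rounding:

```python
R_TO_C_PER_KG = 2.58e-4  # exposure: one roentgen in coulombs per kilogram

def mv_per_ckg_to_mv_per_R(sensitivity):
    """Convert MOSFET sensitivity from mV per C/kg to mV per roentgen."""
    return sensitivity * R_TO_C_PER_KG
```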
Human action recognition based on spatial-temporal descriptors using key poses
NASA Astrophysics Data System (ADS)
Hu, Shuo; Chen, Yuxin; Wang, Huaibao; Zuo, Yaqing
2014-11-01
Human action recognition is an important area of pattern recognition today due to its direct application and need in various settings such as surveillance and virtual reality. In this paper, a simple and effective human action recognition method is presented based on the key poses of the human silhouette and a spatio-temporal feature. First, the contour points of the human silhouette are extracted, key poses are learned by K-means clustering based on the Euclidean distance between each contour point and the centre point of the human silhouette, and the type of each action is labeled for further matching. Second, we obtain the trajectory of the centre point of each frame and create a spatio-temporal feature value, denoted W, to describe the motion direction and speed of each action; W contains the location and temporal order of each point on the trajectory. Finally, matching is performed by comparing the key poses and W between training sequences and test sequences; the nearest neighbor sequence is found, and its label supplies the final result. Experiments on the publicly available Weizmann dataset show that the proposed method can improve accuracy by distinguishing ambiguous poses and increase suitability for real-time applications by reducing the computational cost.
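The key-pose step can be sketched as follows: each silhouette is reduced to the distances from sampled contour points to the centre point, and K-means clustering over those descriptors yields the key poses. The descriptor normalization and the deterministic cluster seeding below are simplifying assumptions, not the paper's exact procedure:

```python
import numpy as np

def pose_descriptor(contour, n_bins=36):
    """Distances from sampled silhouette contour points to the centre point,
    normalized for scale (a simplified stand-in for the paper's features)."""
    c = contour.mean(axis=0)
    d = np.linalg.norm(contour - c, axis=1)
    idx = np.linspace(0, len(d) - 1, n_bins).astype(int)
    return d[idx] / d.max()

def key_poses(descriptors, k, iters=50):
    """Plain Lloyd's K-means over pose descriptors; returns cluster centres.
    Seeding is a simple deterministic spread (k-means++ would be more robust)."""
    X = np.asarray(descriptors, float)
    centres = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres
```

A circular silhouette gives a flat descriptor (all distances equal), which is a quick sanity check that the feature responds only to shape, not to scale or position.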
d'Alquen, Daniela; De Boeck, Kris; Bradley, Judy; Vávrová, Věra; Dembski, Birgit; Wagner, Thomas O F; Pfalz, Annette; Hebestreit, Helge
2012-02-06
The European Centres of Reference Network for Cystic Fibrosis (ECORN-CF) established an Internet forum which provides the opportunity for CF patients and other interested people to ask experts questions about CF in their mother language. The objectives of this study were to: 1) develop a detailed quality assessment tool to analyze quality of expert answers, 2) evaluate the intra- and inter-rater agreement of this tool, and 3) explore changes in the quality of expert answers over the time frame of the project. The quality assessment tool was developed by an expert panel. Five experts within the ECORN-CF project used the quality assessment tool to analyze the quality of 108 expert answers published on ECORN-CF from six language zones. 25 expert answers were scored at two time points, one year apart. Quality of answers was also assessed at an early and later period of the project. Individual rater scores and group mean scores were analyzed for each expert answer. A scoring system and training manual were developed analyzing two quality categories of answers: content and formal quality. For content quality, the grades based on group mean scores for all raters showed substantial agreement between two time points, however this was not the case for the grades based on individual rater scores. For formal quality the grades based on group mean scores showed only slight agreement between two time points and there was also poor agreement between time points for the individual grades. The inter-rater agreement for content quality was fair (mean kappa value 0.232 ± 0.036, p < 0.001) while only slight agreement was observed for the grades of the formal quality (mean kappa value 0.105 ± 0.024, p < 0.001). The quality of expert answers was rated high (four language zones) or satisfactory (two language zones) and did not change over time. The quality assessment tool described in this study was feasible and reliable when content quality was assessed by a group of raters. 
Within ECORN-CF, the tool will help ensure that CF patients all over Europe have equal access to high quality expert advice on their illness. © 2012 d’Alquen et al; licensee BioMed Central Ltd.
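Inter-rater agreement of the kind reported above is typically quantified with Cohen's kappa. As a minimal illustration (not the study's exact five-rater, per-category procedure), a pairwise Cohen's kappa for two hypothetical raters grading ten answers on a 1–4 scale might look like:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items with identical grades.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from the raters' marginal grade frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical grades for 10 expert answers on a 1-4 quality scale.
a = [4, 3, 3, 2, 4, 4, 3, 2, 3, 4]
b = [4, 3, 2, 2, 4, 3, 3, 2, 4, 4]
print(round(cohens_kappa(a, b), 3))  # → 0.545
```

Values near 0.1–0.2, as in the study, indicate only slight-to-fair agreement beyond chance.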
NASA Technical Reports Server (NTRS)
Levine, Jack; Rumsey, Charles B.
1958-01-01
The aerodynamic heat transfer to a hemispherical concave nose has been measured in free flight at Mach numbers from 3.5 to 6.6, with corresponding Reynolds numbers based on nose diameter from 7.4 × 10^6 to 14 × 10^6. Over the test Mach number range the heating on the cup nose, expressed as a ratio to the theoretical stagnation-point heating on a hemisphere nose of the same diameter, varied from 0.05 to 0.13 at the stagnation point of the cup, was approximately 0.1 at other locations within 40 deg of the stagnation point, and varied from 0.6 to 0.8 just inside the lip, where the highest heating rates occurred. At a Mach number of 5 the total heat input integrated over the surface of the cup nose, including the lip, was 0.55 times the theoretical value for a hemisphere nose with laminar boundary layer and 0.76 times that for a flat face. The heating at the stagnation point was approximately 1/5 as great as steady-flow tunnel results. Extremely high heating rates at the stagnation point (on the order of 30 times the stagnation-point values of the present test), which have occurred in conjunction with unsteady oscillatory flow around cup noses in wind-tunnel tests at Mach and Reynolds numbers within the present test range, were not observed.
NASA Astrophysics Data System (ADS)
Moskal, P.; Zoń, N.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kamińska, D.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Raczyński, L.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Zieliński, M.
2015-03-01
A novel method of hit time and hit position reconstruction in scintillator detectors is described. The method is based on comparison of detector signals with results stored in a library of synchronized model signals registered for a set of well-defined positions of scintillation points. The hit position is reconstructed as the one corresponding to the library signal that is most similar to the measured signal. The time of the interaction is determined as the relative time between the measured signal and the most similar one in the library. The degree of similarity between measured and model signals is defined as the distance between the points representing the measurement and model signals in the multi-dimensional measurement space. The novelty of the method also lies in the proposed way of synchronizing the model signals, which enables direct determination of the difference between the times of flight (TOF) of annihilation quanta from the annihilation point to the detectors. The introduced method was validated using experimental data obtained by means of the double strip prototype of the J-PET detector and a 22Na sodium isotope as a source of annihilation gamma quanta. The detector was built from plastic scintillator strips with dimensions of 5 mm×19 mm×300 mm, optically connected at both sides to photomultipliers, from which signals were sampled by means of a Serial Data Analyzer. Using the introduced method, spatial and TOF resolutions of about 1.3 cm (σ) and 125 ps (σ) were established, respectively.
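The library-matching idea, picking the stored model signal nearest to the measured one in the sampled-signal space, amounts to a nearest-neighbour search. A minimal sketch follows; the positions and sample values are invented toy data, not J-PET calibration signals:

```python
import math

def reconstruct_hit(measured, library):
    """Return (position, distance) of the library signal closest to the
    measured one, with similarity measured as Euclidean distance in the
    sampled-signal space.  `library` maps a scintillation position (cm,
    hypothetical) to the model signal's samples."""
    best_pos, best_d = None, math.inf
    for pos, model in library.items():
        d = math.dist(measured, model)
        if d < best_d:
            best_pos, best_d = pos, d
    return best_pos, best_d

# Toy library: model pulses registered at three positions along a strip.
library = {
    -10.0: [0.0, 0.2, 1.0, 0.5, 0.1],
      0.0: [0.0, 0.5, 1.0, 0.5, 0.0],
     10.0: [0.1, 0.5, 1.0, 0.2, 0.0],
}
measured = [0.0, 0.45, 0.95, 0.5, 0.05]
pos, _ = reconstruct_hit(measured, library)
print(pos)  # → 0.0
```

In the real detector the interaction time then follows as the relative time shift between the measured signal and this best-matching library signal.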
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beltran, C; Kamal, H
Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphics Processing Unit) cluster. The multicriteria optimization algorithm's running time benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results for a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.
A revised timescale for human evolution based on ancient mitochondrial genomes
Johnson, Philip L.F.; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G.; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes
2016-01-01
Summary Background Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Results Here we use mitochondrial genome sequences from 10 securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) of less than 62,000–95,000 years ago. Conclusion Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population split times, they can provide valid upper bounds; our results exclude most of the older dates for African and non-African split times recently suggested by de novo mutation rate estimates in the nuclear genome. PMID:23523248
Real-time speech encoding based on Code-Excited Linear Prediction (CELP)
NASA Technical Reports Server (NTRS)
Leblanc, Wilfrid P.; Mahmoud, S. A.
1988-01-01
This paper reports on ongoing work on the development of a real-time voice codec for the terrestrial and satellite mobile radio environments. The codec is based on a complexity-reduced version of code-excited linear prediction (CELP). The codebook search complexity was reduced to only 0.5 million floating point operations per second (MFLOPS) while maintaining excellent speech quality. Novel methods to quantize the residual and the long and short term model filters are presented.
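Most of a CELP coder's complexity sits in the codebook search. The sketch below is a deliberately simplified stand-in: it picks the codevector and optimal gain minimizing squared error against a target residual, omitting the short-term synthesis filtering that a real CELP coder (including the complexity-reduced version above) applies to each candidate; the codebook and target are invented:

```python
def codebook_search(target, codebook):
    """Return (index, gain) of the codevector minimizing squared error
    against the target residual, with the gain chosen optimally per
    codevector.  Real CELP filters each codevector through the
    synthesis filter first; that step is omitted here for brevity."""
    best = (None, 0.0, float("inf"))
    for idx, cv in enumerate(codebook):
        energy = sum(c * c for c in cv)
        if energy == 0.0:
            continue
        # Least-squares gain: <target, cv> / <cv, cv>.
        gain = sum(t * c for t, c in zip(target, cv)) / energy
        err = sum((t - gain * c) ** 2 for t, c in zip(target, cv))
        if err < best[2]:
            best = (idx, gain, err)
    return best[:2]

codebook = [[1.0, 0.0, -1.0, 0.0],
            [0.5, 1.0, 0.5, -0.5],
            [0.0, 1.0, 0.0, -1.0]]
target = [0.1, 2.0, 0.1, -1.9]
idx, gain = codebook_search(target, codebook)
print(idx)  # → 2
```

Exhaustive searches like this are what drive the MFLOPS budget; complexity reduction typically comes from structuring the codebook so most of this loop can be skipped.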
Jahan, Shah
2018-01-01
This study aimed to identify histological changes in the extrahepatic organs, hepatic iron deposition, and gene expression of some iron regulatory proteins in rats with a sterile muscle abscess during acute intoxication with Nerium oleander leaf decoction. 10 ml/kg of the leaf extract was injected intramuscularly into Wistar rats (200–225 g, n = 4). Control animals received a saline injection of matched volume. Animals were anesthetized and sacrificed 3, 6, 12, and 24 h after administration of the decoction. Lungs, kidney, spleen, and liver were extracted and processed for histopathological examination, while a portion of liver tissue was processed for iron regulatory gene expression quantification. Sections of all studied organs showed signs of cellular dysfunction with infiltration of a variety of leucocytes. In the lung sections, mononuclear cell infiltrates were observed at the 3 h time point, while at the 24 h time point dilation and even collapse of some alveoli were evident. In kidney sections, distortion of renal tubules and epithelial cells with shrinkage of glomeruli was noted at all studied time points. In the splenic sections at the 12 h time point, degeneration, depopulation, and shrinkage of the white pulp were noted. Distension of the red pulp along with activation of splenic follicles was evident 24 h after onset of the APR. Significant changes in the expression of acute phase cytokine and iron regulatory genes were noted. IL-6 and Hepc gene expression were strongly upregulated up to 12 h, whereas Tf gene expression showed an early upregulation at the 3 h time point followed by downregulation at later points, and Hjv gene expression showed an overall downregulation at all study time points compared to control. It is concluded that inherent toxins present in N. oleander can induce an acute phase response and cause severe histological changes in the organs and marked changes in the regulation of iron regulatory proteins; thus its use cannot be practiced routinely. PMID:29850455
Yock, Adam D.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Kudchadker, Rajat J.; Court, Laurence E.
2014-01-01
Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. 
The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design. PMID:25086518
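The static and linear forecast models with adjustment points can be illustrated on a single scalar feature. This sketch uses a hypothetical tumor-volume series rather than the paper's full morphology feature vectors; the values and time points are invented:

```python
def forecast(observations, adjustment_points, t, model="static"):
    """Forecast a scalar morphology feature at time t, using only values
    observed at the adjustment points (the start of treatment counts as
    one).  static: carry forward the most recent adjusted value;
    linear: extrapolate from the last two known values."""
    known = sorted(tp for tp in adjustment_points if tp <= t)
    if model == "static" or len(known) < 2:
        return observations[known[-1]]
    t1, t2 = known[-2], known[-1]
    slope = (observations[t2] - observations[t1]) / (t2 - t1)
    return observations[t2] + slope * (t - t2)

# Hypothetical GTV volume (cm^3) observed at treatment days 0, 10, 20.
obs = {0: 50.0, 10: 45.0, 20: 41.0}
print(forecast(obs, [0, 10], 20, "static"))  # → 45.0
print(forecast(obs, [0, 10], 20, "linear"))  # → 40.0
```

The forecast errors at day 20 (4.0 vs. 1.0 here) illustrate why each added adjustment point, and to a lesser degree the model form, improves accuracy.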
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind
2014-08-15
Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. 
The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design.
Comprehensive seismic monitoring of the Cascadia megathrust with real-time GPS
NASA Astrophysics Data System (ADS)
Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C. W.; Webb, F.
2013-12-01
We have developed a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone based on 1- and 5-second point position estimates computed within the ITRF08 reference frame. Raw satellite measurements are pre-cleaned by a Kalman filter stream editor that uses a geometry-free combination of phase and range observables to speed convergence while also producing independent estimates of carrier phase biases and ionosphere delay. These are then analyzed with GIPSY-OASIS using satellite clock and orbit corrections streamed continuously from the International GNSS Service (IGS) and the German Aerospace Center (DLR). The resulting RMS position scatter is less than 3 cm, and typical latencies are under 2 seconds. Currently 31 coastal Washington, Oregon, and northern California stations from the combined PANGA and PBO networks are analyzed. We are now ramping up to include all of the remaining 400+ stations currently operating throughout the Cascadia subduction zone, all of which are high-rate and telemetered in real-time to CWU. These receivers span the M9 megathrust, M7 crustal faults beneath population centers, several active Cascades volcanoes, and a host of other hazard sources. To use the point position streams for seismic monitoring, we have developed an inter-process client communication package that captures, buffers and re-broadcasts real-time positions and covariances to a variety of seismic estimation routines running on distributed hardware. An aggregator ingests, re-streams and can rebroadcast up to 24 hours of point-positions and resultant seismic estimates derived from the point positions to application clients distributed across the web. A suite of seismic monitoring applications has also been written, which includes position time series analysis, instantaneous displacement vectors, and peak ground displacement contouring and mapping. 
We have also implemented a continuous estimation of finite-fault slip along the Cascadia megathrust using a NIF-type approach. This currently operates on the terrestrial GPS data streams, but could readily be expanded to use real-time offshore geodetic measurements as well. The continuous slip distributions are used in turn to compute tsunami excitation and, when convolved with pre-computed hydrodynamic Green functions calculated using the COMCOT tsunami modeling software, run-up estimates for the entire Cascadia coastal margin. Finally, a suite of data visualization tools has been written to allow interaction with the real-time position streams and seismic estimates based on them, including time series plotting, instantaneous offset vectors, peak ground deformation contouring, finite-fault inversions, and tsunami run-up. This suite is currently bundled within a single client written in JAVA, called 'GPS Cockpit,' which is available for download.
Francis A. Roesch
2012-01-01
In the past, the goal of forest inventory was to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The general view of the associated sample design included selection probabilities based on land area observed at a discrete point in time. That is, time was not...
Predicting the Risk of Attrition for Undergraduate Students with Time Based Modelling
ERIC Educational Resources Information Center
Chai, Kevin E. K.; Gibson, David
2015-01-01
Improving student retention is an important and challenging problem for universities. This paper reports on the development of a student attrition model for predicting which first year students are most at-risk of leaving at various points in time during their first semester of study. The objective of developing such a model is to assist…
ERIC Educational Resources Information Center
Jauhiainen, Arto; Jauhiainen, Annukka; Laiho, Anne; Lehto, Reeta
2015-01-01
This article explores how the university workers of two Finnish universities experienced the range of neoliberal policymaking and governance reforms implemented in the 2000s. These reforms include quality assurance, system of defined annual working hours, outcome-based salary system and work time allocation system. Our point of view regarding…
ERIC Educational Resources Information Center
Kalil, Ariel; Ziol-Guest, Kathleen M.; Coley, Rebekah Levine
2005-01-01
Based on adolescent mothers' reports, longitudinal patterns of involvement of young, unmarried biological fathers (n=77) in teenage-mother families using cluster analytic techniques were examined. Approximately one third of fathers maintained high levels of involvement over time, another third demonstrated low involvement at both time points, and…
Towards a Rhetoric of On-line Tutoring.
ERIC Educational Resources Information Center
Coogan, David
Electronic mail-based tutoring of undergraduate writing students upsets the temporal basis of the face-to-face paradigm for writing tutorials. Taking place in real time in a specified place, the face-to-face tutorial session has a beginning, middle and end. Further, the session must have a tangible point. By contrast, in on-line tutoring, time is…
ERIC Educational Resources Information Center
Heuston, Edward Benjamin Hull
2010-01-01
Academic learning time (ALT) has long had the theoretical underpinnings sufficient to claim a causal relationship with academic achievement, but to this point empirical evidence has been lacking. This dearth of evidence has existed primarily due to difficulties associated with operationalizing ALT in traditional educational settings. Recent…
Situational Interest and Academic Achievement in the Active-Learning Classroom
ERIC Educational Resources Information Center
Rotgans, Jerome I.; Schmidt, Henk G.
2011-01-01
The aim of the present study was to investigate how situational interest develops over time and how it is related to academic achievement in an active-learning classroom. Five measures of situational interest were administered at critical points in time to 69 polytechnic students during a one-day, problem-based learning session. Results revealed…
Running DNA Mini-Gels in 20 Minutes or Less Using Sodium Boric Acid Buffer
ERIC Educational Resources Information Center
Jenkins, Kristin P.; Bielec, Barbara
2006-01-01
Providing a biotechnology experience for students can be challenging on several levels, and time is a real constraint for many experiments. Many DNA based methods require a gel electrophoresis step, and although some biotechnology procedures have convenient break points, gel electrophoresis does not. In addition to the time required for loading…
The Processing of Singular and Plural Nouns in French and English
ERIC Educational Resources Information Center
New, Boris; Brysbaert, Marc; Segui, Juan; Ferrand, Ludovic; Rastle, Kathleen
2004-01-01
Contradictory data have been obtained about the processing of singular and plural nouns in Dutch and English. Whereas the Dutch findings point to an influence of the base frequency of the singular and the plural word forms on lexical decision times (Baayen, Dijkstra, & Schreuder, 1997), the English reaction times depend on the surface frequency of…
Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.
Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun
2016-06-17
Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. 
A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data.
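The smooth-then-threshold idea in the road markings extraction step can be sketched on a single scan line. The sliding median and the fixed threshold below are simplified stand-ins for the paper's dynamic window median filter and EDEC method, and the intensity profile is invented:

```python
from statistics import median

def extract_markings(intensities, window=3, threshold=0.3):
    """Smooth a scan line's intensity profile with a sliding median
    (window size `window`), then flag indices whose smoothed intensity
    rises above the scan-line median by `threshold` -- road markings are
    retroreflective, so they return markedly higher intensity than
    asphalt."""
    half = window // 2
    smoothed = [
        median(intensities[max(0, i - half): i + half + 1])
        for i in range(len(intensities))
    ]
    base = median(smoothed)
    return [i for i, v in enumerate(smoothed) if v - base > threshold]

# Hypothetical scan line: low-intensity asphalt with one bright marking.
scan = [0.1, 0.12, 0.1, 0.11, 0.8, 0.85, 0.82, 0.1, 0.12, 0.1]
print(extract_markings(scan))  # → [4, 5, 6]
```

The median pre-filter is what keeps a single-point intensity spike from being mistaken for a marking edge.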
Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds†
Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun
2016-01-01
Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. 
A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data. PMID:27322279
NASA Astrophysics Data System (ADS)
Gambi, J. M.; García del Pino, M. L.; Gschwindl, J.; Weinmüller, E. B.
2017-12-01
This paper deals with the problem of throwing middle-sized low Earth orbit debris objects into the atmosphere via laser ablation. The post-Newtonian equations provided here allow (hypothetical) space-based acquisition, pointing and tracking systems endowed with very narrow laser beams to reach the pointing accuracy presently prescribed. In fact, whatever the orbital elements of these objects may be, these equations will allow the operators to account for the corrections needed to balance the deviations of the line-of-sight directions due to the curvature of the paths the laser beams are to travel along. To minimize the respective corrections, the systems will have to perform initial positioning manoeuvres, and the shooting point-ahead angles will have to be adapted in real time. The enclosed numerical experiments suggest that neglecting these measures will cause fatal errors, due to differences in the actual locations of the objects comparable to their size.
Harb, Afif; von Horn, Alexander; Gocalek, Kornelia; Schäck, Luisa Marilena; Clausen, Jan; Krettek, Christian; Noack, Sandra; Neunaber, Claudia
2017-07-01
Due to the rising interest in Europe in treating large cartilage defects with osteochondral allografts, research aims to find a suitable solution for long-term storage of osteochondral allografts. This is further encouraged by the fact that legal restrictions currently limit the use of ingredients from animal or human sources that are being used in other regions of the world (e.g. in the USA). Therefore, the aim of this study was A) to analyze whether a Lactated Ringer (LR) based solution is as efficient as a Dulbecco modified Eagle's minimal essential medium (DMEM) in maintaining chondrocyte viability and B) at which storage temperature (4°C vs. 37°C) chondrocyte survival of the osteochondral allograft is optimally sustained. 300 cartilage grafts were collected from the knees of ten one-year-old Black Head German Sheep. The grafts were stored in four different storage solutions (one of them DMEM-based, the other three based on Lactated Ringer solution), at two different temperatures (4 and 37°C), for 14 and 56 days. At both points in time, chondrocyte survival as well as death rate, glycosaminoglycan (GAG) content, and hydroxyproline (HP) concentration were measured and compared between the grafts stored in the different solutions and at the different temperatures. Independent of the storage solutions tested, chondrocyte survival rates were higher when stored at 4°C compared to storage at 37°C, both after short-term (14 days) and long-term (56 days) storage. At no point in time did the DMEM-based solution show superior chondrocyte survival compared to the Lactated Ringer based solution. GAG and HP content were comparable across all time points, temperatures, and solutions. LR based solutions that contain only substances that are approved in Germany may be just as efficient for storing grafts as the US DMEM-based gold standard. Moreover, in the present experiment, storage of osteochondral allografts at 4°C was superior to storage at 37°C. Copyright © 2017 Elsevier Ltd. All rights reserved.
Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view
NASA Astrophysics Data System (ADS)
Cao, Tam P.; Deng, Guang; Elton, Darrell
2009-02-01
In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms under different lighting conditions is initially compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed sign detection system. Experimental results show that the system using segmented images uses significantly fewer hardware resources on an FPGA while maintaining comparable system performance. The system is capable of processing 60 live video frames per second.
Pseudo-random bit generator based on lag time series
NASA Astrophysics Data System (ADS)
García-Martínez, M.; Campos-Cantón, E.
2014-12-01
In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we have used a delay in the generation of the time series. When these new series are mapped as x_n against x_{n+1}, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
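A toy version of a lagged logistic-map PRBG can be sketched as follows. The parameter values, the thresholding at 0.5, and the XOR combination are illustrative choices, not the authors' exact construction (which also exploits negative bifurcation parameter values):

```python
def logistic_prbg(n_bits, x0=0.3, y0=0.7, mu=3.99, lag=5):
    """Toy PRBG from two logistic-map series x and y iterated with
    x_{n+1} = mu * x_n * (1 - x_n).  A bit is emitted by thresholding
    each series at 0.5 and XOR-ing the current sample of one series
    with a `lag`-delayed sample of the other, so plotting emitted
    values x_n against x_{n+1} no longer reveals the parabola."""
    x, y = x0, y0
    ys = []           # history of the second series, for the lag
    bits = []
    while len(bits) < n_bits:
        x = mu * x * (1.0 - x)
        y = mu * y * (1.0 - y)
        ys.append(y)
        if len(ys) > lag:
            bits.append((x > 0.5) ^ (ys[-1 - lag] > 0.5))
    return [int(b) for b in bits]

print(logistic_prbg(16))
```

A real deployment would still need to pass the output through a statistical battery such as the NIST suite, as the paper does.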
Application of Time-Frequency Representations To Non-Stationary Radar Cross Section
2009-03-01
The three-dimensional plot produced by a TFR allows one to determine which spectral components of a signal vary with time [25... a range bin (of width cT/2) from the stepped frequency waveform. 2. Cancel the clutter (stationary components) by zeroing out points associated with ...generating an infinite number of bilinear time-frequency distributions based on a generalized equation and a changeable
A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising
NASA Astrophysics Data System (ADS)
Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua
2018-04-01
In order to remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. Firstly, large-scale outliers are removed using statistics of the points within an r-radius neighborhood. Then, the algorithm estimates the curvature of the point cloud data using conicoid parabolic fitting and calculates the curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centers, which are regarded as the new points. The experimental results show that this approach is efficient for different scales and intensities of noise in point clouds, achieves high precision, and preserves features at the same time. It is also robust to different noise models.
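The first stage of such a pipeline, removing large-scale outliers by neighborhood statistics within a radius r, can be sketched as follows. This is a brute-force illustration with an invented toy cloud; the curvature-weighted clustering itself is more involved:

```python
import math

def remove_outliers(points, radius=1.0, min_neighbors=2):
    """Drop points that have fewer than `min_neighbors` other points
    inside a ball of the given radius -- the large-scale outlier
    removal stage.  Brute force O(n^2); a k-d tree would be used at
    scale."""
    kept = []
    for i, p in enumerate(points):
        count = sum(
            1 for j, q in enumerate(points)
            if i != j and math.dist(p, q) <= radius
        )
        if count >= min_neighbors:
            kept.append(p)
    return kept

# Toy cloud: a small dense cluster plus one isolated outlier.
cloud = [(0, 0, 0), (0.5, 0, 0), (0, 0.5, 0), (10, 10, 10)]
print(remove_outliers(cloud))  # the isolated point is dropped
```

Only after this step would curvature estimation and the weighted clustering be applied to the surviving points.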
Correlation and 3D-tracking of objects by pointing sensors
Griesmeyer, J. Michael
2017-04-04
A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.
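The triangulation step, finding the shortest segment connecting two sensors' lines of sight, is a standard closest-point-between-two-lines computation. The sketch below is a generic version of that geometry, not the patented system itself: the midpoint of the shortest segment serves as the candidate triangulation point, and the segment length (the "miss distance") could feed the correlation test.

```python
def closest_points_between_rays(p1, d1, p2, d2, eps=1e-12):
    """Midpoint of the shortest segment connecting lines p1+t*d1 and p2+s*d2,
    plus the segment's length (the miss distance between lines of sight)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    w0 = tuple(x - y for x, y in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < eps:              # nearly parallel lines of sight
        t, s = 0.0, e / c
    else:                             # standard closed-form solution
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    q1 = tuple(pi + t * di for pi, di in zip(p1, d1))
    q2 = tuple(pi + s * di for pi, di in zip(p2, d2))
    mid = tuple((x + y) / 2.0 for x, y in zip(q1, q2))
    gap = tuple(x - y for x, y in zip(q1, q2))
    return mid, dot(gap, gap) ** 0.5

# one sensor looks along the x-axis, the other along y from (0, 1, 1)
mid, miss = closest_points_between_rays(
    (0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
    (0.0, 1.0, 1.0), (0.0, 1.0, 0.0))
```

A small miss distance relative to the sensors' observation errors would suggest the two sensors are tracking the same object.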
Real-Time Wireless Data Acquisition System
NASA Technical Reports Server (NTRS)
Valencia, Emilio J.; Perotti, Jose; Lucena, Angel; Mata, Carlos
2007-01-01
Current and future aerospace requirements demand the creation of a new breed of sensing devices, with emphasis on reduced weight, power consumption, and physical size. This new generation of sensors must possess a high degree of intelligence to provide critical data efficiently and in real-time. Intelligence will include self-calibration, self-health assessment, and pre-processing of raw data at the sensor level. Most of these features are already incorporated in the Wireless Sensors Network (SensorNet(TradeMark)), developed by the Instrumentation Group at Kennedy Space Center (KSC). A system based on the SensorNet(TradeMark) architecture consists of data collection point(s) called Central Stations (CS) and intelligent sensors called Remote Stations (RS), where one or more CSs can be accommodated depending on the specific application. The CS's major function is to establish communications with the Remote Stations and to poll each RS for data and health information. The CS also collects, stores and distributes these data to the appropriate systems requiring the information. The system has the ability to perform point-to-point, multi-point and relay mode communications with an autonomous self-diagnosis of each communications link. Upon detection of a communication failure, the system automatically reconfigures to establish new communication paths. These communication paths are automatically and autonomously selected as the best paths by the system based on the existing operating environment. The data acquisition system currently under development at KSC consists of the SensorNet(TradeMark) wireless sensors as the remote stations and the central station called the Radio Frequency Health Node (RFHN). The RFHN remotely communicates with the SensorNet(TradeMark) sensors to control them and to receive data.
The system's salient feature is the ability to provide deterministic sensor data with accurate time stamps for both time-critical and non-time-critical applications. Current wireless standards such as Zigbee(TradeMark) and Bluetooth(Registered TradeMark) do not have these capabilities and cannot meet the needs that are met by the SensorNet technology. Additionally, the system has the ability to automatically reconfigure the wireless communication link to a secondary frequency if interference is encountered, and can autonomously search for a sensor that was perceived to be lost using the relay capabilities of the sensors and the secondary frequency. The RFHN and the SensorNet designs are based on modular architectures that allow for future increases in capability and the ability to expand or upgrade with relative ease. The RFHN and SensorNet sensors can also perform data processing, which forms a distributed processing architecture allowing the system to pass along information rather than just sending "raw data points" to the next higher level system. With a relatively small size, weight and power consumption, this system has the potential for both spacecraft and aircraft applications as well as ground applications that require time-critical data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soufi, M; Arimura, H; Toyofuku, F
Purpose: To propose a computerized framework for localization of anatomical feature points on the patient surface in infrared-ray based range images by using differential geometry (curvature) features. Methods: The general concept was to reconstruct the patient surface by using a mathematical modeling technique for the computation of differential geometry features that characterize the local shapes of the patient surfaces. A region of interest (ROI) was firstly extracted based on a template matching technique applied on amplitude (grayscale) images. The extracted ROI was preprocessed for reducing temporal and spatial noises by using Kalman and bilateral filters, respectively. Next, a smooth patient surface was reconstructed by using a non-uniform rational basis spline (NURBS) model. Finally, differential geometry features, i.e. the shape index and curvedness features, were computed for localizing the anatomical feature points. The proposed framework was trained for optimizing shape index and curvedness thresholds and tested on range images of an anthropomorphic head phantom. The range images were acquired by an infrared ray-based time-of-flight (TOF) camera. The localization accuracy was evaluated by measuring the mean of minimum Euclidean distances (MMED) between reference (ground truth) points and the feature points localized by the proposed framework. The evaluation was performed for points localized on convex regions (e.g. apex of nose) and concave regions (e.g. nasofacial sulcus). Results: The proposed framework localized anatomical feature points on convex and concave anatomical landmarks with MMEDs of 1.91±0.50 mm and 3.70±0.92 mm, respectively. A statistically significant difference was obtained between the feature points on the convex and concave regions (P<0.001). Conclusion: Our study has shown the feasibility of differential geometry features for localization of anatomical feature points on the patient surface in range images.
The proposed framework might be useful for tasks involving feature-based image registration in range-image guided radiation therapy.
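The shape index and curvedness used above are the standard Koenderink descriptors computed from the two principal curvatures of the surface. A minimal sketch, assuming curvatures are already available (the abstract obtains them from a NURBS fit) and using one common sign convention; conventions vary between papers:

```python
import math

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1] from principal curvatures.
    Here k1 >= k2 is enforced and positive curvature is taken as convex;
    the value is undefined for planar points (k1 == k2 == 0)."""
    k1, k2 = max(k1, k2), min(k1, k2)
    if k1 == k2:
        if k1 == 0:
            raise ValueError("shape index undefined for planar points")
        return 1.0 if k1 > 0 else -1.0   # spherical cap / cup (umbilic)
    return (2.0 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

def curvedness(k1, k2):
    """Koenderink curvedness: overall magnitude of surface bending."""
    return math.sqrt((k1 * k1 + k2 * k2) / 2.0)
```

Thresholding the shape index separates convex landmarks (e.g. the nose apex, index near +1) from concave ones (e.g. the nasofacial sulcus, index negative), while curvedness filters out nearly flat regions.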
Cross-modal decoupling in temporal attention.
Mühlberg, Stefanie; Oriolo, Giovanni; Soto-Faraco, Salvador
2014-06-01
Prior studies have repeatedly reported behavioural benefits to events occurring at attended, compared to unattended, points in time. It has been suggested that, as for spatial orienting, temporal orienting of attention spreads across sensory modalities in a synergistic fashion. However, the consequences of cross-modal temporal orienting of attention remain poorly understood. One challenge is that the passage of time leads to an increase in event predictability throughout a trial, thus making it difficult to interpret possible effects (or lack thereof). Here we used a design that avoids complete temporal predictability to investigate whether attending to a sensory modality (vision or touch) at a point in time confers beneficial access to events in the other, non-attended, sensory modality (touch or vision, respectively). In contrast to previous studies and to what happens with spatial attention, we found that events in one (unattended) modality do not automatically benefit from happening at the time point when another modality is expected. Instead, it seems that attention can be deployed in time with relative independence for different sensory modalities. Based on these findings, we argue that temporal orienting of attention can be cross-modally decoupled in order to flexibly react according to the environmental demands, and that the efficiency of this selective decoupling unfolds in time. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Automated DBS microsampling, microscale automation and microflow LC-MS for therapeutic protein PK.
Zhang, Qian; Tomazela, Daniela; Vasicek, Lisa A; Spellman, Daniel S; Beaumont, Maribel; Shyong, BaoJen; Kenny, Jacqueline; Fauty, Scott; Fillgrove, Kerry; Harrelson, Jane; Bateman, Kevin P
2016-04-01
Reduce animal usage for discovery-stage PK studies for biologics programs using microsampling-based approaches and microscale LC-MS. We report the development of an automated DBS-based serial microsampling approach for studying the PK of therapeutic proteins in mice. Automated sample preparation and microflow LC-MS were used to enable assay miniaturization and improve overall assay throughput. Serial sampling of mice was possible over the full 21-day study period with the first six time points over 24 h being collected using automated DBS sample collection. Overall, this approach demonstrated comparable data to a previous study using single mice per time point liquid samples while reducing animal and compound requirements by 14-fold. Reduction in animals and drug material is enabled by the use of automated serial DBS microsampling for mice studies in discovery-stage studies of protein therapeutics.
Latash, M L; Gottlieb, G L
1991-09-01
We describe a model for the regulation of fast, single-joint movements, based on the equilibrium-point hypothesis. Limb movement follows constant rate shifts of independently regulated neuromuscular variables. The independently regulated variables are tentatively identified as thresholds of a length sensitive reflex for each of the participating muscles. We use the model to predict EMG patterns associated with changes in the conditions of movement execution, specifically, changes in movement times, velocities, amplitudes, and moments of limb inertia. The approach provides a theoretical neural framework for the dual-strategy hypothesis, which considers certain movements to be results of one of two basic, speed-sensitive or speed-insensitive strategies. This model is advanced as an alternative to pattern-imposing models based on explicit regulation of timing and amplitudes of signals that are explicitly manifest in the EMG patterns.
Key Microbiota Identification Using Functional Gene Analysis during Pepper (Piper nigrum L.) Peeling
Zhang, Jiachao; Hu, Qisong; Xu, Chuanbiao; Liu, Sixin; Li, Congfa
2016-01-01
Pepper pericarp microbiota plays an important role in the pepper peeling process for the production of white pepper. We collected pepper samples at different peeling time points from Hainan Province, China, and used a metagenomic approach to identify changes in the pericarp microbiota based on functional gene analysis. UniFrac distance-based principal coordinates analysis revealed significant changes in the pericarp microbiota structure during peeling, which were attributed to increases in bacteria from the genera Selenomonas and Prevotella. We identified 28 core operational taxonomic units at each time point, mainly belonging to Selenomonas, Prevotella, Megasphaera, Anaerovibrio, and Clostridium genera. The results were confirmed by quantitative polymerase chain reaction. At the functional level, we observed significant increases in microbial features related to acetyl xylan esterase and pectinesterase for pericarp degradation during peeling. These findings offer a new insight into biodegradation for pepper peeling and will promote the development of the white pepper industry. PMID:27768750
Recurrence plots of discrete-time Gaussian stochastic processes
NASA Astrophysics Data System (ADS)
Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick
2016-09-01
We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) an RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and threshold radius used to construct the RP.
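The two RP measures named above, REC and DET, can be computed directly from a thresholded distance matrix. A minimal sketch for embedding dimension 1 (as in the abstract), with an assumed threshold radius and a simple period-two demo signal; a production implementation would use vectorized code and exclude or include the main diagonal according to the chosen convention:

```python
def recurrence_plot(x, eps):
    """R[i][j] = 1 when |x_i - x_j| <= eps (embedding dimension 1)."""
    n = len(x)
    return [[1 if abs(x[i] - x[j]) <= eps else 0 for j in range(n)]
            for i in range(n)]

def recurrence_rate(R):
    """REC: fraction of recurrence points in the plot."""
    n = len(R)
    return sum(map(sum, R)) / float(n * n)

def determinism(R, lmin=2):
    """DET: fraction of recurrence points on diagonals of length >= lmin."""
    n = len(R)
    diag_points = 0
    for k in range(-(n - 1), n):                     # walk each diagonal
        line = [R[i][i + k] for i in range(max(0, -k), min(n, n - k))]
        run = 0
        for v in line + [0]:                         # sentinel flushes runs
            if v:
                run += 1
            else:
                if run >= lmin:
                    diag_points += run
                run = 0
    total = sum(map(sum, R))
    return diag_points / float(total) if total else 0.0

x = [0, 1] * 5            # period-two signal: fully deterministic structure
R = recurrence_plot(x, 0.5)
```

For this periodic demo every recurrence point lies on a long diagonal, so DET is 1 while REC reflects the fraction of matching-parity index pairs.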
Monte Carlo Simulation of THz Multipliers
NASA Technical Reports Server (NTRS)
East, J.; Blakey, P.
1997-01-01
Schottky barrier diode frequency multipliers are critical components in submillimeter and THz space-based earth observation systems. As the operating frequency of these multipliers has increased, the agreement between design predictions and experimental results has become poorer. The multiplier design is usually based on a nonlinear model using a form of harmonic balance and a model for the Schottky barrier diode. Conventional voltage-dependent lumped element models do a poor job of predicting THz frequency performance. This paper will describe a large signal Monte Carlo simulation of Schottky barrier multipliers. The simulation is a time dependent particle field Monte Carlo simulation with ohmic and Schottky barrier boundary conditions included that has been combined with a fixed point solution for the nonlinear circuit interaction. The results in the paper will point out some important time constants in varactor operation and will describe the effects of current saturation and nonlinear resistances on multiplier operation.
Welch, Brandon M; Rodriguez-Loya, Salvador; Eilbeck, Karen; Kawamoto, Kensaku
2014-01-01
Whole genome sequence (WGS) information could soon be routinely available to clinicians to support the personalized care of their patients. At such time, clinical decision support (CDS) integrated into the clinical workflow will likely be necessary to support genome-guided clinical care. Nevertheless, developing CDS capabilities for WGS information presents many unique challenges that need to be overcome for such approaches to be effective. In this manuscript, we describe the development of a prototype CDS system that is capable of providing genome-guided CDS at the point of care and within the clinical workflow. To demonstrate the functionality of this prototype, we implemented a clinical scenario of a hypothetical patient at high risk for Lynch Syndrome based on his genomic information. We demonstrate that this system can effectively use service-oriented architecture principles and standards-based components to deliver point of care CDS for WGS information in real-time.
Fleming, Denise H; Mathew, Binu S; Prasanna, Samuel; Annapandian, Vellaichamy M; John, George T
2011-04-01
Enteric-coated mycophenolate sodium (EC-MPS) is widely used in renal transplantation. With a delayed absorption profile, it has not been possible to develop limited sampling strategies to estimate area under the curve (mycophenolic acid [MPA] AUC₀₋₁₂), which have limited time points and are completed in 2 hours. We developed and validated simplified strategies to estimate MPA AUC₀₋₁₂ in an Indian renal transplant population prescribed EC-MPS together with prednisolone and tacrolimus. Intensive pharmacokinetic sampling (17 samples each) was performed in 18 patients to measure MPA AUC₀₋₁₂. The profiles at 1 month were used to develop the simplified strategies and those at 5.5 months used for validation. We followed two approaches. In one, the AUC was calculated using the trapezoidal rule with fewer time points followed by an extrapolation. In the second approach, by stepwise multiple regression analysis, models with different time points were identified and linear regression analysis performed. Using the trapezoidal rule, two equations were developed with six time points and sampling to 6 or 8 hours (8hrAUC[₀₋₁₂exp]) after the EC-MPS dose. On validation, the 8hrAUC(₀₋₁₂exp) compared with total measured AUC₀₋₁₂ had a coefficient of correlation (r²) of 0.872 with a bias and precision (95% confidence interval) of 0.54% (-6.07-7.15) and 9.73% (5.37-14.09), respectively. Second, limited sampling strategies were developed with four, five, six, seven, and eight time points and completion within 2 hours, 4 hours, 6 hours, and 8 hours after the EC-MPS dose. On validation, six, seven, and eight time point equations, all with sampling to 8 hours, had an acceptable r with the total measured MPA AUC₀₋₁₂ (0.817-0.927). 
In the six, seven, and eight time points, the bias (95% confidence interval) was 3.00% (-4.59 to 10.59), 0.29% (-5.4 to 5.97), and -0.72% (-5.34 to 3.89) and the precision (95% confidence interval) was 10.59% (5.06-16.13), 8.33% (4.55-12.1), and 6.92% (3.94-9.90), respectively. Of the eight simplified approaches, inclusion of seven or eight time points improved the accuracy of the predicted AUC compared with the actual and can be advocated based on the priority of the user.
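The first of the two approaches described above, the trapezoidal rule over a reduced set of time points followed by an extrapolation, can be sketched generically. This is a standard pharmacokinetic sketch under assumptions: the tail is extrapolated log-linearly from the last two samples, which is a common textbook device and not the study's specific regression equations, and the example concentrations are invented for illustration.

```python
import math

def trapezoid_auc(times, concs):
    """Linear trapezoidal rule over the sampled time points."""
    return sum(0.5 * (concs[i] + concs[i - 1]) * (times[i] - times[i - 1])
               for i in range(1, len(times)))

def auc_with_tail(times, concs, t_end=12.0):
    """AUC to the last sample plus a log-linear extrapolation to t_end,
    using the terminal slope estimated from the last two samples."""
    auc = trapezoid_auc(times, concs)
    (t1, c1), (t2, c2) = (times[-2], concs[-2]), (times[-1], concs[-1])
    if c2 > 0 and c1 > c2:
        lam = math.log(c1 / c2) / (t2 - t1)     # terminal elimination rate
        auc += (c2 / lam) * (1.0 - math.exp(-lam * (t_end - t2)))
    return auc
```

The second approach in the abstract, multiple linear regression of the full AUC on a few sampled concentrations, would replace the extrapolation term with fitted coefficients.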
Urbanová, Petra; Hejna, Petr; Jurda, Mikoláš
2015-05-01
Three-dimensional surface technologies, particularly close range photogrammetry and optical surface scanning, have recently advanced into affordable, flexible and accurate techniques. Forensic postmortem investigation as performed on a daily basis, however, has not yet fully benefited from their potentials. In the present paper, we tested two approaches to 3D external body documentation - digital camera-based photogrammetry combined with commercial Agisoft PhotoScan(®) software and stereophotogrammetry-based Vectra H1(®), a portable handheld surface scanner. In order to conduct the study, three human subjects were selected: a living person, a 25-year-old female, and two forensic cases admitted for postmortem examination at the Department of Forensic Medicine, Hradec Králové, Czech Republic (both 63-year-old males), one dead of traumatic, self-inflicted injuries (suicide by hanging), the other diagnosed with heart failure. All three cases were photographed in a 360° manner with a Nikon 7000 digital camera and simultaneously documented with the handheld scanner. In addition to having recorded the pre-autopsy phase of the forensic cases, both techniques were employed in various stages of autopsy. The sets of collected digital images (approximately 100 per case) were further processed to generate point clouds and 3D meshes. For each final 3D model (a pair per individual), the numbers of points and polygons were counted; the models were then assessed visually and compared quantitatively using an ICP alignment algorithm and a point cloud comparison technique based on closest point-to-point distances. Both techniques were proven to be easy to handle and equally laborious. While collecting the images at autopsy took around 20 min, the post-processing was much more time-demanding and required up to 10 h of computation time. Moreover, for the full-body scanning the post-processing of the handheld scanner required rather time-consuming manual image alignment.
In all instances the applied approaches produced high-resolution photorealistic, real sized or easy to calibrate 3D surface models. Both methods equally failed when the scanned body surface was covered with body hair or reflective moist areas. Still, it can be concluded that single camera close range photogrammetry and optical surface scanning using Vectra H1 scanner represent relatively low-cost solutions which were shown to be beneficial for postmortem body documentation in forensic pathology. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
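The quantitative comparison named above, closest point-to-point distances between two clouds, reduces to a nearest-neighbour query. A minimal brute-force sketch with invented example points; real pipelines (e.g. the ICP-plus-comparison workflow in the paper) would use a KD-tree or similar spatial index, since brute force is quadratic in cloud size:

```python
def cloud_to_cloud_distances(source, target):
    """For each point in `source`, the Euclidean distance to its nearest
    neighbour in `target` (brute force, suitable only for small clouds)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(d2(p, q) for q in target) ** 0.5 for p in source]

# tiny demo: two source points against a one-point target cloud
dists = cloud_to_cloud_distances([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
                                 [(0.0, 0.0, 1.0)])
```

Summary statistics of these distances (mean, RMS, percentiles) give the cm-to-dm deviation figures of the kind reported when photogrammetric models are validated against TLS scans.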
An office-based emergencies course for third-year dental students.
Wald, David A; Wang, Alvin; Carroll, Gerry; Trager, Jonathan; Cripe, Jane; Curtis, Michael
2013-08-01
Although uncommon, medical emergencies do occur in the dental office setting. This article describes the development and implementation of an office-based emergencies course for third-year dental students. The course reviews the basic management of selected medical emergencies. Background information is provided that further highlights the importance of proper training to manage medical emergencies in the dental office. Details regarding course development, implementation, logistics, and teaching points are highlighted. The article provides a starting point from which dental educators can modify and adapt this course and its objectives to fit their needs or resources. This is a timely topic that should benefit both dental students and dental educators.
Robust Curb Detection with Fusion of 3D-Lidar and Camera Data
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-01-01
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
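The path-linking step described above, a Markov chain over per-row curb points solved by dynamic programming, can be sketched as a Viterbi-style column search. This is an illustrative sketch under assumptions: the per-row score map and the linear jump penalty are invented stand-ins for the paper's curb-point features and consistency model.

```python
def best_curb_path(scores, smooth=1.0):
    """Pick one column per image row, maximizing total curb-point score
    minus a penalty on lateral jumps between consecutive rows."""
    rows, cols = len(scores), len(scores[0])
    best = list(scores[0])            # best cumulative score ending at col c
    back = []                         # backpointers, one list per row > 0
    for r in range(1, rows):
        prev_best, best, ptr = best, [0.0] * cols, [0] * cols
        for c in range(cols):
            # transition: prefer small column changes (curb continuity)
            val, pc = max((prev_best[p] - smooth * abs(p - c), p)
                          for p in range(cols))
            best[c] = scores[r][c] + val
            ptr[c] = pc
        back.append(ptr)
    # backtrack the optimal column path from the best final column
    c = max(range(cols), key=lambda j: best[j])
    path = [c]
    for ptr in reversed(back):
        c = ptr[c]
        path.append(c)
    path.reverse()
    return path
```

With row-wise curb-point scores as input, the recovered path is the globally optimal curb trace under the continuity penalty, which is what dynamic programming buys over greedy per-row detection.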
NASA Astrophysics Data System (ADS)
Mohanty, Shyama Prasad; Bhargava, Parag
2012-11-01
Nanoparticle-loaded quasi-solid electrolytes are important from the viewpoint of developing electrolytes for dye sensitized solar cells (DSSCs) having long term stability. The present work shows the influence of the isoelectric point of nanopowders in the electrolyte on the photoelectrochemical characteristics of DSSCs. Electrolytes with nanopowders of silica, alumina and magnesia, which have widely differing isoelectric points, are used in the study. Adsorption of ions from the electrolyte on the nanopowder surface, characterized by zeta potential measurements, shows that cations are adsorbed on the silica and alumina surfaces while anions are adsorbed on the magnesia surface. The electrochemical characteristics of nanoparticle-loaded electrolytes are examined through cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). DSSCs fabricated using liquid, silica- or alumina-loaded electrolytes exhibit almost similar performance. But interestingly, the magnesia-loaded electrolyte-based cell shows lower short circuit current density (JSC) and much higher open circuit voltage (VOC), which is attributed to adsorption of anions. Such anionic adsorption prevents the dark reaction in the magnesia-loaded electrolyte-based cell and thus enhances the VOC by almost 100 mV as compared to the liquid electrolyte-based cell. Also, a higher electron lifetime at the titania/electrolyte interface is observed in the magnesia-loaded electrolyte-based cell as compared to others.
Tunable elastic parity-time symmetric structure based on the shunted piezoelectric materials
NASA Astrophysics Data System (ADS)
Hou, Zhilin; Assouar, Badreddine
2018-02-01
We theoretically and numerically report on a tunable elastic Parity-Time (PT) symmetric structure based on shunted piezoelectric units. We show that elastic loss and gain can be achieved in piezoelectric materials when they are shunted by external circuits containing positive and negative resistances. We present and discuss, as an example, the strong dependence of the exceptional points of a three-layered system on the impedance of its external shunted circuit. The achieved results evidence that PT symmetric structures based on this proposed concept can be actively tuned without any change of their geometric configurations.
An analysis of neural receptive field plasticity by point process adaptive filtering
Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor
2001-01-01
Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
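The core update in the abstract, instantaneous steepest descent on the point-process log likelihood, can be sketched for a toy scalar model. This is an illustrative sketch, not the paper's place-field model: an assumed conditional intensity lambda(t) = exp(theta * x(t)) gives instantaneous log likelihood x*dN - lambda*dt per time bin, whose gradient in theta is x*(dN - lambda*dt); the spike train below is a hand-made regular train rather than hippocampal data.

```python
import math

def adaptive_pp_filter(spikes, covars, theta0=0.0, eta=0.05, dt=0.001):
    """Track theta in lambda(t) = exp(theta * x(t)) by stepping along the
    gradient of the instantaneous point-process log likelihood each bin."""
    theta = theta0
    trace = []
    for dN, x in zip(spikes, covars):          # dN in {0, 1} per bin
        lam = math.exp(theta * x)              # conditional intensity (Hz)
        theta += eta * x * (dN - lam * dt)     # steepest-descent update
        trace.append(theta)
    return trace

# demo: a regular ~7.4 Hz spike train in 1 ms bins, constant covariate x = 1;
# theta should settle near log(rate) ~= 2
spikes = [1 if i % 135 == 0 else 0 for i in range(20000)]
trace = adaptive_pp_filter(spikes, [1.0] * 20000)
```

Because the update is applied every bin, the estimate can follow drift on a fast time scale, which is the property the paper exploits to track migrating place fields.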
NASA Astrophysics Data System (ADS)
Rothmund, Sabrina; Niethammer, Uwe; Walter, Marco; Joswig, Manfred
2013-04-01
In recent years, the high-resolution and multi-temporal 3D mapping of the Earth's surface using terrestrial laser scanning (TLS), ground-based optical images and especially low-cost UAV-based aerial images (Unmanned Aerial Vehicle) has grown in importance. This development resulted from the progressive technical improvement of the imaging systems and the freely available multi-view stereo (MVS) software packages. These different methods of data acquisition for the generation of accurate, high-resolution digital surface models (DSMs) were applied as part of an eight-week field campaign at the Super-Sauze landslide (South French Alps). An area of approximately 10,000 m² with long-term average displacement rates greater than 0.01 m/day has been investigated. The TLS-based point clouds were acquired at different viewpoints with an average point spacing between 10 to 40 mm and at different dates. On these days, more than 50 optical images were taken on points along a predefined line on the side part of the landslide by a low-cost digital compact camera. Additionally, aerial images were taken by a radio-controlled mini quad-rotor UAV equipped with another low-cost digital compact camera. The flight altitude ranged between 20 m and 250 m and produced a corresponding ground resolution between 0.6 cm and 7 cm. DGPS measurements were carried out as well in order to geo-reference and validate the point cloud data. To generate unscaled photogrammetric 3D point clouds from a disordered and tilted image set, we use the widespread open-source software package Bundler and PMVS2 (University of Washington). These multi-temporal DSMs are required on the one hand to determine the three-dimensional surface deformations and on the other hand it will be required for differential correction for orthophoto production. Drawing on the example of the acquired data at the Super-Sauze landslide, we demonstrate the potential but also the limitations of the photogrammetric point clouds. 
To determine the quality of the photogrammetric point clouds, they are compared with the TLS-based DSMs. The comparison shows that photogrammetric point accuracies are in the range of centimeters to decimeters and therefore do not reach the quality of the high-resolution TLS-based DSMs. Further, the validation of the photogrammetric point clouds reveals that some of them exhibit internal curvature effects. The advantage of photogrammetric 3D data acquisition is the use of low-cost equipment and less time-consuming data collection in the field. While the accuracy of the photogrammetric point clouds is not as high as that of the TLS-based DSMs, the advantages of the former method are seen when it is applied in areas where dm-range accuracy is sufficient.
Development of a Consumable Inventory Management Strategy for the Supply Management Unit
2007-12-01
should be either returned to the supplier for partial credit or sent to disposal. This means that based upon current and projected consumption ... economical to maintain based upon current and projected consumption rates. The magnitude of the feasible excess is driven by the base stock level prescribed ... Days of Supply model to establish Requisitioning Objectives (RO) and Reorder Points (ROP), which are based upon historical usage, lead time, and supply
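The RO/ROP quantities mentioned in the excerpt follow standard inventory arithmetic. A minimal sketch under assumptions: the excerpt does not give the SMU's actual Days-of-Supply parameters, so the textbook forms (lead-time demand plus safety stock for ROP; a days-of-supply target for RO) and the example numbers are stand-ins.

```python
def reorder_point(daily_demand, lead_time_days, safety_stock=0.0):
    """Classic ROP: expected demand during the replenishment lead time,
    plus a safety-stock buffer against usage variability."""
    return daily_demand * lead_time_days + safety_stock

def requisitioning_objective(daily_demand, days_of_supply):
    """RO as a days-of-supply target: the stock level covering a set
    number of days of projected consumption."""
    return daily_demand * days_of_supply

# e.g. 10 units/day, 5-day lead time, 20 units of safety stock
rop = reorder_point(10.0, 5.0, 20.0)
```

When on-hand plus on-order stock drops to the ROP, a replenishment order is placed to bring the position back up to the RO.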
Strategies for satellite-based monitoring of CO2 from distributed area and point sources
NASA Astrophysics Data System (ADS)
Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David
2014-05-01
Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. 
However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al, 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. 
These solutions typically have longer times between revisits, limiting their ability to resolve temporal variations. Geostationary and non-sun-synchronous low-Earth orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide comprehensive mapping of distributed area sources such as megacities, with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage. An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar-orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets, as well as the crucial gaps in the capabilities of this constellation. References: Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, White Paper, 6 p., March 2013.
Zero-moment point determination of worst-case manoeuvres leading to vehicle wheel lift
NASA Astrophysics Data System (ADS)
Lapapong, S.; Brown, A. A.; Swanson, K. S.; Brennan, S. N.
2012-01-01
This paper proposes a method to evaluate vehicle rollover propensity based on a frequency-domain representation of the zero-moment point (ZMP). Unlike other rollover metrics such as the static stability factor, which is based on the steady-state behaviour, and the load transfer ratio, which requires the calculation of tyre forces, the ZMP is based on a simplified kinematic model of the vehicle and the analysis of the contact point of the vehicle relative to the edge of the support polygon. Previous work has validated the use of the ZMP experimentally in its ability to predict wheel lift in the time domain. This work explores the use of the ZMP in the frequency domain to allow a chassis designer to understand how operating conditions and vehicle parameters affect rollover propensity. The ZMP analysis is then extended to calculate worst-case sinusoidal manoeuvres that lead to untripped wheel lift, and the analysis is tested across several vehicle configurations and compared with that of the standard Toyota J manoeuvre.
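The abstract does not reproduce the ZMP formula; as an illustrative sketch only, a quasi-static, roll-free rigid-vehicle approximation places the lateral ZMP offset at (h/g)·a_y and predicts untripped wheel lift once that offset leaves the half-track support polygon. The function names and parameter values below are hypothetical, not those of the paper:

```python
def lateral_zmp_offset(a_y, cg_height, g=9.81):
    """Lateral offset (m) of the zero-moment point from the CG's ground
    projection for a simplified roll-free, rigid-vehicle model under
    lateral acceleration a_y (m/s^2), with CG height cg_height (m)."""
    return cg_height * a_y / g

def wheel_lift_predicted(a_y, cg_height, track_width, g=9.81):
    """Wheel lift is indicated once the ZMP leaves the support polygon,
    i.e. its lateral offset exceeds half the track width."""
    return abs(lateral_zmp_offset(a_y, cg_height, g)) > track_width / 2.0
```

Note that the lift threshold a_y = g·t/(2h) in this simplified model recovers the static stability factor, consistent with the abstract's comparison of the two metrics.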
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique in which a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or as measurements in a sensor fusion algorithm with low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results that demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
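As an illustration of the underlying idea only (the sensor described above uses optical flow with feature recognition, not this method), frame-to-frame translation for a downward-pointing camera can be estimated by exhaustive block matching between consecutive grayscale frames:

```python
def estimate_shift(img0, img1, max_shift=3):
    """Estimate the integer (dx, dy) translation between two grayscale
    frames (lists of lists of intensities) by exhaustive search over
    candidate shifts, scoring each by mean squared difference over the
    overlapping region.  Accumulating such shifts over time yields a
    simple visual-odometry displacement estimate."""
    h, w = len(img0), len(img0[0])
    best, best_mse = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ssd, count = 0, 0
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    d = img0[y][x] - img1[y + dy][x + dx]
                    ssd += d * d
                    count += 1
            mse = ssd / count  # normalize so small overlaps are not favored
            if mse < best_mse:
                best_mse, best = mse, (dx, dy)
    return best
```

Exhaustive matching is O(shifts × pixels) per frame pair, which is exactly the kind of CPU cost the abstract notes as limiting for power-constrained platforms; feature-based optical flow reduces this by tracking a sparse set of points instead.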
Kamalakar, Kotte; Mahesh, Goli; Prasad, Rachapudi B N; Karuna, Mallampalli S L
2015-01-01
Castor oil, a non-edible oil containing a hydroxy fatty acid, ricinoleic acid (89.3%), was chemically modified using a two-step procedure. The first step involved acylation (with C2-C6 alkanoic anhydrides) of the -OH functionality employing a green catalyst, Kieselguhr-G, in a solvent-free medium. The catalyst was filtered after the reaction and reused several times without loss of activity. The second step involved esterification of the acylated castor fatty acids with a branched mono-alcohol, 2-ethylhexanol, and with polyols, namely neopentyl glycol (NPG), trimethylolpropane (TMP), and pentaerythritol (PE), to obtain 16 novel base stocks. When evaluated for lubricant properties, the base stocks showed very low pour points (-30 to -45°C), a broad viscosity range (20.27-370.73 cSt), high viscosity indices (144-171), good thermal and oxidative stabilities, and high weld load capacities, making them suitable for multi-range industrial applications such as hydraulic fluids, metal-working fluids, gear oils, forging, and aviation applications. The study revealed that acylated branched mono- and polyol esters rich in monounsaturation are desirable for developing low pour point base stocks.
Concrete thawing studied by single-point ramped imaging.
Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W
1997-12-01
A series of two-dimensional images of the proton distribution in a hardened concrete sample has been obtained during the thawing process (from -50°C up to 11°C). The SPRITE sequence is optimal for this study given the characteristically short relaxation times of water in this porous medium (T2* < 200 μs and T1 < 3.6 ms). The relaxation parameters of the sample were determined in order to optimize the time efficiency of the sequence, permitting a 4-scan 64 x 64 acquisition in under 3 min. The image acquisition is fast on the time scale of the temperature evolution of the specimen. The frozen water distribution is quantified through a position-based study of the image contrast. A multiple-point acquisition method is presented and the resulting improvement in signal sensitivity is discussed.
Cluster-Based Multipolling Sequencing Algorithm for Collecting RFID Data in Wireless LANs
NASA Astrophysics Data System (ADS)
Choi, Woo-Yong; Chatterjee, Mainak
2015-03-01
With the growing use of RFID (Radio Frequency Identification), it is becoming important to devise ways to read RFID tags in real time. Access points (APs) of IEEE 802.11-based wireless Local Area Networks (LANs) are being integrated with RFID networks so that real-time RFID data can be collected efficiently. Several schemes, such as multipolling methods based on the dynamic search algorithm and on random sequencing, have been proposed. However, as the number of RFID readers associated with an AP increases, it becomes difficult for the dynamic search algorithm to derive the multipolling sequence in real time. Although multipolling methods can eliminate the polling overhead, the performance of multipolling based on random sequencing still needs to be enhanced. To that end, we propose a real-time cluster-based multipolling sequencing algorithm that eliminates more than 90% of the polling overhead, particularly when the dynamic search algorithm fails to derive the multipolling sequence in real time.
Zhang, Yuji
2015-01-01
Molecular networks act as the backbone of molecular activities within cells, offering a unique opportunity to better understand the mechanisms of diseases. While network data usually constitute only static network maps, integrating them with time course gene expression information can provide clues to the dynamic features of these networks and unravel the mechanistic driver genes characterizing cellular responses. Time course gene expression data allow us to broadly "watch" the dynamics of the system. However, one challenge in the analysis of such data is to establish and characterize the interplay among genes that are altered at different time points in the context of a biological process or functional category. Integrative analysis of these data sources will lead us to a more complete understanding of how biological entities (e.g., genes and proteins) coordinately perform their biological functions in biological systems. In this paper, we introduce a novel network-based approach to extract functional knowledge from time-dependent biological processes at a system level using time course mRNA sequencing data in zebrafish embryo development. The proposed method was applied to investigate 1α,25(OH)2D3-altered mechanisms in zebrafish embryo development. We applied the method to a public zebrafish time course mRNA-Seq dataset containing two different treatments across four time points. We constructed networks between gene ontology biological process categories that were enriched in differentially expressed genes between consecutive time points and different conditions. The temporal propagation of 1α,25-dihydroxyvitamin D3-altered transcriptional changes started from a few genes altered initially at an earlier stage and spread to large groups of biologically coherent genes at later stages. The most notable biological processes included neuronal and retinal development and generalized stress response.
In addition, we investigated the relationships among biological processes enriched in co-expressed genes under different conditions. The enriched biological processes include translation elongation, nucleosome assembly, and retina development. These network dynamics provide new insights into the impact of 1α,25-dihydroxyvitamin D3 treatment on bone and cartilage development. We developed a network-based approach to analyzing the differentially expressed genes (DEGs) at different time points by integrating molecular interactions and gene ontology information. These results demonstrate that the proposed approach can provide insight into the molecular mechanisms taking place in vertebrate embryo development upon treatment with 1α,25(OH)2D3. Our approach enables the monitoring of biological processes that can serve as a basis for generating new testable hypotheses. Such a network-based integration approach can easily be extended to any temporal- or condition-dependent genomic data analysis.
Using Norm-Based Appeals to Increase Response Rates in Evaluation Research: A Field Experiment
ERIC Educational Resources Information Center
Misra, Shalini; Stokols, Daniel; Marino, Anne Heberger
2012-01-01
A field experiment was conducted to test the effectiveness of norm-based persuasive messages for increasing response rates in online survey research. Participants in an interdisciplinary conference were asked to complete two successive postconference surveys and randomly assigned to one of two groups at each time point. The experimental group…
Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling
ERIC Educational Resources Information Center
Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.
2012-01-01
The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating the reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining the…
Getting Medical Directors Out of the C-Suite and Closer to Points of Care.
Schlosser, Michael
2016-11-01
Physicians, more than anyone else, can influence peers when it comes to talking about evidence-based care, even when it runs counter to customary, but costly, practice patterns. The timing couldn't be better to put physicians in this leadership role because of the growing use of value-based payment models.
Adolescent health-risk behavior and community disorder.
Wiehe, Sarah E; Kwan, Mei-Po; Wilson, Jeff; Fortenberry, J Dennis
2013-01-01
Various forms of community disorder are associated with health outcomes, but little is known about how the dynamic contexts in which an adolescent spends time relate to her health-related behaviors. Our objective was to assess whether exposure to contexts associated with crime (as a marker of community disorder) correlates with self-reported health-related behaviors among adolescent girls. Girls (N = 52), aged 14-17, were recruited from a single geographic urban area and monitored for 1 week using a GPS-enabled cell phone. Adolescents completed an audio computer-assisted self-administered interview survey on substance use (cigarette, alcohol, or marijuana use) and sexual intercourse in the last 30 days. In addition to recorded home and school addresses, phones transmitted location data every 5 minutes (path points). Using ArcGIS, we defined community disorder as aggregated point-level Uniform Crime Report data within a 200-meter Euclidean buffer around home, school, and each path point. Using Stata, we analyzed how exposure to areas of higher crime prevalence differed between girls who reported each behavior and those who did not. Participants lived and spent time in areas with variable crime prevalence within 200 meters of their home, school, and path points. Significant differences in exposure based on home location occurred between girls who reported any substance use and those who did not (p = 0.04), and between girls who reported sexual intercourse and those who did not (p = 0.01). Differences in exposure by school and path points were significant only for substance use (p = 0.03 and p = 0.02, respectively). Exposure also varied by school/non-school day as well as time of day. Adolescent travel patterns are not random; furthermore, the crime context in which an adolescent spends time relates to her health-related behavior. These data may guide policy on crime control and inform time- and space-specific interventions to improve adolescent health.
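The buffer-based exposure measure described above can be sketched as follows, assuming projected coordinates in metres; the function and data are illustrative only, not the study's ArcGIS workflow:

```python
import math

def mean_crime_exposure(path_points, crime_points, radius_m=200.0):
    """Mean number of crime incidents falling within a Euclidean buffer
    of each path point.  Both inputs are lists of (x, y) coordinates in
    metres (e.g. a projected UTM system, so Euclidean distance is
    meaningful)."""
    counts = []
    for px, py in path_points:
        n = sum(1 for cx, cy in crime_points
                if math.hypot(cx - px, cy - py) <= radius_m)
        counts.append(n)
    return sum(counts) / len(counts)
```

The same function applies to the home and school addresses by passing them as single-point paths; comparing the resulting exposure values between behavior groups is then an ordinary two-sample test.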
Operationalizing Semantic Medline for meeting the information needs at point of care.
Rastegar-Mojarad, Majid; Li, Dingcheng; Liu, Hongfang
2015-01-01
Scientific literature is one of the popular resources for providing decision support at point of care. It is highly desirable to bring the most relevant literature to support the evidence-based clinical decision making process. Motivated by the recent advance in semantically enhanced information retrieval, we have developed a system, which aims to bring semantically enriched literature, Semantic Medline, to meet the information needs at point of care. This study reports our work towards operationalizing the system for real time use. We demonstrate that the migration of a relational database implementation to a NoSQL (Not only SQL) implementation significantly improves the performance and makes the use of Semantic Medline at point of care decision support possible.
Particle Swarm Optimization of Low-Thrust, Geocentric-to-Halo-Orbit Transfers
NASA Astrophysics Data System (ADS)
Abraham, Andrew J.
Missions to Lagrange points are becoming increasingly popular amongst spacecraft mission planners. Lagrange points are locations in space where the gravitational forces from two bodies and the centrifugal force acting on a third body cancel. To date, all spacecraft that have visited a Lagrange point have done so using high-thrust, chemical propulsion. Due to the increasing availability of low-thrust (high-efficiency) propulsive devices, and their increasing capability in terms of fuel efficiency and instantaneous thrust, it has now become possible for a spacecraft to reach a Lagrange point orbit without the aid of chemical propellant. While at any given time there are many paths for a low-thrust trajectory to take, only one is optimal. The traditional approach to spacecraft trajectory optimization utilizes some form of gradient-based algorithm. While these algorithms offer numerous advantages, they also have a few significant shortcomings. The three most significant are: (1) an initial guess solution is required to initialize the algorithm, (2) the radius of convergence can be quite small, allowing the algorithm to become trapped in local minima, and (3) gradient information is not always accessible, nor always trustworthy, for a given problem. To avoid these problems, this dissertation focuses on optimizing a low-thrust transfer trajectory from a geocentric orbit to an Earth-Moon L1 Lagrange point orbit using the method of Particle Swarm Optimization (PSO). The PSO method is an evolutionary heuristic originally developed to model birds swarming to locate hidden food sources. It enables the exploration of the invariant stable manifold of the target Lagrange point orbit in an effort to optimize the spacecraft's low-thrust trajectory. Examples of these optimized trajectories are presented and contrasted with those found using traditional, gradient-based approaches.
In summary, the results of this dissertation show that the PSO method does indeed successfully optimize the low-thrust transfer trajectory problem without the need for an initial guess. Furthermore, a two-degree-of-freedom PSO problem formulation significantly outperformed a one-degree-of-freedom formulation, by at least an order of magnitude in terms of CPU time. Finally, the PSO method is also used to solve a traditional two-burn, impulsive transfer to a Lagrange point orbit using a hybrid optimization algorithm that incorporates a gradient-based shooting algorithm as a pre-optimizer. Surprisingly, the results of this study show that "fast" transfers outperform "slow" transfers in terms of both Δv and time of flight.
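The PSO mechanics described above (no initial guess; each particle's velocity blends inertia, a pull toward its own best point, and a pull toward the swarm's best point) can be sketched in a few lines. This is a generic minimal PSO on a test function, not the dissertation's trajectory optimizer; all parameter values are illustrative:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer minimizing f over box bounds.
    No initial guess is required: particles start at random positions
    and share the best point found (the 'social' term)."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # per-particle best positions
    Pf = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]                       # inertia
                           + c1 * r1 * (P[i][d] - X[i][d])   # cognitive pull
                           + c2 * r2 * (G[d] - X[i][d]))     # social pull
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]),
                              bounds[d][1])
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf
```

Because the update uses only function values, no gradient information is needed, which is precisely the advantage over shooting-type methods cited in the abstract.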
Investigation of Calcium Sulfate’s Contribution to Chemical Off Flavor in Baked Items
2013-09-30
…studies if any calcium additive is needed. If shelf life and texture are not adversely affected, it may prove to be a cost savings to eliminate… …a 9-point Quality scale to assess the overall aroma and flavor quality. The 9-point Quality scale is based on the Hedonic scale developed by David Peryam and…
NASA Astrophysics Data System (ADS)
Nada, Mohamed Y.; Othman, Mohamed A. K.; Capolino, Filippo
2017-11-01
We present an approach and a theoretical framework for generating high-order exceptional points of degeneracy (EPDs) in photonic structures based on periodic coupled resonator optical waveguides (CROWs). Such EPDs involve the coalescence of Floquet-Bloch eigenwaves in CROWs, without the presence of gain and loss, which contrasts with the parity-time symmetry required to develop exceptional points based on gain and loss balance. The EPDs arise here by introducing symmetry breaking in a conventional chain of coupled resonators through periodic coupling to an adjacent uniform optical waveguide, which leads to unique modal characteristics that cannot be realized in conventional CROWs. Such remarkable characteristics include high quality factors (Q factors) and strong field enhancement, even without any mirrors at the two ends of a cavity. We show for the first time the capability of CROWs to exhibit EPDs of various orders, including the degenerate band edge (DBE) and the stationary inflection point. The proposed CROW of finite length shows an enhanced quality factor when operating near the DBE, and the Q factor exhibits an unconventional scaling with the CROW's length. We develop the theory of EPDs in such unconventional CROW using coupled-wave equations, and we derive an analytical expression for the dispersion relation. The proposed unconventional CROW concepts have various potential applications including Q switching, nonlinear devices, lasers, and extremely sensitive sensors.
NASA Technical Reports Server (NTRS)
Pedretti, Kevin T.; Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)
1997-01-01
A variety of different network technologies and topologies are currently being evaluated as part of the Whitney Project. This paper reports on the implementation and performance of a Fast Ethernet network configured in a 4x4 2D torus topology in a testbed cluster of 'commodity' Pentium Pro PCs. Several benchmarks were used for performance evaluation: an MPI point-to-point message passing benchmark, an MPI collective communication benchmark, and the NAS Parallel Benchmarks version 2.2 (NPB2). Our results show that for point-to-point communication on an unloaded network, the hub and 1-hop routes on the torus have about the same bandwidth and latency. However, the bandwidth decreases and the latency increases on the torus with each additional route hop. Collective communication benchmarks show that the torus provides roughly four times more aggregate bandwidth and eight times faster MPI barrier synchronizations than a hub-based network for 16-processor systems. Finally, the SOAPBOX benchmarks, which simulate real-world CFD applications, generally demonstrated substantially better performance on the torus than on the hub. In the few cases where the hub was faster, the difference was negligible. In total, our experimental results lead to the conclusion that for Fast Ethernet networks, the torus topology has better performance and scales better than a hub-based network.
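The per-hop latency behaviour reported above follows from torus geometry: each extra hop adds a store-and-forward stage, and on a 4x4 torus with wraparound links no route needs more than 4 hops. A small sketch (node numbering assumed row-major; this is illustrative, not the Whitney routing code) computes minimum hop counts:

```python
def torus_hops(a, b, n=4):
    """Minimum hop count between nodes a and b on an n x n 2D torus,
    nodes numbered row-major (0..n*n-1).  Each axis wraps around, so
    the per-axis distance is the shorter of the direct and wrapped
    paths; hops add across the two axes."""
    ax, ay = a % n, a // n
    bx, by = b % n, b // n
    dx = min(abs(ax - bx), n - abs(ax - bx))
    dy = min(abs(ay - by), n - abs(ay - by))
    return dx + dy

# Diameter of the 4x4 torus: the worst-case route length over all pairs.
max_hops = max(torus_hops(a, b) for a in range(16) for b in range(16))
```

The bounded diameter (4 hops here, versus a single shared segment on the hub) is what gives the torus its aggregate-bandwidth advantage for collective operations.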
Tong, Yindong; Bu, Xiaoge; Chen, Junyue; Zhou, Feng; Chen, Long; Liu, Maodian; Tan, Xin; Yu, Tao; Zhang, Wei; Mi, Zhaorong; Ma, Lekuan; Wang, Xuejun; Ni, Jing
2017-01-05
Based on a time-series dataset and the mass balance method, the contributions of various sources to the nutrient discharges from the Yangtze River to the East China Sea are identified. The results indicate that the nutrient concentrations vary considerably among different sections of the Yangtze River. Non-point sources are an important source of nutrients to the Yangtze River, contributing about 36% and 63% of the nitrogen and phosphorus discharged into the East China Sea, respectively. Nutrient inputs from non-point sources vary among the sections of the Yangtze River, and the contributions of non-point sources increase from upstream to downstream. Considering the rice-growing patterns in the Yangtze River Basin, the synchrony of rice tillering and the wet seasons might be an important cause of the high nutrient discharge from non-point sources. Based on our calculations, a reduction of 0.99 Tg per year in total nitrogen discharges from the Yangtze River would be needed to limit the occurrences of harmful algal blooms in the East China Sea to 15 per year. The extensive construction of sewage treatment plants in urban areas may have only a limited effect on reducing the occurrences of harmful algal blooms in the future.
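The mass-balance attribution described above can be illustrated with a toy calculation: the non-point contribution is estimated as the residual after known point-source loads are subtracted from the total discharge. The numbers in the usage note are hypothetical, not the paper's data:

```python
def nonpoint_share(total_load, point_source_loads):
    """Simple mass balance: the part of a river's total nutrient load
    that is not accounted for by known point-source loads is attributed
    to non-point sources.  Returns (non-point load, fractional share).
    All loads must be in the same units (e.g. Tg per year)."""
    nonpoint = total_load - sum(point_source_loads)
    return nonpoint, nonpoint / total_load
```

For example, with a hypothetical total load of 100 units and point-source loads of 40 and 24, the residual is 36 units, a 36% non-point share; this mirrors the form (not the data) of the nitrogen estimate above.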
A deeper look at the X-ray point source population of NGC 4472
NASA Astrophysics Data System (ADS)
Joseph, T. D.; Maccarone, T. J.; Kraft, R. P.; Sivakoff, G. R.
2017-10-01
In this paper we discuss the X-ray point source population of NGC 4472, an elliptical galaxy in the Virgo cluster. We used recent deep Chandra data combined with archival Chandra data to obtain a 380 ks exposure time. We find 238 X-ray point sources within 3.7 arcmin of the galaxy centre, with a completeness flux F_X (0.5-2 keV) = 6.3 × 10^-16 erg s^-1 cm^-2. Most of these sources are expected to be low-mass X-ray binaries. We find that, using data from a single galaxy that is both complete and has a large number of objects (~100) below 10^38 erg s^-1, the X-ray luminosity function is well fitted with a single power-law model. By cross-matching our X-ray data with both space-based and ground-based optical data for NGC 4472, we find that 80 of the 238 sources are in globular clusters. We compare the red and blue globular cluster subpopulations and find that red clusters are nearly six times more likely to host an X-ray source than blue clusters. We show that there is evidence that these two subpopulations have significantly different X-ray luminosity distributions. Source catalogues for all X-ray point sources, as well as any corresponding optical data for globular cluster sources, are also presented here.
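A single power-law luminosity function can be fitted without binning via the standard maximum-likelihood (Hill-type) estimator; this sketch is illustrative and is not the fitting code used in the paper:

```python
import math

def powerlaw_slope_mle(luminosities, l_min):
    """Maximum-likelihood estimate of the slope alpha of a single
    power-law distribution p(L) proportional to L**(-alpha) (alpha > 1),
    using only sources above the completeness limit l_min:
        alpha_hat = 1 + n / sum(ln(L_i / l_min)).
    Unlike fits to binned counts, this uses every source directly."""
    data = [l for l in luminosities if l >= l_min]
    return 1.0 + len(data) / sum(math.log(l / l_min) for l in data)
```

Restricting the sample to fluxes above the completeness limit, as in the abstract, is essential: including incompletely detected faint sources would bias the slope.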
Crossfit analysis: a novel method to characterize the dynamics of induced plant responses.
Jansen, Jeroen J; van Dam, Nicole M; Hoefsloot, Huub C J; Smilde, Age K
2009-12-16
Many plant species show induced responses that protect them against exogenous attacks. These responses involve the production of many different bioactive compounds. Plant species belonging to the Brassicaceae family produce defensive glucosinolates, which may greatly influence their favorable nutritional properties for humans. Each responding compound may have its own dynamic profile and metabolic relationships with other compounds. The chemical background of the induced response is therefore highly complex and may therefore not reveal all the properties of the response in any single model. This study therefore aims to describe the dynamics of the glucosinolate response, measured at three time points after induction in a feral Brassica, by a three-faceted approach, based on Principal Component Analysis. First the large-scale aspects of the response are described in a 'global model' and then each time-point in the experiment is individually described in 'local models' that focus on phenomena that occur at specific moments in time. Although each local model describes the variation among the plants at one time-point as well as possible, the response dynamics are lost. Therefore a novel method called the 'Crossfit' is described that links the local models of different time-points to each other. Each element of the described analysis approach reveals different aspects of the response. The crossfit shows that smaller dynamic changes may occur in the response that are overlooked by global models, as illustrated by the analysis of a metabolic profiling dataset of the same samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sasai, Ryo, E-mail: rsasai@riko.shimane-u.ac.jp; Shinomura, Hisashi
Lead bromide-based layered perovskite powders with azobenzene derivatives were prepared by a homogeneous precipitation method. The diffuse reflectance (DR) and photoluminescence (PL) spectra of the hybrid powder materials show sharp absorption and PL peaks originating from excitons produced in the PbBr4(2-) layer. When the hybrid powder was irradiated with UV light at 350 nm, the absorption band of the trans-azobenzene chromophore, observed around 350 nm, decreased, while the absorption band of the cis-azobenzene chromophore, observed around 450 nm, increased. These results indicate that the azobenzene chromophores in the hybrid materials exhibit reversible photoisomerization. Moreover, the PL intensity from the exciton also varied with the photoisomerization of the azobenzene chromophores. Thus, for the first time, we succeeded in preparing an azobenzene-derivative lead bromide-based layered perovskite showing photochromism before and after UV light irradiation. Highlights: PbBr-based layered perovskite with azobenzene derivatives could be synthesized by a homogeneous precipitation method. Azobenzene derivatives incorporated in the hybrid exhibited reversible photoisomerization under UV and/or visible light irradiation. The PL properties of the hybrid could also be varied by photoisomerization.
Psychophysics of time perception and intertemporal choice models
NASA Astrophysics Data System (ADS)
Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.
2008-03-01
Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting; general hyperbolic discounting (exponential discounting with logarithmic time perception following the Weber-Fechner law, equivalent to a q-exponential discount model based on Tsallis' statistics); simple hyperbolic discounting; and Stevens' power-law exponential discounting (exponential discounting with Stevens' power-function time perception). In order to examine the fitness of these models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small-sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power-law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processing underlying temporal discounting and time perception are discussed.
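The discount functions compared above can be written compactly: the q-exponential form generalizes both exponential discounting (as q → 1) and simple hyperbolic discounting (at q = 0), and AICc penalizes model complexity for small samples. A sketch under these standard definitions, not the authors' estimation code:

```python
import math

def exponential(d, k):
    """Exponential discounting: V(d) = exp(-k d)."""
    return math.exp(-k * d)

def simple_hyperbolic(d, k):
    """Simple hyperbolic discounting: V(d) = 1 / (1 + k d)."""
    return 1.0 / (1.0 + k * d)

def q_exponential(d, k, q):
    """q-exponential discounting from Tsallis statistics:
        V(d) = [1 + (1 - q) k d] ** (-1 / (1 - q)),
    which recovers exponential discounting in the limit q -> 1 and
    simple hyperbolic discounting at q = 0."""
    if abs(1.0 - q) < 1e-9:
        return math.exp(-k * d)
    return (1.0 + (1.0 - q) * k * d) ** (-1.0 / (1.0 - q))

def aicc(rss, n, n_params):
    """AIC with small-sample correction for a least-squares fit with
    residual sum of squares rss over n indifference points:
        AICc = n ln(rss / n) + 2k + 2k(k + 1) / (n - k - 1)."""
    k = n_params
    return n * math.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)
```

Fitting each model to the seven indifference points and comparing AICc values, as the abstract describes, then amounts to a small residual-sum-of-squares minimization per model followed by a comparison of the corrected criteria.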