A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.
Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang
2017-01-01
Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time series. Our method is a significant improvement over traditional change point detection methods, which examine a potential anomaly only at a single time point. In contrast, our method considers all suspected anomaly points and the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method on three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.
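The abstract above contrasts its joint treatment of candidate anomalies with conventional point-wise detection. As a minimal point-wise baseline for comparison (a generic mean-shift scorer, not the authors' algorithm), one can score every candidate split of a series by the scaled gap between segment means:

```python
import numpy as np

def best_change_point(x):
    """Return the index that best splits x into two segments with
    differing means, scored by a pooled two-sample statistic.
    A minimal point-wise baseline, not the paper's joint method."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_k, best_score = None, -np.inf
    for k in range(2, n - 2):
        a, b = x[:k], x[k:]
        # a larger (scaled) gap between segment means => stronger change point
        score = abs(a.mean() - b.mean()) * np.sqrt(k * (n - k) / n)
        if score > best_score:
            best_k, best_score = k, score
    return best_k

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
print(best_change_point(x))  # near the true change at index 100
```

On short, noisy series this point-wise score becomes unreliable, which is the failure mode the paper's joint distribution over change point counts and inter-anomaly times is designed to address.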
Change Point Detection in Correlation Networks
NASA Astrophysics Data System (ADS)
Barnett, Ian; Onnela, Jukka-Pekka
2016-01-01
Many systems of interacting elements can be conceptualized as networks, where network nodes represent the elements and network ties represent interactions between the elements. In systems where the underlying network evolves, it is useful to determine the points in time where the network structure changes significantly as these may correspond to functional change points. We propose a method for detecting change points in correlation networks that, unlike previous change point detection methods designed for time series data, requires minimal distributional assumptions. We investigate the difficulty of change point detection near the boundaries of the time series in correlation networks and study the power of our method and competing methods through simulation. We also show the generalizable nature of the method by applying it to stock price data as well as fMRI data.
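A simple way to picture change point detection in correlation networks is to compare correlation matrices estimated in adjacent sliding windows. The sketch below (an illustrative baseline, not the authors' distribution-free test) scores each interior time point by the Frobenius distance between the two windows' correlation matrices:

```python
import numpy as np

def correlation_change_scores(data, window):
    """Slide a window over a multivariate time series `data` (T x p) and
    score each interior time point by the Frobenius distance between the
    correlation matrices of the two adjacent windows. A minimal sketch;
    the paper's method additionally handles boundary effects and assesses
    significance with minimal distributional assumptions."""
    T = data.shape[0]
    scores = np.full(T, np.nan)
    for t in range(window, T - window):
        left = np.corrcoef(data[t - window:t].T)
        right = np.corrcoef(data[t:t + window].T)
        scores[t] = np.linalg.norm(left - right, 'fro')
    return scores

rng = np.random.default_rng(0)
# first half: independent columns; second half: strongly correlated columns
a = rng.normal(size=(100, 3))
z = rng.normal(size=(100, 1))
b = z + 0.1 * rng.normal(size=(100, 3))
scores = correlation_change_scores(np.vstack([a, b]), window=40)
print(np.nanargmax(scores))  # peaks near the change at index 100
```

Note how the score is undefined within one window of the boundaries, which illustrates the boundary difficulty the abstract mentions.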
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
2016-06-15
Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurements for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or the presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinically acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated with respect to root-mean-squared error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent performance despite the introduced occlusions.
Conclusion: We have developed a real-time and robust surface reconstruction method on point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
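The core of the SR model is approximating a (vectorized, ICP-aligned) target point cloud as a sparse linear combination of training clouds. A generic lasso-style sketch of that step, solved with plain-NumPy ISTA (iterative soft-thresholding) on synthetic vectors rather than real point clouds:

```python
import numpy as np

def sparse_combination(D, y, lam=0.1, iters=500):
    """Approximate target vector y as a sparse combination D @ w of
    training columns D, via ISTA for the lasso objective
    0.5*||Dw - y||^2 + lam*||w||_1. A generic stand-in for the SR model
    (the real pipeline first builds correspondences with ICP)."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ w - y)                  # gradient of the data-fit term
        w = w - g / L
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
    return w

rng = np.random.default_rng(0)
D = rng.normal(size=(200, 20))                 # 20 "training clouds", vectorized
w_true = np.zeros(20)
w_true[[3, 7]] = [1.0, -0.5]
y = D @ w_true + 0.01 * rng.normal(size=200)
w = sparse_combination(D, y)
print(np.flatnonzero(np.abs(w) > 0.1))         # recovers the active training clouds
```

The MSR variant would additionally model large, sparse ICP residuals with a Laplacian (L1) error term; that extension is omitted here for brevity.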
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Cheung, Yam; Sawant, Amit
2016-05-15
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) method are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude, to a subsecond reconstruction time.
On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantification.
A wavefront orientation method for precise numerical determination of tsunami travel time
NASA Astrophysics Data System (ADS)
Fine, I. V.; Thomson, R. E.
2013-04-01
We present a highly accurate and computationally efficient method (herein, the "wavefront orientation method") for determining the travel time of oceanic tsunamis. Based on Huygens' principle, the method uses an eight-point grid-point pattern and the most recent information on the orientation of the advancing wave front to determine the time for a tsunami to travel to a specific oceanic location. The method is shown to provide improved accuracy and reduced anisotropy compared with the conventional multiple grid-point method presently in widespread use.
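The "conventional multiple grid-point method" that the paper improves on is essentially a shortest-path travel time computation over the eight neighbours of each grid cell. A minimal Dijkstra-based sketch of that baseline (the wavefront orientation method adds a correction for the local front geometry, which is not reproduced here):

```python
import heapq
import math

def grid_travel_time(speed, src):
    """Travel times from source cell `src` on a uniform grid, by Dijkstra
    over the eight neighbours of each cell. Link cost is distance divided
    by the mean local wave speed. This is the conventional eight-point
    baseline; it overestimates diagonal-adjacent paths (anisotropy)."""
    ny, nx = len(speed), len(speed[0])
    t = [[float('inf')] * nx for _ in range(ny)]
    t[src[0]][src[1]] = 0.0
    pq = [(0.0, src[0], src[1])]
    while pq:
        d, i, j = heapq.heappop(pq)
        if d > t[i][j]:
            continue
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    step = math.hypot(di, dj)
                    nd = d + step * 2.0 / (speed[i][j] + speed[ni][nj])
                    if nd < t[ni][nj]:
                        t[ni][nj] = nd
                        heapq.heappush(pq, (nd, ni, nj))
    return t

t = grid_travel_time([[1.0] * 5 for _ in range(5)], (0, 0))
print(round(t[0][4], 3), round(t[4][4], 3))  # 4.0 along the edge, 4*sqrt(2) on the diagonal
```

Because paths are restricted to the eight grid directions, travel times at intermediate angles are systematically overestimated; that directional bias is the anisotropy the wavefront orientation method reduces.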
Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease
Jie, Biao; Liu, Mingxia; Liu, Jun
2016-01-01
Sparse learning has been widely investigated for the analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies adopt only cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. In practice, multiple time-points of data are often available in brain imaging applications, and longitudinal analysis methods can use them to better uncover disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model using the imaging data from multiple time-points, where a group regularization term is employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function: a fused smoothness term requiring that the differences between two successive weight vectors from adjacent time-points be small, and an output smoothness term requiring that the differences between the outputs of two successive models from adjacent time-points also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on the ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method achieves improved regression performance and also helps in discovering disease-related biomarkers. PMID:27093313
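The fused smoothness idea, tying weight vectors at adjacent time-points together, can be sketched with a smooth surrogate. The example below fits per-time-point least-squares weights with a squared fused penalty mu*||w_t - w_{t-1}||^2 by gradient descent; the paper's actual objective additionally includes group L2,1 sparsity and an output smoothness term, which are omitted here for brevity:

```python
import numpy as np

def temporally_smooth_regression(Xs, ys, mu=1.0, lr=1e-3, iters=2000):
    """Fit one weight vector per time point, penalizing differences
    between weights at adjacent time points. Smooth surrogate for the
    paper's temporally-constrained objective (group sparsity omitted)."""
    T, p = len(Xs), Xs[0].shape[1]
    W = np.zeros((T, p))
    for _ in range(iters):
        G = np.zeros_like(W)
        for t in range(T):
            G[t] = Xs[t].T @ (Xs[t] @ W[t] - ys[t])   # data-fit gradient
        G[1:] += mu * (W[1:] - W[:-1])                 # fused smoothness
        G[:-1] += mu * (W[:-1] - W[1:])
        W -= lr * G
    return W

rng = np.random.default_rng(0)
w_true = np.array([[1.0, 0.0], [1.1, 0.0], [1.2, 0.0]])  # slowly drifting weights
Xs = [rng.normal(size=(80, 2)) for _ in range(3)]
ys = [X @ w for X, w in zip(Xs, w_true)]
W = temporally_smooth_regression(Xs, ys, mu=5.0)
print(np.round(W, 2))  # rows stay close to each other across time points
```

Larger mu pulls adjacent rows of W together, mirroring the assumption that disease-related weight patterns change smoothly between scans.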
Optical control of multi-stage thin film solar cell production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jian; Levi, Dean H.; Contreras, Miguel A.
2016-05-17
Embodiments include methods of depositing and controlling the deposition of a film in multiple stages. The disclosed deposition and deposition control methods include the optical monitoring of a deposition matrix to determine a time when at least one transition point occurs. In certain embodiments, the transition point or transition points are a stoichiometry point. Methods may also include controlling the length of time in which material is deposited during a deposition stage or controlling the amount of the first, second or subsequent materials deposited during any deposition stage in response to a determination of the time when a selected transition point occurs.
NASA Astrophysics Data System (ADS)
Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin
2018-03-01
Dynamic optimisation problems with characteristic times arise widely in many areas and are an active frontier of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving them. The formula for the state at the terminal time of each subdomain is derived, which results in a linear combination of the state at the LG points in the subdomains so as to avoid a complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic time dynamic optimisation problems are solved and compared in detail with methods reported in the literature. The results show the effectiveness of the proposed method.
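The "linear combination of the state at the LG points" comes from Gauss-Legendre quadrature: the terminal state of a subdomain is the initial state plus a weighted sum of the dynamics evaluated at the collocation points. A minimal sketch for the special case x' = f(t), where the quadrature can be applied directly (the full method handles state-dependent dynamics via collocation):

```python
import numpy as np

def lg_terminal_state(f, x0, t0, t1, n):
    """State at the end of a subdomain as x0 plus a Gauss-Legendre
    quadrature of the dynamics: x(t1) ~ x0 + sum_k w_k f(t_k).
    Sketch for x' = f(t) only; illustrates the linear-combination
    formula used to avoid nonlinear integrals."""
    nodes, weights = np.polynomial.legendre.leggauss(n)  # nodes on [-1, 1]
    half = 0.5 * (t1 - t0)
    t = t0 + half * (nodes + 1.0)                        # map nodes to [t0, t1]
    return x0 + half * np.sum(weights * f(t))

# x' = cos(t), x(0) = 0  =>  x(pi/2) = 1
x_end = lg_terminal_state(np.cos, 0.0, 0.0, np.pi / 2, 5)
print(x_end)  # ~1.0 to near machine precision with only 5 LG points
```

With only five LG points the quadrature is exact for polynomials up to degree nine, which is why pseudospectral methods achieve high accuracy with few collocation points per subdomain.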
NASA Astrophysics Data System (ADS)
Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing
Methods developed based on bifurcation theory have demonstrated their potential for driving network identification in complex human diseases, including the work by Chen, et al. Recently, bifurcation theory has been successfully applied to model cellular differentiation. However, driving network prediction faces a technical challenge there: a time course cellular differentiation study often contains only one sample at each time point, while driving network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We also discuss the advantages and limitations of our proposed methods, as well as potential further improvements.
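Rising lag-1 autocorrelation ("critical slowing down") is a standard bifurcation-theoretic indicator of an approaching transition, and is one of the signals the abstract examines. A minimal sliding-window sketch (a generic early-warning indicator, not the study's full pipeline):

```python
import numpy as np

def lag1_autocorr_series(x, window):
    """Sliding-window lag-1 autocorrelation. It tends toward 1 as a
    system approaches a bifurcation (critical slowing down), making it
    a simple early-warning indicator for a critical time point."""
    out = []
    for t in range(window, len(x) + 1):
        w = np.asarray(x[t - window:t], float)
        w = w - w.mean()
        denom = np.dot(w, w)
        out.append(np.dot(w[:-1], w[1:]) / denom if denom > 0 else 0.0)
    return np.array(out)

rng = np.random.default_rng(0)
# AR(1) process whose coefficient ramps from 0.1 to 0.95: slowing down over time
phi = np.linspace(0.1, 0.95, 600)
x = np.zeros(600)
for t in range(1, 600):
    x[t] = phi[t] * x[t - 1] + rng.normal()
ac = lag1_autocorr_series(x, window=150)
print(ac[0] < ac[-1])  # autocorrelation rises toward the transition
```

With only one sample per time point, as in the differentiation data, such window statistics computed along the time axis are one way to substitute for the missing replicate-based variance estimates.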
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Lam, William H. K.; Li, Qingquan
2017-01-01
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978
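The fusion step in the abstract uses Dempster-Shafer evidence theory to combine link-based and interval-detector evidence. A self-contained sketch of Dempster's rule of combination over a toy travel time frame (hypotheses and masses are invented for illustration):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets. Conflicting mass (empty intersections) is
    discarded and the rest renormalized."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# toy frame: travel time is "short" or "long"
short, long_, either = frozenset({"s"}), frozenset({"l"}), frozenset({"s", "l"})
m_point = {short: 0.6, either: 0.4}               # evidence from point detectors
m_interval = {short: 0.5, long_: 0.2, either: 0.3}  # evidence from interval detectors
fused = dempster_combine(m_point, m_interval)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```

In the paper the combined evidence concerns full travel time distributions rather than a two-element frame, but the combination rule itself has this same form.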
Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej
2015-01-01
The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined the flexion angles of the knee and hip joints using a computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points, with an arithmetic mean of 0.16 points, for the flexion of the knee joints. In the frame-by-frame video analysis method, the total amounted to 8.6 error points and the mean value to 0.24 error points. For the excessive flexion of the hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation. The sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through the frame-by-frame video analysis of movement technique were higher than those obtained through the real-time observation method. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Both the real-time observation method and high-speed video analysis performed without determining exact joint angles were found to be insufficient tools for improving the quality of judging.
Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan
The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show that 88 laboratories participated in quality control at up to 13 time points, using typically 37 to 54 histology samples. In meta-analysis across all time points, no laboratory had sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to the reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to the reference standard identified with Generalized Estimating Equation modeling also had reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with significantly lower performance to be targeted for advice.
A pseudospectral Legendre method for hyperbolic equations with an improved stability condition
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1986-01-01
A new pseudospectral method is introduced for solving hyperbolic partial differential equations. This method uses different grid points than previously used pseudospectral methods: the grid points are related to the zeroes of the Legendre polynomials. The main advantage of this method is that the allowable time step is proportional to the inverse of the number of grid points, 1/N, rather than to 1/N^2 (as in the case of other pseudospectral methods applied to mixed initial boundary value problems). A highly accurate time discretization suitable for these spectral methods is discussed.
A novel method for vaginal cylinder treatment planning: a seamless transition to 3D brachytherapy
Wu, Vincent; Wang, Zhou; Patil, Sachin
2012-01-01
Purpose Standard treatment plan libraries are often used to ensure a quick turn-around time for vaginal cylinder treatments. Recently there is increasing interest in transitioning from conventional 2D radiograph based brachytherapy to 3D image based brachytherapy, which has resulted in a substantial increase in treatment planning time and a decrease in patient throughput. We describe a novel technique that significantly reduces the treatment planning time for CT-based vaginal cylinder brachytherapy. Material and methods The Oncentra MasterPlan TPS allows multiple sets of data points to be classified as applicator points, a feature that is harnessed in this method. The method relies on two hard anchor points, the first dwell position in a catheter and an applicator configuration specific dwell position as the plan origin, and a soft anchor point beyond the last active dwell position to define the axis of the catheter. The spatial locations of various data points on the applicator's surface and at 5 mm depth are stored in an Excel file that can easily be transferred into a patient CT data set using window operations and then used for treatment planning. The remainder of the treatment planning process remains unaffected. Results The treatment plans generated on the Oncentra MasterPlan TPS using this novel method yielded results comparable to those generated on the Plato TPS using a standard treatment plan library in terms of treatment times, dwell weights and dwell times for a given optimization method and normalization points. Less than 2% difference was noticed between the treatment times generated by the two systems. Using the above method, the entire planning process, including CT importing, catheter reconstruction, multiple data point definition, optimization and dose prescription, can be completed in ~5–10 minutes. Conclusion The proposed method allows a smooth and efficient transition to 3D CT based vaginal cylinder brachytherapy planning. PMID:23349650
NASA Technical Reports Server (NTRS)
Zhou, Wei
1993-01-01
In high-accuracy measurement of periodic signals, the greatest common factor frequency and its characteristics have special functions. A method of time difference measurement, based on dual 'phase coincidence point' detection, is described. This method utilizes the characteristics of the greatest common factor frequency to measure the time or phase difference between periodic signals. It is suitable for a very wide frequency range. Measurement precision and potential accuracy of several picoseconds were demonstrated with this new method. The instrument based on this method is very simple, and the demands on the common oscillator are low. This method and instrument can be used widely.
Accurate pointing of tungsten welding electrodes
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1971-01-01
Thoriated tungsten is pointed accurately and quickly by using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.
Determining postural stability
NASA Technical Reports Server (NTRS)
Forth, Katharine E. (Inventor); Paloski, William H. (Inventor); Lieberman, Erez (Inventor)
2011-01-01
A method for determining postural stability of a person can include acquiring a plurality of pressure data points over a period of time from at least one pressure sensor. The method can also include the step of identifying a postural state for each pressure data point to generate a plurality of postural states. The method can include the step of determining a postural state of the person at a point in time based on at least the plurality of postural states.
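The patent's pipeline — acquire pressure samples, label each with a postural state, then aggregate the labels over a period — can be sketched directly. The thresholds and state names below are illustrative assumptions, not values from the patent:

```python
from collections import Counter

def postural_state(pressure, low=20.0, high=60.0):
    """Label a single pressure reading; thresholds are illustrative."""
    if pressure < low:
        return "unloaded"
    return "stable" if pressure < high else "overloaded"

def dominant_state(samples):
    """Classify each pressure data point, then report the person's state
    over the period as the most frequent label: a minimal reading of the
    acquire-label-aggregate pipeline described above."""
    states = [postural_state(p) for p in samples]
    return Counter(states).most_common(1)[0][0]

print(dominant_state([30, 35, 70, 40, 45, 18, 50]))  # stable
```

A real implementation would replace the majority vote with a stability metric that weighs transient excursions, but the per-point labeling followed by aggregation is the structural idea.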
Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher
2012-01-01
Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
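Of the four families listed, control chart methods are the easiest to illustrate compactly. Below is a standard two-sided CUSUM chart applied to a synthetic alliance-like score series with a sustained drop (the data and parameters are illustrative, not from the Beth Israel case example):

```python
import numpy as np

def cusum_alarm(x, target, k=0.5, h=8.0):
    """Two-sided CUSUM control chart: return the first index at which
    either cumulative sum exceeds threshold h, or None. k (slack) and h
    (decision threshold) are in units of the process standard deviation."""
    s_hi = s_lo = 0.0
    for i, v in enumerate(np.asarray(x, float)):
        s_hi = max(0.0, s_hi + (v - target) - k)   # accumulates upward drift
        s_lo = max(0.0, s_lo + (target - v) - k)   # accumulates downward drift
        if s_hi > h or s_lo > h:
            return i
    return None

rng = np.random.default_rng(0)
# stable sessions around 5, then a sustained drop to 3 (a "rupture")
x = np.concatenate([rng.normal(5, 1, 60), rng.normal(3, 1, 40)])
print(cusum_alarm(x, target=5.0))  # alarms shortly after the shift at index 60
```

Unlike a point-wise test, the CUSUM accumulates small deviations across sessions, so it flags sustained shifts in the alliance even when no single session is individually extreme.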
An accelerated hologram calculation using the wavefront recording plane method and wavelet transform
NASA Astrophysics Data System (ADS)
Arai, Daisuke; Shimobaba, Tomoyoshi; Nishitsuji, Takashi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi
2017-06-01
Fast hologram calculation methods are critical in real-time holography applications such as three-dimensional (3D) displays. We recently proposed a wavelet transform-based hologram calculation called WASABI. Even though WASABI can decrease the calculation time of a hologram from a point cloud, its calculation time increases with increasing propagation distance. We also proposed a wavefront recording plane (WRP) method. This is a two-step fast hologram calculation in which the first step calculates the superposition of light waves emitted from a point cloud in a virtual plane, and the second step performs a diffraction calculation from the virtual plane to the hologram plane. A drawback of the WRP method arises in the first step when the point cloud has a large number of object points and/or a long distribution in the depth direction. In this paper, we propose a method combining WASABI and the WRP method in which the drawbacks of each can be complementarily solved. Using a consumer CPU, the proposed method succeeded in performing a hologram calculation with 2048 × 2048 pixels from a 3D object with one million points in approximately 0.4 s.
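The first step of the WRP method, superposing spherical waves from the object points onto a small virtual plane near the object, can be sketched as a direct scalar-wave sum. Grid sizes, wavelength, and point positions below are illustrative; the second step (plane-to-hologram diffraction, which WASABI would accelerate) is omitted:

```python
import numpy as np

def wrp_field(points, amplitudes, wavelength, grid_x, grid_y, z_wrp):
    """Step 1 of the WRP method: superpose spherical waves emitted by
    object points onto a virtual plane at height z_wrp. Monochromatic
    scalar-wave sketch; the plane is kept small and close to the object
    so this step stays cheap."""
    k = 2 * np.pi / wavelength
    X, Y = np.meshgrid(grid_x, grid_y)
    field = np.zeros(X.shape, dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + (z_wrp - pz) ** 2)
        field += a * np.exp(1j * k * r) / r    # spherical wave from each point
    return field

x = np.linspace(-1e-3, 1e-3, 64)               # 2 mm virtual plane, 64 x 64 samples
points = [(0.0, 0.0, 0.0), (2e-4, 0.0, 0.0)]   # two object points
field = wrp_field(points, [1.0, 1.0], 633e-9, x, x, z_wrp=5e-3)
print(field.shape)
```

The cost of this step grows with the number of object points and their depth spread, which is exactly the drawback the combined WASABI-WRP method targets.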
A General Method for Solving Systems of Non-Linear Equations
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)
1995-01-01
The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root finding method not infrequently diverges if the starting point is far from the root. However, the current method in these regions merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root finding method, since they both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations. The current method, which does not require the solution of linear equations, requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
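The general idea of pairing a fast local method with a globally safe descent fallback can be sketched as follows. This is a generic safeguarded Newton scheme in the spirit of the abstract, not the paper's eigenvector construction (which specifically avoids linear solves):

```python
import numpy as np

def hybrid_solve(F, J, x0, tol=1e-10, max_iter=100):
    """Solve F(x) = 0: try the Newton step, but fall back to steepest
    descent on 0.5*||F||^2 with a halving line search whenever Newton
    would fail to reduce the residual."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            return x
        Jx = J(x)
        try:
            step = np.linalg.solve(Jx, -f)      # Newton direction
        except np.linalg.LinAlgError:
            step = -Jx.T @ f                    # gradient of 0.5*||F||^2
        alpha = 1.0
        while np.linalg.norm(F(x + alpha * step)) >= np.linalg.norm(f):
            alpha *= 0.5
            if alpha < 1e-12:                   # no progress: steepest descent
                step, alpha = -Jx.T @ f, 1e-3
                break
        x = x + alpha * step
    return x

# intersect the circle x^2 + y^2 = 4 with the line x = y
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
root = hybrid_solve(F, J, [3.0, 1.0])
print(np.round(root, 6))  # both components ~ sqrt(2)
```

Like the paper's method, this scheme converges quadratically near the root while remaining robust far from it; unlike the paper's method, it pays for that robustness with a linear solve per iteration.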
A travel time forecasting model based on change-point detection method
NASA Astrophysics Data System (ADS)
LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei
2017-06-01
Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model based on change-point detection is proposed for urban road traffic sensor data. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the long sequence of travel time data into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
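A rough illustration of the pipeline (first-order differencing, change-point segmentation, then a weighted fit of the most recent segment) can be sketched in a few lines. The threshold, the synthetic data, and the weighted-line forecast below are illustrative stand-ins, not the paper's ARIMA-based model.

```python
# Hedged sketch: difference-based change-point scan plus a weighted linear
# extrapolation of the last homogeneous segment. All values are synthetic.

def change_points(series, threshold):
    """Indices where the first difference jumps by more than `threshold`."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    return [i + 1 for i, d in enumerate(diffs) if abs(d) > threshold]

def forecast_next(series, cps):
    """Weighted least-squares line through the last segment, one step ahead."""
    start = cps[-1] if cps else 0
    seg = series[start:]
    n = len(seg)
    if n == 1:
        return seg[0]
    # Linearly increasing weights favour the most recent observations.
    w = [i + 1 for i in range(n)]
    xs = list(range(n))
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    my = sum(wi * yi for wi, yi in zip(w, seg)) / sw
    num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, xs, seg))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, xs))
    slope = num / den if den else 0.0
    return my + slope * (n - mx)

# Synthetic travel times (seconds): free flow, then congestion after t = 10.
times = [60.0, 61.0, 59.5, 60.5, 60.0, 60.2, 59.8, 60.1, 60.3, 59.9,
         95.0, 96.0, 94.5, 95.5, 96.5, 95.8]
cps = change_points(times, threshold=10.0)
pred = forecast_next(times, cps)
```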
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically achieved with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
Bifurcated method and apparatus for floating point addition with decreased latency time
Farmwald, Paul M.
1987-01-01
Apparatus for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.
Cross-correlation of point series using a new method
NASA Technical Reports Server (NTRS)
Strothers, Richard B.
1994-01-01
Traditional methods of cross-correlation of two time series do not apply to point time series. Here, a new method, devised specifically for point series, utilizes a correlation measure that is based on the rms difference (or, alternatively, the median absolute difference) between nearest neighbors in overlapped segments of the two series. Error estimates for the observed locations of the points, as well as a systematic shift of one series with respect to the other to accommodate a constant, but unknown, lead or lag, are easily incorporated into the analysis using Monte Carlo techniques. A methodological restriction adopted here is that one series is treated as a template series against which the other, called the target series, is cross-correlated. To estimate a significance level for the correlation measure, the adopted alternative (null) hypothesis is that the target series arises from a homogeneous Poisson process. The new method is applied to cross-correlating the times of the greatest geomagnetic storms with the times of maxima in the undecennial solar activity cycle.
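The nearest-neighbour rms measure can be sketched as a simple grid search over candidate lags: shift the target series, score the shifted series against the template, and keep the lag with the smallest rms difference. The point series and step size below are invented for illustration.

```python
# Illustrative sketch of the nearest-neighbour rms correlation measure.
import math

def rms_nn(template, target):
    """Rms of distances from each target point to its nearest template point."""
    total = sum(min((t - s) ** 2 for s in template) for t in target)
    return math.sqrt(total / len(target))

def best_lag(template, target, lags):
    """Candidate lag minimising the rms nearest-neighbour difference."""
    scores = {lag: rms_nn(template, [t + lag for t in target]) for lag in lags}
    return min(scores, key=scores.get)

template = [2.0, 5.0, 9.0, 14.0, 20.0]           # template event times
true_lag = 3.0
target = [t - true_lag + 0.05 for t in template]  # lagged, small clock offset

lags = [l / 10.0 for l in range(-50, 51)]         # candidate lags -5.0 … +5.0
lag = best_lag(template, target, lags)
```

The Monte Carlo significance test described in the abstract would repeat this search on target series drawn from a homogeneous Poisson process.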
Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2016-06-01
We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V
2003-12-15
Reducing water to hydrogen gas with zinc or uranium metal to determine the D/H ratio is both tedious and time-consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both the time and labor required for D/H ratio determination. In this study, we compared TEE obtained from nine overweight but healthy subjects, estimated using the traditional Zn-reduction method, to that obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained with the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared to that of similar methods. The data demonstrate that the Zn-reduction method can be replaced by the Pt-equilibration method when TEE is estimated using the "multi-point" technique. Furthermore, D equilibration time was significantly reduced.
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
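For reference, the full-history Grünwald-Letnikov sum (the quantity the adaptive-memory method approximates with fewer history points) can be written compactly. This sketch is the baseline the paper accelerates, not the authors' adaptive scheme; it is checked against the known result D^0.5 t = t^0.5 / Γ(1.5).

```python
# Baseline Grünwald-Letnikov fractional derivative with the full history
# retained; an adaptive-memory variant would subsample older time points.
import math

def gl_derivative(f, t, alpha, h):
    """Grünwald-Letnikov approximation of D^alpha f at time t with step h."""
    n = int(round(t / h))
    w = 1.0  # w_0 = 1; recurrence w_k = w_{k-1} * (k - 1 - alpha) / k
    total = w * f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k
        total += w * f(t - k * h)  # every previous time point contributes
    return total / h ** alpha

approx = gl_derivative(lambda t: t, 1.0, 0.5, 0.001)
exact = 1.0 / math.gamma(1.5)  # exact D^0.5 of f(t) = t at t = 1
```

The loop over all n history points is exactly the non-local cost the abstract describes; the adaptive method keeps the sum's span but with progressively sparser sampling backward in time.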
Precise determination of time to reach viral load set point after acute HIV-1 infection.
Huang, Xiaojie; Chen, Hui; Li, Wei; Li, Haiying; Jin, Xia; Perelson, Alan S; Fox, Zoe; Zhang, Tong; Xu, Xiaoning; Wu, Hao
2012-12-01
The HIV viral load set point has long been used as a prognostic marker of disease progression and more recently as an end-point parameter in HIV vaccine clinical trials. The definition of set point, however, is variable. Moreover, the earliest time at which the set point is reached after the onset of infection has never been clearly defined. In this study, we obtained sequential plasma viral load data from 60 acutely HIV-infected Chinese patients in a cohort of men who have sex with men, mathematically determined viral load set point levels, and estimated the time to attain the set point after infection. We also compared the results derived from our models with those obtained from an empirical method. With a novel, uncomplicated mathematical model, we discovered that the time to reach the set point may vary from 21 to 119 days, depending on the patient's initial viral load trajectory. The viral load set points were 4.28 ± 0.86 and 4.25 ± 0.87 log10 copies per milliliter (P = 0.08), respectively, as determined by our model and an empirical method, suggesting an excellent agreement between the old and new methods. We provide a novel method to estimate the viral load set point at the very early stage of HIV infection. Application of this model can accurately and reliably determine the set point, thus providing a new tool for physicians to better monitor early intervention strategies in acutely infected patients and for scientists to rationally design preventative vaccine studies.
A point implicit time integration technique for slow transient flow problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.
2015-05-01
We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, and the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
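The pointwise-implicit idea can be made concrete on a scalar model problem du/dt = -λu + s(t): the stiff local term is inverted in closed form at each point, so no implicit iteration is needed, while the source is evaluated explicitly. The model problem and names are illustrative, not taken from the paper.

```python
# Minimal sketch of a point-implicit update: the stiff decay term is
# treated implicitly (solved pointwise in closed form), the source
# explicitly. Stable even when dt * lam >> 1, unlike forward Euler.

def point_implicit_march(u0, lam, source, dt, steps):
    u = u0
    history = [u]
    for n in range(steps):
        t = n * dt
        # (u_new - u) / dt = -lam * u_new + s(t)  =>  closed-form solve
        u = (u + dt * source(t)) / (1.0 + dt * lam)
        history.append(u)
    return history

lam = 1.0e4                 # stiff decay rate
src = lambda t: 1.0e4       # constant forcing; steady state is u = 1
hist = point_implicit_march(0.0, lam, src, dt=0.1, steps=50)
```

Here dt * lam = 1000, far beyond the explicit stability limit, yet the solution marches smoothly to the steady state.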
A Fourier collocation time domain method for numerically solving Maxwell's equations
NASA Technical Reports Server (NTRS)
Shebalin, John V.
1991-01-01
A new method for solving Maxwell's equations in the time domain for arbitrary values of permittivity, conductivity, and permeability is presented. Spatial derivatives are found by a Fourier transform method and time integration is performed using a second order, semi-implicit procedure. Electric and magnetic fields are collocated on the same grid points, rather than on interleaved points, as in the Finite Difference Time Domain (FDTD) method. Numerical results are presented for the propagation of a 2-D Transverse Electromagnetic (TEM) mode out of a parallel plate waveguide and into a dielectric and conducting medium.
Simultaneous Detection and Tracking of Pedestrian from Panoramic Laser Scanning Data
NASA Astrophysics Data System (ADS)
Xiao, Wen; Vallet, Bruno; Schindler, Konrad; Paparoditis, Nicolas
2016-06-01
Pedestrian traffic flow estimation is essential for public place design and construction planning. Traditional data collection by human investigation is tedious, inefficient and expensive. Panoramic laser scanners, e.g. the Velodyne HDL-64E, which scan their surroundings repetitively at a high frequency, have been increasingly used for 3D object tracking. In this paper, a simultaneous detection and tracking (SDAT) method is proposed for precise and automatic pedestrian trajectory recovery. First, the dynamic environment is detected using two different methods, Nearest-point and Max-distance. Then, all the points on moving objects are transferred into a space-time (x, y, t) coordinate system. Pedestrian detection and tracking then amounts to assigning the points belonging to pedestrians to continuous trajectories in space-time. We formulate the point assignment task as an energy function which incorporates the point evidence, trajectory number, pedestrian shape and motion. A low-energy trajectory explains the point observations well and has a plausible trend and length. The method inherently filters out points from other moving objects and false detections. The energy function is solved by a two-step optimization process: tracklet detection in a short temporal window, and global tracklet association through the whole time span. Results demonstrate that the proposed method can automatically recover pedestrian trajectories with accurate positions and few false detections and mismatches.
NMR diffusion simulation based on conditional random walk.
Gudbjartsson, H; Patz, S
1995-01-01
The authors introduce here a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR diffusion simulation methods, such as the finite difference (FD) method, the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, whereas in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
He, Feng; Zeng, An-Ping
2006-01-01
Background The increasing availability of time-series expression data opens up new possibilities to study functional linkages of genes. Present methods used to infer functional linkages between genes from expression data are mainly based on point-to-point comparison. Change trends between consecutive time points in time-series data have so far not been well explored. Results In this work we present a new method based on extracting main features of the change trend and level of gene expression between consecutive time points. The method, termed trend correlation (TC), includes two major steps: (1) calculating a maximal local alignment of change trend score by dynamic programming and a change trend correlation coefficient between the maximal matched change levels of each gene pair; (2) inferring relationships of gene pairs based on two statistical extraction procedures. The new method considers time shifts and inverted relationships in a similar way as the local clustering (LC) method, but the latter is merely based on a point-to-point comparison. The TC method is demonstrated with data from the yeast cell cycle and compared with the LC method and the widely used Pearson correlation coefficient (PCC) based clustering method. The biological significance of the gene pairs is examined with several large-scale yeast databases. Although the TC method predicts an overall lower number of gene pairs than the other two methods at the same p-value threshold, the additional number of gene pairs inferred by the TC method is considerable: e.g. 20.5% compared with the LC method and 49.6% with the PCC method for a p-value threshold of 2.7E-3. Moreover, the percentage of the inferred gene pairs consistent with databases by our method is generally higher than the LC method and similar to the PCC method.
A significant number of the gene pairs only inferred by the TC method are process-identity or function-similarity pairs or have well-documented biological interactions, including 443 known protein interactions and some known cell cycle related regulatory interactions. It should be emphasized that the overlap of gene pairs detected by the three methods is normally not very high, indicating the necessity of combining the different methods in the search for functional associations of genes from time-series data. For a p-value threshold of 1E-5 the percentage of process-identity and function-similarity gene pairs among the shared part of the three methods reaches 60.2% and 55.6% respectively, building a good basis for further experimental and functional study. Furthermore, the combined use of methods is important to infer more complete regulatory circuits and networks, as exemplified in this study. Conclusion The TC method can significantly augment the current major methods to infer functional linkages and biological networks and is well suited for exploring temporal relationships of gene expression in time-series data. PMID:16478547
Experimental results for the rapid determination of the freezing point of fuels
NASA Technical Reports Server (NTRS)
Mathiprakasam, B.
1984-01-01
Two methods for the rapid determination of the freezing point of fuels were investigated: an optical method, which detected the change in light transmission from the disappearance of solid particles in the melted fuel; and a differential thermal analysis (DTA) method, which sensed the latent heat of fusion. A laboratory apparatus was fabricated to test the two methods. Cooling was done by thermoelectric modules using an ice-water bath as a heat sink. The DTA method was later modified to eliminate the reference fuel. The data from the sample were digitized and a point of inflection, which corresponds to the ASTM D-2386 freezing point (final melting point), was identified from the derivative. The apparatus was modified to cool the fuel to -60 C, and controls were added for maintaining a constant cooling rate, rewarming rate, and hold time at minimum temperature. A parametric series of tests was run for twelve fuels with freezing points from -10 C to -50 C, varying the cooling rate, rewarming rate, and hold time. Based on the results, an optimum test procedure was established. The results showed good agreement with ASTM D-2386 freezing point and differential scanning calorimetry results.
Healy, R.W.; Russell, T.F.
1992-01-01
A finite-volume Eulerian-Lagrangian local adjoint method for solution of the advection-dispersion equation is developed and discussed. The method is mass conservative and can solve advection-dominated ground-water solute-transport problems accurately and efficiently. An integrated finite-difference approach is used in the method. A key component of the method is that the integral representing the mass-storage term is evaluated numerically at the current time level. Integration points, and the mass associated with these points, are then forward tracked up to the next time level. The number of integration points required to reach a specified level of accuracy is problem dependent and increases as the sharpness of the simulated solute front increases. Integration points are generally equally spaced within each grid cell. For problems involving variable coefficients it has been found to be advantageous to include additional integration points at strategic locations in each cell. These locations are determined by backtracking. Forward tracking of boundary fluxes by the method alleviates problems that are encountered in the backtracking approaches of most characteristic methods. A test problem is used to illustrate that the new method offers substantial advantages over other numerical methods for a wide range of problems.
A Novel Real-Time Reference Key Frame Scan Matching Method.
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-05-07
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping approach, using either local or global methods. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF), a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm falls back on the iterative closest point algorithm when linear features are lacking, as is typical in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computation times, indicating its potential for use in real-time systems.
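The point-to-point fallback mentioned above is the classical ICP loop. A translation-only 2D sketch (not the RKF algorithm itself, which adds feature matching and a key-frame reference on top) looks like this; the point sets are invented for illustration.

```python
# Translation-only 2D ICP sketch: alternate nearest-neighbour association
# and a closed-form translation update (the mean residual).

def icp_translation(reference, scan, iters=20):
    """Estimate the (tx, ty) that aligns `scan` onto `reference`."""
    tx = ty = 0.0
    for _ in range(iters):
        residuals = []
        for (x, y) in scan:
            px, py = x + tx, y + ty
            # Associate the shifted scan point with its nearest reference point.
            nx, ny = min(reference,
                         key=lambda r: (r[0] - px) ** 2 + (r[1] - py) ** 2)
            residuals.append((nx - px, ny - py))
        # For pure translation, the least-squares update is the mean residual.
        dx = sum(r[0] for r in residuals) / len(residuals)
        dy = sum(r[1] for r in residuals) / len(residuals)
        tx, ty = tx + dx, ty + dy
        if abs(dx) < 1e-9 and abs(dy) < 1e-9:
            break
    return tx, ty

ref = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.5), (3.0, 1.5), (4.0, 3.0)]
scan = [(x - 0.4, y + 0.3) for (x, y) in ref]  # ref displaced by (-0.4, +0.3)
tx, ty = icp_translation(ref, scan)
```

A full scan matcher would also estimate rotation (e.g. via an SVD of the centered correspondences) and reject outlier associations, which is exactly where the feature-based RKF step helps.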
NASA Astrophysics Data System (ADS)
Klaas, Dua K. S. Y.; Imteaz, Monzur Alam
2017-09-01
A robust configuration of pilot points in the parameterisation step of a model is crucial to obtaining a satisfactory model performance. However, the recommendations provided by the majority of recent researchers on pilot-point use are considered somewhat impractical. In this study, a practical approach is proposed for using pilot-point properties (i.e. number, distance and distribution method) in the calibration step of a groundwater model. For the first time, the relative distance-area ratio (d/A) and the head-zonation-based (HZB) method are introduced, to assign pilot points to the model domain by incorporating a user-friendly zone ratio. This study provides some insights into the trade-off between maximising and restricting the number of pilot points, and offers a relative basis for selecting the pilot-point properties and distribution method in the development of a physically based groundwater model. The grid-based (GB) method is found to perform comparably better than the HZB method in terms of model performance and computational time. When using the GB method, this study recommends a distance-area ratio (d/A) of 0.05, a distance-to-x-grid-length ratio (d/Xgrid) of 0.10, and a distance-to-y-grid-length ratio (d/Ygrid) of 0.20.
A pseudospectral Legendre method for hyperbolic equations with an improved stability condition
NASA Technical Reports Server (NTRS)
Tal-Ezer, H.
1984-01-01
A new pseudospectral method is introduced for solving hyperbolic partial differential equations. This method uses different grid points than previously used pseudospectral methods: the grid points are related to the zeros of the Legendre polynomials. The main advantage of this method is that the allowable time step is proportional to the inverse of the number of grid points, 1/N, rather than to 1/N^2 (as in the case of other pseudospectral methods applied to mixed initial-boundary value problems). A highly accurate time discretization suitable for these spectral methods is also discussed.
A time-efficient algorithm for implementing the Catmull-Clark subdivision method
NASA Astrophysics Data System (ADS)
Ioannou, G.; Savva, A.; Stylianou, V.
2015-10-01
Splines are the most popular methods in figure modeling and CAGD (Computer Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and splines calculate the required number of points which, when displayed on a computer screen, appear as a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures, such as human and animal bodies, whose complex structure does not allow them to be defined by a regular rectangular grid. Surface subdivision methods, by contrast, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that, during the last fifteen years, subdivision methods have taken the lead over regular spline methods in all areas of modeling, in both industry and research. The cost of software that reads control points and calculates the surface lies in its run time, because the surface structure required for handling arbitrary topological grids is very complicated. Many software programs implementing subdivision surfaces have been developed; however, few algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. Catmull-Clark, the most popular of the subdivision methods, is employed to illustrate the algorithm.
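One Catmull-Clark iteration on a closed quad mesh (boundary rules omitted) can be sketched in well under a hundred lines; this is a didactic implementation of the standard face-point/edge-point/vertex-point rules, not the paper's time-efficient algorithm.

```python
# Didactic sketch of one Catmull-Clark step on a closed quad mesh.

def catmull_clark(vertices, faces):
    def avg(points):
        n = len(points)
        return tuple(sum(c) / n for c in zip(*points))

    # 1. Face points: the centroid of each face.
    face_pts = [avg([vertices[i] for i in f]) for f in faces]

    # 2. Edges and the faces adjacent to each edge.
    edge_faces = {}
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces.setdefault(frozenset((a, b)), []).append(fi)

    # 3. Edge points: average of the edge endpoints and adjacent face points.
    edge_pts = {e: avg([vertices[v] for v in e] + [face_pts[fi] for fi in adj])
                for e, adj in edge_faces.items()}

    # 4. Repositioned original vertices: (F + 2R + (n - 3) P) / n.
    moved = []
    for vi, p in enumerate(vertices):
        adj_f = [fi for fi, f in enumerate(faces) if vi in f]
        adj_e = [e for e in edge_faces if vi in e]
        n = len(adj_f)
        F = avg([face_pts[fi] for fi in adj_f])
        R = avg([avg([vertices[v] for v in e]) for e in adj_e])
        moved.append(tuple((F[c] + 2 * R[c] + (n - 3) * p[c]) / n
                           for c in range(3)))

    # 5. Assemble the refined mesh: each original quad splits into four quads.
    new_verts = list(moved)
    fp_index = {}
    for fi, fp in enumerate(face_pts):
        fp_index[fi] = len(new_verts)
        new_verts.append(fp)
    ep_index = {}
    for e, ep in edge_pts.items():
        ep_index[e] = len(new_verts)
        new_verts.append(ep)
    new_faces = []
    for fi, f in enumerate(faces):
        k = len(f)
        for i in range(k):
            a, b, c = f[(i - 1) % k], f[i], f[(i + 1) % k]
            new_faces.append([ep_index[frozenset((a, b))], b,
                              ep_index[frozenset((b, c))], fp_index[fi]])
    return new_verts, new_faces

# Unit cube: 8 vertices and 6 quads, a closed mesh with no boundaries.
V = [(x, y, z) for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]
F = [[0, 1, 3, 2], [4, 6, 7, 5], [0, 4, 5, 1],
     [2, 3, 7, 6], [0, 2, 6, 4], [1, 5, 7, 3]]
V2, F2 = catmull_clark(V, F)
```

After one step the cube becomes 26 vertices (8 moved originals, 6 face points, 12 edge points) and 24 quads; the time-efficient data structures the paper discusses are precisely about avoiding the naive adjacency searches used here.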
Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang
2018-04-05
We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
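The stump-regression step can be sketched directly: scan all candidate breaks and keep the one minimising the two-segment squared error of the p-value sequence. The data are synthetic, and the real method adds bootstrap confidence intervals and finite sample bias correction on top of this core.

```python
# Illustrative stump fit to per-interval p-values for a constant-hazard test.

def fit_stump(p_values):
    """Return (best_break, sse): the split minimising the two-segment SSE."""
    n = len(p_values)
    best = (None, float("inf"))
    for k in range(1, n):
        left, right = p_values[:k], p_values[k:]
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        sse = (sum((x - ml) ** 2 for x in left) +
               sum((x - mr) ** 2 for x in right))
        if sse < best[1]:
            best = (k, sse)
    return best

# Synthetic p-values: small (constant hazard rejected, hazard still falling)
# before interval 8; large (constant hazard not rejected) afterwards.
pvals = [0.01, 0.02, 0.01, 0.03, 0.02, 0.04, 0.01, 0.02,
         0.60, 0.55, 0.70, 0.65, 0.58, 0.72]
k, _ = fit_stump(pvals)
```

The estimated break k marks the transition from the acute phase (decreasing hazard) to the chronic phase (constant hazard) in this toy setting.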
Dynamic analysis of suspension cable based on vector form intrinsic finite element method
NASA Astrophysics Data System (ADS)
Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun
2017-10-01
A vector finite element method is presented for the dynamic analysis of cable structures, based on the vector form intrinsic finite element (VFIFE) method and the mechanical properties of suspension cables. First, the suspension cable is discretized into elements by space points, and the mass and external forces of the cable are lumped at these points. The structural form of the cable is described by the space points at different times. The equations of motion for the space points are established according to Newton's second law. Then, the internal element forces between the space points are derived from the flexible truss structure. Finally, the motion equations of the space points are solved by the central difference method with a reasonable time-integration step. The tangential tension of the bearing rope in a test ropeway with moving concentrated loads is calculated and compared with experimental data. The results show that the tangential tension of the suspension cable with moving loads is consistent with the experimental data. The method has high precision and meets the requirements of engineering application.
NASA Astrophysics Data System (ADS)
Syusina, O. M.; Chernitsov, A. M.; Tamarov, V. A.
2011-07-01
A combined method is proposed for mapping the confidence region of the motion of small bodies of the Solar system to any point in time. In this method, a linear mapping of the initial region to the given time is first carried out. If the nonlinearity coefficient of the evaluated region turns out to be larger than the permissible value, a linear mapping of the initial region is instead carried out to the latest moment at which the region remains ellipsoidal; the region obtained is then mapped to the given time by a nonlinear method.
Liu, Wanli
2017-03-08
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for its applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between LiDAR-IMU; and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First of all, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate the time delay error can be accurately calibrated.
A method to approximate a closest loadability limit using multiple load flow solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yorino, Naoto; Harada, Shigemi; Cheng, Haozhong
A new method is proposed to approximate a closest loadability limit (CLL), or closest saddle node bifurcation point, using a pair of multiple load flow solutions. More strictly, the points obtainable by the method are the stationary points, including not only the CLL but also the farthest and saddle points. An operating solution and a low voltage load flow solution are used to efficiently estimate the node injections at a CLL as well as the left and right eigenvectors corresponding to the zero eigenvalue of the load flow Jacobian. They can be used in monitoring the loadability margin, in identifying weak spots in a power system, and in examining an optimal control against voltage collapse. Most of the computation time of the proposed method is taken in calculating the load flow solution pair; the remaining computation time is less than that of an ordinary load flow.
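The notion of a load flow solution pair coalescing at a saddle-node point can be illustrated on a toy lossless two-bus system (source voltage E behind reactance X feeding active power P at unity power factor); this simplified model is our assumption for illustration, not the authors' multi-node formulation.

```python
import math

def load_flow_solutions(P, E=1.0, X=0.5):
    """High- and low-voltage load flow solutions of a lossless 2-bus system.
    The squared voltage u = V^2 solves u^2 - E^2 u + (P X)^2 = 0."""
    disc = E**4 - 4.0 * (P * X) ** 2
    if disc < 0:
        return None  # beyond the loadability limit: no real solution
    s = math.sqrt(disc)
    return math.sqrt((E**2 + s) / 2), math.sqrt((E**2 - s) / 2)

def closest_limit(E=1.0, X=0.5):
    """Saddle-node (nose) point where the two solutions coalesce."""
    return E**2 / (2 * X), E / math.sqrt(2)  # critical power, critical voltage
```

As the load P approaches the limit, the operating (high-voltage) and low-voltage solutions approach each other, which is exactly the pairing the method above exploits.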
Moving Force Identification: a Time Domain Method
NASA Astrophysics Data System (ADS)
Law, S. S.; Chan, T. H. T.; Zeng, Q. H.
1997-03-01
The solution for the vertical dynamic interaction forces between a moving vehicle and the bridge deck is analytically derived and experimentally verified. The deck is modelled as a simply supported beam with viscous damping, and the vehicle/bridge interaction force is modelled as one-point or two-point loads with fixed axle spacing, moving at constant speed. The method is based on modal superposition and is developed to identify the forces in the time domain. Both cases of one-point and two-point forces moving on a simply supported beam are simulated. Results of laboratory tests on the identification of the vehicle/bridge interaction forces are presented. Computer simulations and laboratory tests show that the method is effective, and acceptable results can be obtained by combining the use of bending moment and acceleration measurements.
NASA Astrophysics Data System (ADS)
Verfaillie, Deborah; Déqué, Michel; Morin, Samuel; Lafaysse, Matthieu
2017-11-01
We introduce the method ADAMONT v1.0 to adjust and disaggregate daily climate projections from a regional climate model (RCM) using an observational dataset at hourly time resolution. The method uses a refined quantile mapping approach for statistical adjustment and an analogous method for sub-daily disaggregation. The method ultimately produces adjusted hourly time series of temperature, precipitation, wind speed, humidity, and short- and longwave radiation, which can in turn be used to force any energy balance land surface model. While the method is generic and can be employed for any appropriate observation time series, here we focus on the description and evaluation of the method in the French mountainous regions. The observational dataset used here is the SAFRAN meteorological reanalysis, which covers the entire French Alps split into 23 massifs, within which meteorological conditions are provided for several 300 m elevation bands. In order to evaluate the skills of the method itself, it is applied to the ALADIN-Climate v5 RCM using the ERA-Interim reanalysis as boundary conditions, for the time period from 1980 to 2010. Results of the ADAMONT method are compared to the SAFRAN reanalysis itself. Various evaluation criteria are used for temperature and precipitation but also snow depth, which is computed by the SURFEX/ISBA-Crocus model using the meteorological driving data from either the adjusted RCM data or the SAFRAN reanalysis itself. The evaluation addresses in particular the time transferability of the method (using various learning/application time periods), the impact of the RCM grid point selection procedure for each massif/altitude band configuration, and the intervariable consistency of the adjusted meteorological data generated by the method. Results show that the performance of the method is satisfactory, with similar or even better evaluation metrics than alternative methods. 
However, results for air temperature are generally better than for precipitation. Results in terms of snow depth are satisfactory, indicating a reasonably good intervariable consistency of the meteorological data produced by the method. In terms of temporal transferability (evaluated over time periods of only 15 years), results depend on the learning period. In terms of RCM grid point selection, a complex technique taking into account both horizontal and altitudinal proximity to the SAFRAN massif-centre/altitude couples generally degrades evaluation metrics at high altitudes compared with a simpler selection based on horizontal distance alone.
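The quantile mapping adjustment at the core of such methods can be sketched empirically: each new model value is assigned its quantile in the model climatology and replaced by the observed value at the same quantile. This is a generic, hedged sketch (linear interpolation between empirical quantiles), not the refined ADAMONT variant.

```python
import numpy as np

def quantile_map(model_hist, obs, model_new):
    """Empirical quantile mapping: map new model values through the model
    empirical CDF, then through the inverse observed empirical CDF."""
    mh = np.sort(np.asarray(model_hist, float))
    ob = np.sort(np.asarray(obs, float))
    q_m = np.linspace(0.0, 1.0, mh.size)
    q_o = np.linspace(0.0, 1.0, ob.size)
    p = np.interp(model_new, mh, q_m)   # quantile in the model climatology
    return np.interp(p, q_o, ob)        # observed value at that quantile
```

With a systematic model bias (e.g. observations consistently twice the model values), the mapping corrects it by construction.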
Time-reversal transcranial ultrasound beam focusing using a k-space method
Jing, Yun; Meral, F. Can; Clement, Greg. T.
2012-01-01
This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, a synthetic point source at the focal point is numerically excited and propagated through the skull, using acoustic properties acquired from registered computed tomography of the skull being studied. The received data outside the skull contain the correction information and can be phase conjugated (time reversed) and then physically generated to achieve tight focusing inside the skull, assuming quasi-plane transmission where shear waves are absent or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave propagation simulation, it is shown that the k-space method is significantly more accurate even for a relatively coarse spatial resolution, leading to a dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that precise focusing can be realized using the k-space method with a spatial resolution as low as 2.56 grid points per wavelength, allowing treatment planning computation on the order of minutes. PMID:22290477
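The spectral accuracy that lets k-space methods use coarse grids comes from evaluating spatial derivatives in Fourier space. A minimal 1D sketch of that building block (periodic grid, second derivative via FFT) is shown below; it illustrates the principle only, not the full heterogeneous-medium propagator of the paper.

```python
import numpy as np

def spectral_laplacian_1d(u, dx):
    """Second spatial derivative on a periodic grid via FFT — the spatial
    operator underlying k-space/pseudospectral wave solvers."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)  # angular wavenumbers
    return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))
```

For smooth fields the result is accurate to machine precision even at a handful of grid points per wavelength, whereas a finite-difference stencil of fixed order would need a much finer grid.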
Deep-Focusing Time-Distance Helioseismology
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.; Jensen, J. M.; Kosovichev, A. G.; Birch, A. C.; Fisher, Richard R. (Technical Monitor)
2001-01-01
Much progress has been made by measuring the travel times of solar acoustic waves from a central surface location to points at equal arc distance away. Depth information is obtained from the range of arc distances examined, with the larger distances revealing the deeper layers. This method we will call surface-focusing, as the common point, or focus, is at the surface. To obtain a clearer picture of the subsurface region, it would, no doubt, be better to focus on points below the surface. Our first attempt to do this used the ray theory to pick surface location pairs that would focus on a particular subsurface point. This is not the ideal procedure, as Born approximation kernels suggest that this focus should have zero sensitivity to sound speed inhomogeneities. However, the sensitivity is concentrated below the surface in a much better way than the old surface-focusing method, and so we expect the deep-focusing method to be more sensitive. A large sunspot group was studied by both methods. Inversions based on both methods will be compared.
Normalization methods in time series of platelet function assays
Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham
2016-01-01
Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from the high-dimensional data spaces of temporal multivariate clinical data represented as multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, discussing the approach best suited for platelet function data series. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation demonstrated the correlation as calculated by the Spearman correlation test when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be extracted from the charts, as was the case when using all data as one dataset for normalization. PMID:27428217
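The three normalizations named above are simple affine transforms of each series; minimal versions are sketched below (our own formulations of the standard definitions, not code from the article).

```python
import numpy as np

def z_transform(x):
    """Centre on the mean, scale by the sample standard deviation."""
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std(ddof=1)

def range_transform(x):
    """Rescale linearly onto [0, 1]."""
    x = np.asarray(x, float)
    return (x - x.min()) / (x.max() - x.min())

def iqr_transform(x):
    """Centre on the median, scale by the interquartile range."""
    x = np.asarray(x, float)
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return (x - q2) / (q3 - q1)
```

All three are monotone increasing transforms, which is why rank-based measures such as the Spearman correlation survive per-assay normalization.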
Concept for an off-line gain stabilisation method.
Pommé, S; Sibbens, G
2004-01-01
Conceptual ideas are presented for an off-line gain stabilisation method for spectrometry, in particular for alpha-particle spectrometry at low count rate. The method involves list mode storage of individual energy and time stamp data pairs. The 'Stieltjes integral' of measured spectra with respect to a reference spectrum is proposed as an indicator for gain instability. 'Exponentially moving averages' of the latter show the gain shift as a function of time. With this information, the data are relocated stochastically on a point-by-point basis.
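The exponentially moving average used above as a gain-shift tracker can be sketched in a few lines; this is a generic EMA over a list-mode stream, with the Stieltjes-integral indicator omitted.

```python
def ema(values, alpha=0.05):
    """Exponentially moving average of a list-mode data stream; a slow
    drift of the EMA away from its reference level indicates a gain shift."""
    out, m = [], values[0]
    for v in values:
        m = alpha * v + (1 - alpha) * m  # exponential forgetting of old samples
        out.append(m)
    return out
```

A stable spectrometer yields a flat EMA; a gain drift shows up as a trend, which can then drive the stochastic point-by-point relocation of events described in the abstract.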
Estimating the number of people in crowded scenes
NASA Astrophysics Data System (ADS)
Kim, Minjin; Kim, Wonjun; Kim, Changick
2011-01-01
This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps as follows: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on the multiple regression. In experimental results, the efficiency and robustness of our proposed method are demonstrated by using PETS 2009 dataset.
Development of a point-kinetic verification scheme for nuclear reactor applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demazière, C., E-mail: demaz@chalmers.se; Dykin, V.; Jareteg, K.
In this paper, a new method that can be used for checking the proper implementation of time- or frequency-dependent neutron transport models and for verifying their ability to recover some basic reactor physics properties is proposed. This method makes use of the application of a stationary perturbation to the system at a given frequency and extraction of the point-kinetic component of the system response. Even for strongly heterogeneous systems for which an analytical solution does not exist, the point-kinetic component follows, as a function of frequency, a simple analytical form. The comparison between the extracted point-kinetic component and its expected analytical form provides an opportunity to verify and validate neutron transport solvers. The proposed method is tested on two diffusion-based codes, one working in the time domain and the other working in the frequency domain. As long as the applied perturbation has a non-zero reactivity effect, it is demonstrated that the method can be successfully applied to verify and validate time- or frequency-dependent neutron transport solvers. Although the method is demonstrated in the present paper in a diffusion theory framework, higher order neutron transport methods could be verified based on the same principles.
NASA Technical Reports Server (NTRS)
Gupta, R. N.; Moss, J. N.; Simmonds, A. L.
1982-01-01
Two flow-field codes employing time-marching and space-marching numerical techniques were evaluated. Both methods were used to analyze the flow field around a massively blown Jupiter entry probe under perfect-gas conditions. To obtain a direct point-by-point comparison, the computations were made using identical grids and turbulence models. For the same degree of accuracy, the space-marching scheme takes much less time than the time-marching method and would appear to provide accurate results for problems with nonequilibrium chemistry, free from the effect of local differences in time on the final solution that is inherent in time-marching methods. With the time-marching method, however, solutions are obtainable for realistic entry probe shapes with massive or uniform surface blowing rates, whereas with the space-marching technique it is difficult to obtain converged solutions for such flow conditions. The choice of numerical method is therefore problem dependent. Both methods give equally good results in the cases where results are compared with experimental data.
NASA Astrophysics Data System (ADS)
Nakatsuji, Noriaki; Matsushima, Kyoji
2017-03-01
Full-parallax high-definition CGHs composed of more than a billion pixels have so far been created only by the polygon-based method because of its high performance. However, GPUs now allow us to generate CGHs much faster by the point-cloud method. In this paper, we measure the computation time of object fields for full-parallax high-definition CGHs, composed of 4 billion pixels and reconstructing the same scene, using the point-cloud method with a GPU and the polygon-based method with a CPU. In addition, we compare the optical and simulated reconstructions of CGHs created by these techniques to verify the image quality.
A new method for mapping multidimensional data to lower dimensions
NASA Technical Reports Server (NTRS)
Gowda, K. C.
1983-01-01
A multispectral mapping method is proposed which is based on the new concept of BEND (Bidimensional Effective Normalised Difference). The method, which involves taking one sample point at a time and finding the interrelationships between its features, is found to be very economical in terms of storage and processing time. It has good dimensionality-reduction and clustering properties, and is highly suitable for computer analysis of large amounts of data. The transformed values obtained by this procedure are suitable either for a planar 2-space mapping of geological sample points or for making grayscale and color images of geo-terrains. A few examples are given to demonstrate the efficacy of the proposed procedure.
The fast multipole method and point dipole moment polarizable force fields.
Coles, Jonathan P; Masella, Michel
2015-01-14
We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force-fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of that approach by performing single energy point calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show the long time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of fast multipole method combined with state-of-the-art chemical models in molecular dynamical systems.
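The idea behind the fast multipole method is that a distant cluster of charges can be summarized by a few moments about its centre instead of being summed point by point. A toy sketch of that far-field approximation (monopole plus dipole, single cluster) is given below; a real FMM adds a spatial tree, higher orders, and translation operators, none of which are shown here.

```python
import numpy as np

def exact_potential(q, pos, r):
    """Direct sum of Coulomb-like potentials q_i / |r - x_i|."""
    d = np.linalg.norm(r - pos, axis=1)
    return np.sum(q / d)

def multipole_potential(q, pos, r, order=1):
    """Monopole (+ dipole) expansion about the cluster centroid,
    valid when the evaluation point r is far from the cluster."""
    c = pos.mean(axis=0)
    rv = r - c
    R = np.linalg.norm(rv)
    phi = q.sum() / R                             # monopole term
    if order >= 1:
        p = (q[:, None] * (pos - c)).sum(axis=0)  # dipole moment
        phi += p @ rv / R**3
    return phi
```

Replacing O(N) far-field interactions per target with a constant-size expansion is what yields the overall O(N) scaling reported in the abstract.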
[Proposal of a costing method for the provision of sterilization in a public hospital].
Bauler, S; Combe, C; Piallat, M; Laurencin, C; Hida, H
2011-07-01
To refine the billing to institutions whose sterilization operations are outsourced, a sterilization cost approach was developed. The aim of the study is to determine the value of a sterilization unit (one point "S"), which evolves according to investments, quantities processed, and types of instrumentation or packaging. The preparation time was selected from all sub-processes of sterilization to determine the value of one point S. The preparation times of sterilized large containers, small containers, and pouches were recorded. The reference time corresponds to one pouch (equal to one point S). Simultaneously, the annual operating cost of sterilization was defined and divided into several areas of expenditure: employees, equipment and building depreciation, supplies, and maintenance. A total of 136 container processing times were measured. The time to prepare a pouch was estimated at one minute (one S). A small container represents four S and a large container represents 10 S. By dividing the operating cost of sterilization by the total number of sterilization points over a given period, the cost of one S can be determined. This method differs from traditional costing methods in sterilization services in that it considers each item of expenditure. This point S will be the basis for billing subcontracts to other institutions. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
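The point S billing arithmetic reduces to a weighted unit count. The sketch below uses the weights from the abstract (1 S per pouch, 4 S per small container, 10 S per large container); the cost and volume figures are hypothetical, chosen only for illustration.

```python
# Weights follow the abstract; all euro amounts and volumes below are invented.
POINTS = {"pouch": 1, "small_container": 4, "large_container": 10}

def point_value(operating_cost, volumes):
    """Cost of one point S = annual operating cost / total points produced."""
    total_points = sum(POINTS[kind] * n for kind, n in volumes.items())
    return operating_cost / total_points

def invoice(operating_cost, volumes, client_volumes):
    """Bill a subcontracting client in proportion to the points it consumed."""
    s = point_value(operating_cost, volumes)
    return s * sum(POINTS[kind] * n for kind, n in client_volumes.items())
```

Because the unit value is recomputed from actual costs and volumes each period, the billing automatically tracks investments and throughput, as the abstract intends.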
The application of prototype point processes for the summary and description of California wildfires
Nichols, K.; Schoenberg, F.P.; Keeley, J.E.; Bray, A.; Diez, D.
2011-01-01
A method for summarizing repeated realizations of a space-time marked point process, known as prototyping, is discussed and applied to catalogues of wildfires in California. Prototype summaries are constructed for varying time intervals using California wildfire data from 1990 to 2006. Previous work on prototypes for temporal and space-time point processes is extended here to include methods for computing prototypes with marks and the incorporation of prototype summaries into hierarchical clustering algorithms, the latter of which is used to delineate fire seasons in California. Other results include summaries of patterns in the spatial-temporal distribution of wildfires within each wildfire season. © 2011 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Rocco, E. M. R.; Prado, A. F. B. A. P.; Souza, M. L. O. S.
In this work, the problem of bi-impulsive orbital transfer between coplanar elliptical orbits with minimum fuel consumption, but with a time limit for the transfer, is studied. As a first method, the equations presented by Lawden (1993) were used. Those equations furnish the optimal transfer orbit with fixed transfer time between two coplanar elliptical orbits, considering fixed terminal points. The method was adapted to cases with free terminal points, and the equations were solved to develop software for orbital maneuvers. As a second method, the equations presented by Eckel and Vinh (1984) were used; those equations provide the transfer orbit between non-coplanar elliptical orbits with minimum fuel and fixed transfer time, or minimum transfer time for a prescribed fuel consumption, considering free terminal points. In this work only the fixed transfer time problem was considered; the case of minimum time for a prescribed fuel consumption was already studied in Rocco et al. (2000). The method was then modified to consider cases of coplanar orbital transfer, and software for orbital maneuvers was developed. Therefore, two software packages that solve the same problem using different methods were developed. The first method, presented by Lawden, uses primer vector theory. The second method, presented by Eckel and Vinh, uses the ordinary theory of maxima and minima. To test the methods, we chose the same terminal orbits and the same transfer time as input. We verified that the two methods do not yield exactly the same result. In this work, which is an extension of Rocco et al. (2002), these differences in the results are explored with the objective of determining the reason for their occurrence and which modifications should be made to eliminate them.
An Improved Method for Real-Time 3D Construction of DTM
NASA Astrophysics Data System (ADS)
Wei, Yi
This paper discusses the real-time optimal construction of DTM by two measures. One is to improve the coordinate transformation of discrete points acquired from lidar: after processing a total of 10000 data points, the formula-based transformation costs 0.810 s, while the table look-up method costs 0.188 s, indicating that the latter is superior. The other is to adjust the density of the point cloud acquired from lidar; a certain proportion of the data points is used for 3D construction in order to meet different needs for 3D imaging, ultimately increasing the efficiency of DTM construction while saving system resources.
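The table look-up speed-up reported above can be sketched with a precomputed sine table: the trigonometric evaluation in each coordinate transformation is replaced by an index into a fixed array, trading a bounded quantization error for speed. The table size (0.1° steps) is our assumption for illustration.

```python
import math

# Precomputed once: sine at 0.1-degree steps around the full circle.
SIN_TABLE = [math.sin(2 * math.pi * i / 3600) for i in range(3600)]

def sin_lut(angle_rad):
    """Table look-up sine with nearest-entry rounding; the error is bounded
    by half a table step (about 8.7e-4 rad here)."""
    i = int(round(angle_rad / (2 * math.pi) * 3600)) % 3600
    return SIN_TABLE[i]
```

In a bulk transformation loop, the per-point cost drops from a transcendental evaluation to an integer index and an array read, which is consistent with the 0.810 s versus 0.188 s timing reported in the abstract.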
Autoregressive-model-based missing value estimation for DNA microarray time series data.
Choong, Miew Keen; Charbit, Maurice; Yan, Hong
2009-01-01
Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
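The autoregressive core of such imputation can be sketched as a least-squares AR(p) fit followed by a one-step prediction for the missing time point. This is a generic single-series sketch, without the local-similarity pooling that ARLSimpute adds; function names are ours.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) coefficients a_k for x_t = sum_k a_k * x_{t-1-k}."""
    x = np.asarray(x, float)
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def impute_next(x, p):
    """One-step AR prediction, usable to fill a missing trailing time point."""
    a = fit_ar(x, p)
    return float(np.dot(a, x[-1 : -p - 1 : -1]))  # most recent p values, newest first
```

On a noise-free AR series the fit recovers the generating coefficients exactly, so the predicted value matches the true continuation.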
NASA Astrophysics Data System (ADS)
Brokešová, Johana; Málek, Jiří
2018-07-01
A new method for representing seismograms by using zero-crossing points is described. This method is based on decomposing a seismogram into a set of quasi-harmonic components and, subsequently, on determining the precise zero-crossing times of these components. An analogous approach can be applied to determine extreme points that represent the zero-crossings of the first time derivative of the quasi-harmonics. Such zero-crossing and/or extreme point seismogram representation can be used successfully to reconstruct single-station seismograms, but the main application is to small-aperture array data analysis to which standard methods cannot be applied. The precise times of the zero-crossing and/or extreme points make it possible to determine precise time differences across the array used to retrieve the parameters of a plane wave propagating across the array, namely, its backazimuth and apparent phase velocity along the Earth's surface. The applicability of this method is demonstrated using two synthetic examples. In the real-data example from the Příbram-Háje array in central Bohemia (Czech Republic) for the Mw 6.4 Crete earthquake of October 12, 2013, this method is used to determine the phase velocity dispersion of both Rayleigh and Love waves. The resulting phase velocities are compared with those obtained by employing the seismic plane-wave rotation-to-translation relations. In this approach, the phase velocity is calculated by obtaining the amplitude ratios between the rotation and translation components. Seismic rotations are derived from the array data, for which the small aperture is not only an advantage but also an applicability condition.
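Determining precise zero-crossing times from sampled data can be sketched by linear interpolation between samples of opposite sign; this is a generic illustration of the idea, not the authors' quasi-harmonic decomposition pipeline.

```python
import numpy as np

def zero_crossing_times(t, x):
    """Zero-crossing times of a sampled signal, located by linear
    interpolation between adjacent samples of opposite sign."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    s = np.where(np.sign(x[:-1]) * np.sign(x[1:]) < 0)[0]
    # solve x(t) = 0 on each straddling segment
    return t[s] - x[s] * (t[s + 1] - t[s]) / (x[s + 1] - x[s])
```

Because the crossing time is interpolated rather than snapped to the sample grid, time differences between array stations can be resolved well below the sampling interval, which is what the backazimuth and apparent-velocity estimates above require.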
Hart, Corey B.; Giszter, Simon F.
2013-01-01
We present and apply a method that uses point process statistics to discriminate the forms of synergies in motor pattern data, prior to explicit synergy extraction. The method uses electromyogram (EMG) pulse peak timing or onset timing. Peak timing is preferable in complex patterns where pulse onsets may be overlapping. An interval statistic derived from the point processes of EMG peak timings distinguishes time-varying synergies from synchronous synergies (SS). Model data shows that the statistic is robust for most conditions. Its application to both frog hindlimb EMG and rat locomotion hindlimb EMG show data from these preparations is clearly most consistent with synchronous synergy models (p < 0.001). Additional direct tests of pulse and interval relations in frog data further bolster the support for synchronous synergy mechanisms in these data. Our method and analyses support separated control of rhythm and pattern of motor primitives, with the low level execution primitives comprising pulsed SS in both frog and rat, and both episodic and rhythmic behaviors. PMID:23675341
THE NEW ENLISTED EVALUATION SYSTEM: DID THE AIR FORCE GET IT CORRECT THIS TIME
2015-10-01
faced a two-fold problem - excessive manning promoted through the war along with an increased use of contractors for maintenance and support. This...concept to standardize board promotion methods.38 A three-judge panel scored each package using a 10-point scale in ½-point increments. If the three...rounding methods were changed to remove inequities. The original methods rounded up or down to the nearest whole percentage point, but this change
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2017-05-09
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2014-10-28
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
2011-01-01
Background The Prospective Space-Time scan statistic (PST) is widely used for the evaluation of space-time clusters of point event data. Usually a window of cylindrical shape is employed, with a circular or elliptical base in the space domain. Recently, the concept of Minimum Spanning Tree (MST) was applied to specify the set of potential clusters, through the Density-Equalizing Euclidean MST (DEEMST) method, for the detection of arbitrarily shaped clusters. The original map is cartogram transformed, such that the control points are spread uniformly. That method is quite effective, but the cartogram construction is computationally expensive and complicated. Results A fast method for the detection and inference of point data set space-time disease clusters is presented, the Voronoi Based Scan (VBScan). A Voronoi diagram is built for points representing population individuals (cases and controls). The number of Voronoi cells boundaries intercepted by the line segment joining two cases points defines the Voronoi distance between those points. That distance is used to approximate the density of the heterogeneous population and build the Voronoi distance MST linking the cases. The successive removal of edges from the Voronoi distance MST generates sub-trees which are the potential space-time clusters. Finally, those clusters are evaluated through the scan statistic. Monte Carlo replications of the original data are used to evaluate the significance of the clusters. An application for dengue fever in a small Brazilian city is presented. Conclusions The ability to promptly detect space-time clusters of disease outbreaks, when the number of individuals is large, was shown to be feasible, due to the reduced computational load of VBScan. Instead of changing the map, VBScan modifies the metric used to define the distance between cases, without requiring the cartogram construction. 
Numerical simulations showed that VBScan has higher power of detection, sensitivity, and positive predictive value than the elliptic PST. Furthermore, as VBScan also incorporates topological information from the point neighborhood structure, in addition to the usual geometric information, it is more robust than purely geometric methods such as the elliptic scan. These advantages were illustrated in a real setting for dengue fever space-time clusters. PMID:21513556
An adaptive segment method for smoothing lidar signal based on noise estimation
NASA Astrophysics Data System (ADS)
Wang, Yuzhao; Luo, Pingping
2014-10-01
An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise is defined as the 3σ of the background signal. An integer number N is defined for finding the changing positions in the signal curve. If the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM creates changing end points in different signals, so the smoothing windows can be set adaptively. The windows are always set as half of the segment length, and then the average smoothing method is applied within the segments. An iterative process is required to reduce the end-point aberration effect of the average smoothing method, and two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which means frequency-domain disturbance is avoided. A lidar echo was simulated in the experimental work. The echo was assumed to be created by a spaceborne lidar (e.g., CALIOP), and white Gaussian noise was added to the echo to represent the random noise resulting from the environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the number of iterations to two. The results show that the signal can be smoothed adaptively by the ASSM, but N and the number of iterations might need to be optimized when the ASSM is applied to a different lidar.
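The segmentation rule described above (break wherever adjacent samples differ by more than 3Nσ, window set to about half the segment, a few averaging passes) can be sketched as follows. The edge-padding detail used to tame end-point aberration is our assumption; the abstract does not specify it.

```python
import numpy as np

def assm_smooth(x, sigma, n=3, iterations=2):
    """Adaptive segment smoothing: split where |diff| > 3*n*sigma, then
    moving-average each segment separately so edges are not blurred."""
    x = np.asarray(x, float)
    breaks = np.where(np.abs(np.diff(x)) > 3 * n * sigma)[0] + 1
    out = x.copy()
    for seg in np.split(np.arange(x.size), breaks):
        if seg.size < 3:
            continue
        w = max(3, seg.size // 2) | 1          # odd window, about half the segment
        y = out[seg]
        for _ in range(iterations):
            pad = w // 2
            yp = np.pad(y, pad, mode="edge")   # reduce end-point aberration
            y = np.convolve(yp, np.ones(w) / w, mode="valid")
        out[seg] = y
    return out
```

Because segments are smoothed independently, a sharp feature such as a cloud edge in a lidar echo is preserved while the noise inside each segment is averaged away.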
Alternative Attitude Commanding and Control for Precise Spacecraft Landing
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2004-01-01
A report proposes an alternative method of control for precision landing on a remote planet. In the traditional method, the attitude of a spacecraft is required to track a commanded translational acceleration vector, which is generated at each time step by solving a two-point boundary value problem. No requirement of continuity is imposed on the acceleration. The translational acceleration does not necessarily vary smoothly. Tracking of a non-smooth acceleration causes the vehicle attitude to exhibit undesirable transients and poor pointing stability behavior. In the alternative method, the two-point boundary value problem is not solved at each time step. A smooth reference position profile is computed. The profile is recomputed only when the control errors get sufficiently large. The nominal attitude is still required to track the smooth reference acceleration command. A steering logic is proposed that controls the position and velocity errors about the reference profile by perturbing the attitude slightly about the nominal attitude. The overall pointing behavior is therefore smooth, greatly reducing the degree of pointing instability.
Meng, Yu; Li, Gang; Gao, Yaozong; Lin, Weili; Shen, Dinggang
2016-11-01
Longitudinal neuroimaging analysis of dynamic brain development in infants has received increasing attention recently. Many studies expect a complete longitudinal dataset in order to accurately chart brain developmental trajectories. However, in practice, a large portion of subjects in longitudinal studies often have missing data at certain time points, due to various reasons such as the absence of a scan or poor image quality. To make better use of these incomplete longitudinal data, in this paper, we propose a novel machine learning-based method to estimate the subject-specific, vertex-wise cortical morphological attributes at the missing time points in longitudinal infant studies. Specifically, we develop a customized regression forest, named dynamically assembled regression forest (DARF), as the core regression tool. DARF ensures the spatial smoothness of the estimated maps for vertex-wise cortical morphological attributes and also greatly reduces the computational cost. By employing a pairwise estimation followed by a joint refinement, our method is able to fully exploit the available information from both subjects with complete scans and subjects with missing scans for estimation of the missing cortical attribute maps. The proposed method has been applied to estimating the dynamic cortical thickness maps at missing time points in an incomplete longitudinal infant dataset, which includes 31 healthy infant subjects, each having up to five time points in the first postnatal year. The experimental results indicate that our proposed framework can accurately estimate the subject-specific vertex-wise cortical thickness maps at missing time points, with an average error of less than 0.23 mm. Hum Brain Mapp 37:4129-4147, 2016. © 2016 Wiley Periodicals, Inc.
Performance Evaluation of Real-Time Precise Point Positioning Method
NASA Astrophysics Data System (ADS)
Alcay, Salih; Turgut, Muzeyyen
2017-12-01
Post-Processed Precise Point Positioning (PPP) is a well-known zero-difference positioning method which provides accurate and precise results. After the experimental tests, IGS Real Time Service (RTS) officially provided real time orbit and clock products for the GNSS community that allows real-time (RT) PPP applications. Different software packages can be used for RT-PPP. In this study, in order to evaluate the performance of RT-PPP, 3 IGS stations are used. Results, obtained by using BKG Ntrip Client (BNC) Software v2.12, are examined in terms of both accuracy and precision.
Wu, Yao; Wu, Guorong; Wang, Li; Munsell, Brent C.; Wang, Qian; Lin, Weili; Feng, Qianjin; Chen, Wufan; Shen, Dinggang
2015-01-01
Purpose: To investigate anatomical differences across individual subjects, or longitudinal changes in early brain development, it is important to perform accurate image registration. However, due to fast brain development and dynamic tissue appearance changes, it is very difficult to align infant brain images acquired from birth to 1-yr-old. Methods: To solve this challenging problem, a novel image registration method is proposed to align two infant brain images, regardless of age at acquisition. The main idea is to utilize the growth trajectories, or spatial-temporal correspondences, learned from a set of longitudinal training images, for guiding the registration of two different time-point images with different image appearances. Specifically, in the training stage, an intrinsic growth trajectory is first estimated for each training subject using the longitudinal images. To register two new infant images with a potentially large age gap, the corresponding image patches between each new image and its respective training images of similar age are identified. Finally, the registration between the two new images can be assisted by the learned growth trajectories from one time point to another that have been established in the training stage. To further improve registration accuracy, the proposed method is combined with a hierarchical and symmetric registration framework that can iteratively add new key points in both images to steer the estimation of the deformation between the two infant brain images under registration. Results: To evaluate image registration accuracy, the proposed method is used to align 24 infant subjects at five different time points (2-week-old, 3-month-old, 6-month-old, 9-month-old, and 12-month-old). Compared to the state-of-the-art methods, the proposed method demonstrated superior registration performance.
Conclusions: The proposed method addresses the difficulties in the infant brain registration and produces better results compared to existing state-of-the-art registration methods. PMID:26133617
A shape-based segmentation method for mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen
2013-07-01
Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to these geometric features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and low computational cost, and that it segments pole-like objects particularly well.
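Per-point geometric features of this kind are commonly the eigenvalue features of the local covariance matrix (linearity, planarity, scattering). A hedged sketch with a fixed neighbourhood size `k` (the paper instead selects an optimal neighbourhood per point, and its exact feature set is not given in the abstract):

```python
import numpy as np
from scipy.spatial import cKDTree

def covariance_features(points, k=20):
    """Per-point eigenvalue features (linearity, planarity, scattering)
    from the covariance of each point's k-nearest neighbourhood."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        nb = points[nbrs] - points[nbrs].mean(0)
        # Eigenvalues of the 3x3 neighbourhood covariance, descending.
        lam = np.sort(np.linalg.eigvalsh(nb.T @ nb / k))[::-1]
        lam = np.maximum(lam, 1e-12)
        l1, l2, l3 = lam
        feats[i] = [(l1 - l2) / l1,   # linearity: dominant for poles, wires
                    (l2 - l3) / l1,   # planarity: dominant for facades, roads
                    l3 / l1]          # scattering: dominant for vegetation
    return feats
```

Feature vectors like these would then be fed to an SVM classifier; pole-like objects stand out because their neighbourhoods score high on linearity.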
Background/Question/Methods Bacterial pathogens in surface water present disease risks to aquatic communities and for human recreational activities. Sources of these pathogens include runoff from urban, suburban, and agricultural point and non-point sources, but hazardous micr...
[Automated analyzer of enzyme immunoassay].
Osawa, S
1995-09-01
Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, the detection method, the number of tests per unit time, and the analytical time and speed per run. In practice, it is important to consider several points such as detection limits, the number of tests per unit time, analytical range, and precision. Most automated analyzers on the market can randomly access and measure samples. This article reviews recent advances in automated analyzers with respect to their labeling antibodies and enzymes, detection methods, number of tests per unit time, and analytical time and speed per test.
A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree
NASA Astrophysics Data System (ADS)
Kang, Q.; Huang, G.; Yang, S.
2018-04-01
Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps of point cloud pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume substantial memory and time. This paper presents a method that constructs a Kd-tree, searches it with the k-nearest neighbor algorithm, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm removes gross errors from point cloud data while decreasing memory consumption and improving efficiency.
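The pipeline described — build a Kd-tree, run a k-nearest-neighbour search, and apply a threshold to flag outliers — can be sketched as below, assuming SciPy's `cKDTree` and a simple mean-plus-n-sigma distance threshold (the paper's exact threshold rule is not specified in the abstract):

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, n_sigma=2.0):
    """Flag points whose mean distance to their k nearest neighbours is
    more than n_sigma standard deviations above the global mean."""
    tree = cKDTree(points)                      # build the Kd-tree
    dists, _ = tree.query(points, k=k + 1)      # k+1: first hit is the point itself
    mean_d = dists[:, 1:].mean(axis=1)          # mean k-NN distance per point
    thresh = mean_d.mean() + n_sigma * mean_d.std()
    keep = mean_d <= thresh
    return points[keep], keep
```

The Kd-tree makes each neighbour query O(log N) on average, which is what keeps memory and run time manageable on large clouds compared with brute-force pairwise distances.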
Real-time global illumination on mobile device
NASA Astrophysics Data System (ADS)
Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.
2014-02-01
We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates the local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. and add the indirect illumination to the local illumination on the GPU. With the limited computing resources of mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method that considers 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy that collaboratively combines the CPUs and GPUs available in a mobile SoC, again due to the limited computing resources of mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.
Calculation of power spectrums from digital time series with missing data points
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.
1980-01-01
Two algorithms are developed for calculating power spectrums from the autocorrelation function when there are missing data points in the time series. Both methods use an average sampling interval to compute lagged products. One method, the correlation function power spectrum, takes the discrete Fourier transform of the lagged products directly to obtain the spectrum, while the other, the modified Blackman-Tukey power spectrum, takes the Fourier transform of the mean lagged products. Both techniques require fewer calculations than other procedures since only 50% to 80% of the maximum lags need be calculated. The algorithms are compared with the Fourier transform power spectrum and two least squares procedures (all for an arbitrary data spacing). Examples are given showing recovery of frequency components from simulated periodic data where portions of the time series are missing and random noise has been added to both the time points and to values of the function. In addition the methods are compared using real data. All procedures performed equally well in detecting periodicities in the data.
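The correlation-function approach can be illustrated on a uniformly spaced grid where missing samples are marked as NaN. This sketch computes mean lagged products over the available pairs and Fourier-transforms them; the function name, NaN convention, and 50%-lag default are ours, and the paper's average-sampling-interval treatment of truly uneven spacing is simplified away:

```python
import numpy as np

def missing_data_spectrum(x, max_lag=None):
    """Blackman-Tukey-style power spectrum from a uniformly sampled series
    with gaps (NaNs mark missing points): mean lagged products over the
    available pairs, then a discrete Fourier transform over the lags."""
    x = np.asarray(x, dtype=float)
    good = ~np.isnan(x)
    xm = np.where(good, x - np.nanmean(x), 0.0)
    n = len(x)
    if max_lag is None:
        max_lag = n // 2                      # only ~50% of maximum lags needed
    acf = np.empty(max_lag)
    for lag in range(max_lag):
        pairs = good[:n - lag] & good[lag:]   # both samples of the pair present
        acf[lag] = (xm[:n - lag] * xm[lag:])[pairs].mean()
    # One-sided spectrum from the even extension of the ACF.
    spec = np.abs(np.fft.rfft(np.concatenate([acf, acf[-2:0:-1]])))
    freqs = np.fft.rfftfreq(2 * max_lag - 2)  # cycles per sample
    return freqs, spec
```

Averaging only over available pairs is what lets the estimate tolerate gaps: each lag's product count shrinks, but the mean stays unbiased as long as the gaps are unrelated to the signal.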
Robust estimation of pulse wave transit time using group delay.
Meloni, Antonella; Zymeski, Heather; Pepe, Alessia; Lombardi, Massimo; Wood, John C
2014-03-01
To evaluate the efficiency of a novel transit time (Δt) estimation method from cardiovascular magnetic resonance flow curves. Flow curves were estimated from phase contrast images of 30 patients. Our method (TT-GD: transit time group delay) operates in the frequency domain and models the ascending aortic waveform as an input passing through a discrete-component "filter," producing the observed descending aortic waveform. The GD of the filter represents the average time delay (Δt) across individual frequency bands of the input. This method was compared with two previously described time-domain methods: TT-point using the half-maximum of the curves and TT-wave using cross-correlation. High temporal resolution flow images were studied at multiple downsampling rates to study the impact of differences in temporal resolution. Mean Δts obtained with the three methods were comparable. The TT-GD method was the most robust to reduced temporal resolution. While the TT-GD and the TT-wave produced comparable results for velocity and flow waveforms, the TT-point method resulted in significantly shorter Δts when calculated from velocity waveforms (difference: 1.8±2.7 msec; coefficient of variability: 8.7%). The TT-GD method was the most reproducible, with an intraobserver variability of 3.4% and an interobserver variability of 3.7%. Compared to the traditional TT-point and TT-wave methods, the TT-GD approach was more robust to the choice of temporal resolution, waveform type, and observer. Copyright © 2013 Wiley Periodicals, Inc.
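The group-delay idea can be sketched as follows, under simplifying assumptions: noise-free, equal-length waveforms and a plain least-squares phase-slope fit over the first few harmonics, rather than the paper's discrete-component filter model. Function name and band choice are ours.

```python
import numpy as np

def group_delay_transit_time(asc, desc, dt, n_bands=10):
    """Rough sketch of the group-delay idea: treat the descending-aortic
    waveform as the ascending one passed through a filter H(f), and read
    the transit time off the slope of H's unwrapped phase."""
    n = len(asc)
    H = np.fft.rfft(desc) / np.fft.rfft(asc)
    freqs = np.fft.rfftfreq(n, d=dt)
    # Use the first few harmonics, where the physiological signal lives.
    band = slice(1, n_bands + 1)
    phase = np.unwrap(np.angle(H))[band]
    # Linear fit: phase ≈ -2*pi*f*delay, so delay = -slope / (2*pi).
    slope = np.polyfit(freqs[band], phase, 1)[0]
    return -slope / (2 * np.pi)
```

Because the delay is read from a fit across several frequency bands rather than from a single landmark on the curve (as in TT-point), the estimate degrades more gracefully as temporal resolution drops.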
Euthanasia Method for Mice in Rapid Time-Course Pulmonary Pharmacokinetic Studies
Schoell, Adam R; Heyde, Bruce R; Weir, Dana E; Chiang, Po-Chang; Hu, Yiding; Tung, David K
2009-01-01
To develop a means of euthanasia to support rapid time-course pharmacokinetic studies in mice, we compared retroorbital and intravenous lateral tail vein injection of ketamine–xylazine with regard to preparation time, utility, tissue distribution, and time to onset of euthanasia. Tissue distribution and time to onset of euthanasia did not differ between administration methods. However, retroorbital injection could be performed more rapidly than intravenous injection and was considered to be a technically simple and superior alternative for mouse euthanasia. Retroorbital ketamine–xylazine, CO2 gas, and intraperitoneal pentobarbital then were compared as euthanasia agents in a rapid time-point pharmacokinetic study. Retroorbital ketamine–xylazine was the most efficient and consistent of the 3 methods, with an average time to death of approximately 5 s after injection. In addition, euthanasia by retroorbital ketamine–xylazine enabled accurate sample collection at closely spaced time points and satisfied established criteria for acceptable euthanasia technique. PMID:19807971
Freitag, Sonja
2014-01-01
Objectives: To examine the influence of the following two factors on the proportion of time that nurses spend in a forward-bending trunk posture: (i) the bed height during basic care activities at the bedside and (ii) the work method during basic care activities in the bathroom. A further aim was to examine the connection between the proportion of time spent in a forward-bending posture and the perceived exertion. Methods: Twelve nurses in a geriatric nursing home each performed a standardized care routine at the bedside and in the bathroom. The CUELA (German abbreviation for ‘computer-assisted recording and long-term analysis of musculoskeletal loads’) measuring system was used to record all trunk inclinations. Each participant conducted three tests with the bed at different heights (knee height, thigh height, and hip height), and in the bathroom, three tests were performed with different work methods (standing, kneeling, and sitting). After each test, participants rated their perceived exertion on the 15-point Borg scale (6 = no exertion at all and 20 = exhaustion). Results: If the bed was raised from knee to thigh level, the proportion of time spent in an upright position increased by 8.2% points. However, the effect was not significant (P = 0.193). Only when the bed was raised to hip height was there a significant increase, of 19.8% points (reference: thigh level; P = 0.003) and 28.0% points (reference: knee height; P < 0.001). Bathroom tests: compared with the standing work method, the kneeling and sitting work methods led to a significant increase in the proportion of time spent in an upright posture, by 19.4% points (P = 0.003) and 25.7% points (P < 0.001), respectively. The greater the proportion of time spent in an upright position, the lower the Borg rating awarded (P < 0.001). Conclusions: The higher the proportion of time that nursing personnel work in an upright position, the less strenuous they perceive the work to be.
Raising the bed to hip height and using a stool in the bathroom significantly increase the proportion of time that nursing personnel work in an upright position. Nursing staff can spend a considerably greater proportion of their time in an ergonomic posture if stools and height-adjustable beds are provided in healthcare institutions. PMID:24371043
Analysis of Giardin expression during encystation of Giardia lamblia
USDA-ARS?s Scientific Manuscript database
The present study analyzed giardin transcription in trophozoites and cysts during encystation of Giardia lamblia. Encystment was induced using standard methods, and the number of trophozoites and cysts were counted at various time-points during encystation. At all time points, RNA from both stages...
Joint classification and contour extraction of large 3D point clouds
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2017-08-01
We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and for handling strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This makes it possible both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with more than 10^9 points.
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band, and the Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
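The core of such a region-adaptive Haar transform is a single weighted merge of two sibling voxels, applied recursively up the octree. A sketch of one merge step, following the standard RAHT butterfly (variable names are ours, and the recursion, quantisation, and arithmetic coding are omitted):

```python
import numpy as np

def raht_merge(c1, w1, c2, w2):
    """One merge step of a region-adaptive Haar transform: combine two
    sibling voxels' colour coefficients c1, c2 with point-count weights
    w1, w2 into a low-pass and a high-pass coefficient."""
    a = np.sqrt(w1 / (w1 + w2))
    b = np.sqrt(w2 / (w1 + w2))
    low = a * c1 + b * c2          # weighted average; carries weight w1 + w2
    high = -b * c1 + a * c2        # weighted difference; gets entropy-coded
    return low, high, w1 + w2
```

The 2x2 matrix [[a, b], [-b, a]] is orthonormal, so energy is preserved at every level; with equal weights it reduces to the ordinary Haar butterfly, and the weighting is what adapts the transform to the cloud's uneven occupancy.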
Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia
2017-07-28
Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for selecting the best covariate to split on from the search for the best split point of the selected covariate. In this study, we compare the random survival forest model to the conditional inference forest model (CIF) using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under five years of age in Uganda and consists of categorical covariates, most of them having more than two levels (many split-points). The second dataset is based on the survival of patients with extremely drug-resistant tuberculosis (XDR TB) and consists of mainly categorical covariates with two levels (few split-points). The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data consisting of covariates with many split-points, based on the values of the bootstrap cross-validated estimates for integrated Brier scores. However, conditional inference forests perform comparably to random survival forest models in analysing time-to-event data consisting of covariates with fewer split-points. Although survival forests are promising methods for analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of the covariates of the dataset in question.
NASA Astrophysics Data System (ADS)
Spaans, K.; Hooper, A. J.
2017-12-01
The short revisit time and high data acquisition rates of current satellites have resulted in increased interest in the development of deformation monitoring and rapid disaster response capability, using InSAR. Fast, efficient data processing methodologies are required to deliver the timely results necessary for this, and also to limit computing resources required to process the large quantities of data being acquired. Contrary to volcano or earthquake applications, urban monitoring requires high resolution processing, in order to differentiate movements between buildings, or between buildings and the surrounding land. Here we present Rapid time series InSAR (RapidSAR), a method that can efficiently update high resolution time series of interferograms, and demonstrate its effectiveness over urban areas. The RapidSAR method estimates the coherence of pixels on an interferogram-by-interferogram basis. This allows for rapid ingestion of newly acquired images without the need to reprocess the earlier acquired part of the time series. The coherence estimate is based on ensembles of neighbouring pixels with similar amplitude behaviour through time, which are identified on an initial set of interferograms, and need be re-evaluated only occasionally. By taking into account scattering properties of points during coherence estimation, a high quality coherence estimate is achieved, allowing point selection at full resolution. The individual point selection maximizes the amount of information that can be extracted from each interferogram, as no selection compromise has to be reached between high and low coherence interferograms. In other words, points do not have to be coherent throughout the time series to contribute to the deformation time series. We demonstrate the effectiveness of our method over urban areas in the UK. 
We show how the algorithm successfully extracts high density time series from full resolution Sentinel-1 interferograms, and distinguish clearly between buildings and surrounding vegetation or streets. The fact that new interferograms can be processed separately from the remainder of the time series helps manage the high data volumes, both in space and time, generated by current missions.
NASA Astrophysics Data System (ADS)
Niwase, Hiroaki; Takada, Naoki; Araki, Hiromitsu; Maeda, Yuki; Fujiwara, Masato; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2016-09-01
Parallel calculations of large-pixel-count computer-generated holograms (CGHs) are suitable for multiple-graphics-processing-unit (multi-GPU) cluster systems. However, it is not easy for a multi-GPU cluster system to accomplish fast CGH calculations when CGH transfers between PCs are required; in these cases, the CGH transfer between the PCs becomes a bottleneck. Usually, this problem occurs only in multi-GPU cluster systems with a single spatial light modulator. To overcome this problem, we propose a simple method using an InfiniBand network. The computational speed of the proposed method using 13 GPUs (NVIDIA GeForce GTX TITAN X) was more than 3000 times faster than that of a CPU (Intel Core i7 4770) when the number of three-dimensional (3-D) object points exceeded 20,480. In practice, we achieved approximately 40 tera floating-point operations per second (TFLOPS) when the number of 3-D object points exceeded 40,960. Our proposed method was able to reconstruct a real-time movie of a 3-D object comprising 95,949 points.
Horn, W; Miksch, S; Egghart, G; Popow, C; Paky, F
1997-09-01
Real-time systems for monitoring and therapy planning, which receive their data from on-line monitoring equipment and computer-based patient records, require reliable data. Data validation has to utilize and combine a set of fast methods to detect, eliminate, and repair faulty data, which may lead to life-threatening conclusions. The strength of data validation results from the combination of numerical and knowledge-based methods applied to both continuously-assessed high-frequency data and discontinuously-assessed data. Dealing with high-frequency data, examining single measurements is not sufficient. It is essential to take into account the behavior of parameters over time. We present time-point-, time-interval-, and trend-based methods for validation and repair. These are complemented by time-independent methods for determining an overall reliability of measurements. The data validation benefits from the temporal data-abstraction process, which provides automatically derived qualitative values and patterns. The temporal abstraction is oriented on a context-sensitive and expectation-guided principle. Additional knowledge derived from domain experts forms an essential part for all of these methods. The methods are applied in the field of artificial ventilation of newborn infants. Examples from the real-time monitoring and therapy-planning system VIE-VENT illustrate the usefulness and effectiveness of the methods.
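Two of the simplest checks mentioned — a time-point range check and a trend-based step check on successive measurements — can be sketched as below. The thresholds, return format, and anchoring rule are illustrative, not VIE-VENT's actual knowledge base:

```python
def validate(samples, lo, hi, max_step):
    """Minimal sketch of two data-validation checks: a time-point range
    check and a trend (step-size) check on successive measurements.
    Returns a list of (index, reason) pairs for suspect values."""
    suspects = []
    prev = None
    for i, v in enumerate(samples):
        if not (lo <= v <= hi):
            suspects.append((i, "out-of-range"))
        elif prev is not None and abs(v - prev) > max_step:
            suspects.append((i, "implausible-trend"))
        else:
            prev = v  # only plausible values anchor the next trend check
    return suspects
```

Note that a flagged value is not used as the anchor for the next comparison, so a single sensor artefact does not cascade into flagging the valid measurement that follows it.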
Partitioning of functional gene expression data using principal points.
Kim, Jaehee; Kim, Haseong
2017-10-12
DNA microarrays offer motivation and hope for the simultaneous study of variations in multiple genes. Gene expression is a temporal process that allows variations in expression levels with a characterized gene function over a period of time. Temporal gene expression curves can be treated as functional data, since they are considered independent realizations of a stochastic process. This process requires appropriate models to identify patterns of gene functions. Partitioning the functional data can find homogeneous subgroups of entities among the massive number of genes within the inherent biological networks; it can therefore be a useful technique for the analysis of time-course gene expression data. We propose a new self-consistent partitioning method of functional coefficients for individual expression profiles based on an orthonormal basis system. A principal-points-based functional partitioning method is proposed for time-course gene expression data. The method explores the relationship between genes using Legendre coefficients as principal points to extract the features of gene functions. Our proposed method achieves high connectivity after clustering for simulated data and finds significant subsets of genes with increased connectivity. Our approach has the comparative advantages that fewer coefficients are used from the functional data and that the principal points are self-consistent for partitioning. As real-data applications, we partition genes based on the expression profiles found in budding yeast data and Escherichia coli data. The proposed method benefits from the use of principal points, dimension reduction, and the choice of an orthogonal basis system, and provides appropriately connected genes in the resulting subsets. We illustrate our method by applying it to each set of cell-cycle-regulated time-course yeast genes and E. coli genes.
The proposed method is able to identify highly connected genes and to explore the complex dynamics of biological systems in functional genomics.
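The overall idea — summarize each expression curve by orthogonal-basis coefficients, then partition in coefficient space — can be sketched with Legendre fits and a small Lloyd's k-means standing in for the principal-points step (the paper's actual estimator differs; function name, degree, and initialisation are ours):

```python
import numpy as np

def legendre_partition(curves, t, degree=4, k=2, iters=20):
    """Sketch: represent each time-course expression profile by its
    Legendre coefficients, then run a small k-means on the coefficient
    vectors as a stand-in for principal-points partitioning."""
    # Map time onto [-1, 1], the natural domain of Legendre polynomials.
    x = 2 * (t - t.min()) / (t.max() - t.min()) - 1
    C = np.array([np.polynomial.legendre.legfit(x, c, degree) for c in curves])
    # Deterministic farthest-point initialisation of the k centres.
    centers = [C[0]]
    while len(centers) < k:
        d = np.min([np.linalg.norm(C - c, axis=1) for c in centers], axis=0)
        centers.append(C[int(d.argmax())])
    centers = np.array(centers)
    # Lloyd's iterations on the (degree+1)-dimensional coefficient space.
    for _ in range(iters):
        d = np.linalg.norm(C[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = C[labels == j].mean(0)
    return C, labels
```

Working with a handful of coefficients per gene rather than the full curves is where the dimension reduction mentioned above comes from.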
Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System
NASA Astrophysics Data System (ADS)
Chan, T. O.; Lichti, D. D.; Belton, D.
2013-10-01
At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require lower sensor size/weight and cost. For high-accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, calibration is complicated by the Velodyne LiDAR's narrow vertical field of view and the highly time-variant nature of its measurements. In this paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method, which is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method utilizes the Velodyne point cloud's slice-like nature and first decomposes the point clouds into 2D layers. The layers are then treated as 2D images and processed with the Generalized Hough Transform, which extracts the points distributed in circular patterns from the point cloud layers. Subsequently, the vertical cylindrical features can be readily extracted from the whole point clouds based on the previously extracted points. The points are passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model such that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automatic, allowing end users to obtain the time-variant additional parameters instantly and frequently whenever vertical cylindrical features are present in a scene.
The methods were verified with two different real datasets, and the results suggest that an accuracy improvement of up to 78.43% can be achieved for the HDL-32E using the proposed calibration method.
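The layer-wise circular-feature extraction described above can be sketched as follows. This is a greatly simplified illustration of Hough-style voting for a circle centre in one 2D layer (known radius, coarse voting grid, no image formation), not the paper's implementation:

```python
import math

def hough_circle_center(points, radius, grid_step=0.1, n_angles=72):
    """Vote for candidate circle centres on a coarse grid; return the winner."""
    votes = {}
    for (x, y) in points:
        # every centre lying at distance `radius` from (x, y) gets a vote
        for k in range(n_angles):
            theta = 2 * math.pi * k / n_angles
            cell = (round((x + radius * math.cos(theta)) / grid_step),
                    round((y + radius * math.sin(theta)) / grid_step))
            votes[cell] = votes.get(cell, 0) + 1
    ix, iy = max(votes, key=votes.get)
    return ix * grid_step, iy * grid_step

# synthetic "layer": points sampled on a circle of radius 1 centred at (2, 1)
pts = [(2.0 + math.cos(t / 10.0), 1.0 + math.sin(t / 10.0)) for t in range(63)]
cx, cy = hough_circle_center(pts, radius=1.0)
```

The detected centre lands on the accumulator cell nearest (2, 1); a real pipeline would additionally estimate the radius and fit the cylinder axis across layers.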
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
Floating-to-Fixed-Point Conversion for Digital Signal Processors
NASA Astrophysics Data System (ADS)
Menard, Daniel; Chillet, Daniel; Sentieys, Olivier
2006-12-01
Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
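As a minimal illustration of the accuracy trade-off such a conversion must manage, the sketch below quantises a float to a hypothetical 16-bit fixed-point format with 14 fractional bits and measures the rounding error. This is generic fixed-point arithmetic, not the paper's conversion methodology:

```python
def to_fixed(x, frac_bits=14, word_bits=16):
    """Quantise a float to a two's-complement fixed-point integer, saturating on overflow."""
    scale = 1 << frac_bits
    lo = -(1 << (word_bits - 1))
    hi = (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, int(round(x * scale))))

def to_float(q, frac_bits=14):
    """Convert the fixed-point integer back to a float."""
    return q / (1 << frac_bits)

x = 0.70710678                     # e.g. sqrt(2)/2
err = abs(x - to_float(to_fixed(x)))   # bounded by half an LSB, i.e. 2**-15
```

Choosing the fractional bit width per variable, subject to an accuracy constraint like the bound on `err`, is exactly the kind of optimisation the paper's analytical accuracy evaluation supports.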
NASA Astrophysics Data System (ADS)
Hadi Sutrisno, Himawan; Kiswanto, Gandjar; Istiyanto, Jos
2017-06-01
Rough machining is aimed at shaping a workpiece toward its final form. This process takes up a large proportion of the machining time due to the removal of bulk material, which affects the total machining time. In certain models, rough machining has limitations, especially on surfaces such as turbine blades and impellers. CBV evaluation is one of the concepts used to detect admissible areas in the machining process. While previous research detected the CBV area using a pair of normal vectors, in this research we simplify the process by detecting the CBV area with a slicing line for each point cloud formed. The simulation resulted in three steps for this method: 1. Triangulation from CAD design models, 2. Development of CC points from the point cloud, 3. The slicing line method, which is used to evaluate each point cloud position (inside CBV and outside CBV). The result of this evaluation method can be used as a tool for orientation set-up on each CC point position of feasible areas in rough machining.
Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.
Zhao, Dongfang; Yang, Li
2009-01-01
Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step which defines neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded to a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because input data stream may be under-sampled or skewed from time to time, building connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating $k$-edge-connected and $k$-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.
Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics
NASA Astrophysics Data System (ADS)
Kohira, K.; Masuda, H.
2017-09-01
A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormous. A large storage capacity is required to store such point-clouds, and heavy network loads are incurred if point-clouds are transferred over a network. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
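The mapping idea can be sketched as follows. zlib's deflate (the same lossless algorithm PNG uses internally) stands in for the PNG encoder, and the time/beam indexing and 16-bit range encoding are simplified assumptions:

```python
import struct
import zlib

def pack_grid(points, n_beams):
    """points: list of (time_index, beam_index, range_mm).
    Arrange ranges on a beams x time grid and deflate the raw bytes."""
    n_cols = max(t for t, _, _ in points) + 1
    grid = [[0] * n_cols for _ in range(n_beams)]      # 0 = no return
    for t, b, r in points:
        grid[b][t] = r
    raw = b"".join(struct.pack("<H", v) for row in grid for v in row)
    return raw, zlib.compress(raw, 9)

# synthetic scan: 100 firings of a 32-beam scanner, smooth ranges per beam
pts = [(t, b, 1000 + 10 * b) for t in range(100) for b in range(32)]
raw, packed = pack_grid(pts, n_beams=32)
ratio = len(packed) / len(raw)     # well below 1 for structured scenes
```

Because neighbouring pixels come from neighbouring firings, the grid is smooth and lossless compression is effective, which is the premise of the paper's method.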
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method
Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu
2016-01-01
A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis. PMID:28029121
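A minimal sketch of the baseline comparison: distances between feature points within one scan are invariant to the scan's pose, so corresponding baselines from two epochs can be compared without registration. The feature IDs and the 1 cm tolerance below are illustrative assumptions:

```python
import math

def baseline_changes(points_a, points_b, tol=0.01):
    """points_a / points_b: dicts {feature_id: (x, y, z)} from epochs A and B.
    Return the baselines whose length changed by more than `tol` (metres)."""
    ids = sorted(set(points_a) & set(points_b))
    changed = []
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            p, q = ids[i], ids[j]
            da = math.dist(points_a[p], points_a[q])   # baseline length, epoch A
            db = math.dist(points_b[p], points_b[q])   # same baseline, epoch B
            if abs(da - db) > tol:
                changed.append((p, q, db - da))
    return changed

epoch_a = {"t1": (0, 0, 0), "t2": (1, 0, 0), "t3": (0, 1.00, 0)}
epoch_b = {"t1": (0, 0, 0), "t2": (1, 0, 0), "t3": (0, 1.05, 0)}  # t3 moved 5 cm
flags = baseline_changes(epoch_a, epoch_b)
```

Only baselines involving the displaced point are flagged, localising the change without ever aligning the two scans.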
Piecewise multivariate modelling of sequential metabolic profiling data.
Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan
2008-02-19
Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach with the objective to model the time-related variation in the data for short and sparsely sampled time-series is described. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models are estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short and multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.
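The piecewise idea can be sketched with ordinary least squares standing in for OPLS (the paper's models are OPLS, not plain regression): one linear map is fitted per pair of successive time points, and the chain of local linear models can track a response that is non-linear in time overall.

```python
import numpy as np

def fit_piecewise(X_by_time):
    """X_by_time: list of (n_samples, n_vars) arrays, one per time point.
    Fit one linear map per successive pair of time points by least squares."""
    models = []
    for X_t, X_next in zip(X_by_time, X_by_time[1:]):
        W, *_ = np.linalg.lstsq(X_t, X_next, rcond=None)
        models.append(W)
    return models

rng = np.random.default_rng(0)
X0 = rng.normal(size=(20, 3))                 # 20 samples, 3 metabolite variables
X1 = X0 @ np.diag([1.0, 2.0, 0.5])            # change between t0 and t1
X2 = X1.copy()                                # no change between t1 and t2
models = fit_piecewise([X0, X1, X2])
```

Each fitted map is directly interpretable: the first recovers the per-variable scaling, the second is close to the identity, mirroring the model-transparency advantage the abstract emphasises.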
Legibility Evaluation Using Point-of-regard Measurement
NASA Astrophysics Data System (ADS)
Saito, Daisuke; Saito, Keiichi; Saito, Masao
Web site visibility has become important because of the rapid dissemination of the World Wide Web, and combinations of foreground and background colors are crucial in providing high visibility. In our previous studies, the visibilities of several web-safe color combinations were examined using a psychological method. In those studies, simple stimuli were used because of experimental restrictions. In this paper, the legibility of sentences on web sites was examined using a psychophysiological method, point-of-regard measurement, to obtain further practical data. Ten people with normal color vision, aged 21 to 29, were recruited. Each page was arranged with the same number of characters per line, and the four representative achromatic web-safe colors, that is, #000000, #666666, #999999 and #CCCCCC, were examined. The reading time per character and the gaze time per line were obtained from point-of-regard measurement, and the normalized reading times and gaze times of the three gray colors were calculated and compared. The results showed that reading time and gaze time lengthen at the same ratio as the contrast decreases. Therefore, the legibility of color combinations can be estimated by point-of-regard measurement.
Four-body trajectory optimization
NASA Technical Reports Server (NTRS)
Pu, C. L.; Edelbaum, T. N.
1973-01-01
A collection of typical three-body trajectories from the L1 libration point on the sun-earth line to the earth is presented. These trajectories in the sun-earth system are grouped into four distinct families which differ in transfer time and delta-V requirements. Curves showing the variation of delta-V with respect to transfer time, and typical two- and three-impulse primer vector histories, are included. The development of a four-body trajectory optimization program to compute fuel-optimal trajectories between the earth and a point in the sun-earth-moon system is also discussed. Methods for generating fuel-optimal two-impulse trajectories which originate at the earth or a point in space, and fuel-optimal three-impulse trajectories between two points in space, are presented. A brief qualitative comparison of these methods is given. An example of a four-body two-impulse transfer from the L1 libration point to the earth is included.
Memory persistency and nonlinearity in daily mean dew point across India
NASA Astrophysics Data System (ADS)
Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik; Bhattacharjee, Anup Kumar
2016-04-01
An effort is made in this work to estimate the memory persistence of the daily mean dew point time series obtained from seven weather stations, viz. Kolkata, Chennai (Madras), New Delhi, Mumbai (Bombay), Bhopal, Agartala and Ahmedabad, representing different geographical zones in India. Hurst exponent values reveal an anti-persistent behaviour of these dew point series. To corroborate the Hurst exponent values, five different scaling methods have been used and the corresponding results compared to synthesize a finer and more reliable conclusion. The present analysis also indicates that the variation in daily mean dew point is governed by a non-stationary process with stationary increments. The delay vector variance (DVV) method has been exploited to investigate nonlinearity, and the present calculation confirms the presence of a deterministic nonlinear profile in the daily mean dew point time series of the seven stations.
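A crude rescaled-range (R/S) estimate of the Hurst exponent, one of the family of scaling methods compared in such analyses, can be sketched as follows; estimates below 0.5 indicate anti-persistence. This is a textbook R/S sketch, not the paper's exact estimators:

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        ratios = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()          # range of cumulative deviations
            s = chunk.std()
            if s > 0:
                ratios.append(r / s)
        sizes.append(size)
        rs.append(np.mean(ratios))
        size *= 2
    # H is the slope of log(R/S) against log(window size)
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(1)
h_noise = hurst_rs(rng.normal(size=4096))           # white noise: H near 0.5
h_anti = hurst_rs(np.diff(rng.normal(size=4097)))   # differenced noise: anti-persistent
```

The differenced series, like the dew point series in the abstract, produces a markedly lower estimate than uncorrelated noise.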
Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications
Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman
2017-01-01
Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847
A novel point cloud registration using 2D image features
NASA Astrophysics Data System (ADS)
Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng
2017-01-01
Since a 3D scanner captures only one view of an object at a time, registration of multiple scans is the key issue of 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing-angle images and then uses the 2D feature-based matching method SURF to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between their matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least-squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (an SVD-based approach). Therefore, the 3D model of an object can be reconstructed by aligning those point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly; the performance is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between two scenes, the proposed method is robust to larger differences in viewing angle.
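The SVD-based orthogonal Procrustes step can be sketched as follows (the Kabsch formulation; the SURF feature-matching stage is omitted and synthetic correspondences are used instead):

```python
import numpy as np

def procrustes_rotation(P, Q):
    """Least-squares rotation R and translation t aligning points P onto Q,
    derived via SVD of the cross-covariance (orthogonal Procrustes / Kabsch)."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q0 - R @ p0
    return R, t

rng = np.random.default_rng(2)
P = rng.normal(size=(50, 3))
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])     # rotate then translate
R, t = procrustes_rotation(P, Q)
err = np.abs(R - Rz).max()
```

With exact correspondences the rotation and translation are recovered to machine precision; in the paper, using only the best half of the sorted SURF pairs keeps outliers out of this step.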
Shao, Chenxi; Xue, Yong; Fang, Fang; Bai, Fangzhou; Yin, Peifeng; Wang, Binghong
2015-07-01
The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.
Analyzing survival curves at a fixed point in time for paired and clustered right-censored data
Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De
2018-01-01
In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have a clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach would need to be modified. In this paper, we extend the statistics to accommodate the possible within-paired correlation and within-clustered correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
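The quantity being compared, the Kaplan-Meier estimate of survival at a fixed time point, can be sketched as follows (the transformed test statistics and correlation adjustments discussed in the paper are not shown):

```python
def km_survival_at(times, events, t0):
    """Kaplan-Meier estimate of S(t0); event indicator: 1 = event, 0 = censored."""
    s = 1.0
    for t in sorted({t for t, e in zip(times, events) if e == 1}):
        if t > t0:
            break
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        s *= 1.0 - deaths / at_risk            # product-limit update
    return s

# six subjects; the subject observed at time 4 is censored
times = [2, 3, 4, 5, 6, 7]
events = [1, 1, 0, 1, 1, 1]
s5 = km_survival_at(times, events, t0=5)       # (5/6) * (4/5) * (2/3)
```

Comparing `s5` between two arms at a clinically chosen `t0` is the fixed-time-point comparison the abstract extends to paired and clustered data.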
Jang, Hae-Won; Ih, Jeong-Guon
2013-03-01
The time domain boundary element method (TBEM) for calculating the exterior sound field using the Kirchhoff integral suffers from non-uniqueness and exponential divergence. In this work, a method to stabilize the TBEM calculation for the exterior problem is suggested. The time domain CHIEF (Combined Helmholtz Integral Equation Formulation) method is newly formulated to suppress low-order fictitious internal modes. This method constrains the surface Kirchhoff integral by forcing the pressures at additional interior points to be zero when the shortest retarded time between boundary nodes and an interior point elapses. However, even after using the CHIEF method, the TBEM calculation suffers from exponential divergence due to the remaining unstable high-order fictitious modes at frequencies higher than the frequency limit of the boundary element model. For complete stabilization, such troublesome modes are selectively adjusted by projecting the time response onto the eigenspace. In a test example of a transiently pulsating sphere, the final average error norm of the stabilized response compared to the analytic solution is 2.5%.
A new template matching method based on contour information
NASA Astrophysics Data System (ADS)
Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong
2014-11-01
Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. The closed contour matching method is a popular kind of template matching. This paper presents a new closed contour template matching method which is suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve the matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the process of model construction, triples and a distance image are obtained from the template image. A certain number of triples, each composed of three points, are created from the contour information extracted from the template image. The rule for selecting the three points is that they divide the template contour into three equal parts. The distance image is obtained by distance transform: each point on the distance image represents the distance from that point to the nearest point on the template contour. During matching, triples of the searching image are created with the same rule as the triples of the model. Using the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found. We can then obtain the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. To speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly by simple operations of addition and multiplication.
In the fine searching process, the initial RST parameters are refined to obtain the final accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient, and can be used in many real-time applications.
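The distance-image verification step can be sketched as follows. The brute-force distance transform and the toy square contour are simplifications for illustration (a real implementation would use a fast two-pass distance transform):

```python
import math

def distance_image(contour, w, h):
    """For every pixel, the distance to the nearest template-contour point."""
    return [[min(math.dist((x, y), c) for c in contour) for x in range(w)]
            for y in range(h)]

def score(contour, dist_img):
    """Mean distance a candidate contour lands on; 0 means perfect overlap."""
    return sum(dist_img[y][x] for x, y in contour) / len(contour)

w = h = 16
square = ([(x, 4) for x in range(4, 12)] + [(x, 11) for x in range(4, 12)] +
          [(4, y) for y in range(5, 11)] + [(11, y) for y in range(5, 11)])
dimg = distance_image(square, w, h)
good = score(square, dimg)                    # the template scored on itself
bad = score([(x + 2, y) for x, y in square], dimg)   # a translated candidate
```

Once the distance image is precomputed offline, verifying any candidate pose costs only table lookups and additions, which is why the verification in the paper is fast.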
Lie Symmetry Analysis and Explicit Solutions of the Time Fractional Fifth-Order KdV Equation
Wang, Gang wei; Xu, Tian zhou; Feng, Tao
2014-01-01
In this paper, using the Lie group analysis method, we study the invariance properties of the time fractional fifth-order KdV equation. A systematic procedure for deriving Lie point symmetries of the time fractional fifth-order KdV equation is performed. In the sense of point symmetry, all of the vector fields and the symmetry reductions of the fractional fifth-order KdV equation are obtained. Finally, by virtue of the sub-equation method, some exact solutions to the fractional fifth-order KdV equation are provided. PMID:24523885
Research on a high-precision calibration method for tunable lasers
NASA Astrophysics Data System (ADS)
Xiang, Na; Li, Zhengying; Gui, Xin; Wang, Fan; Hou, Yarong; Wang, Honghai
2018-03-01
Tunable lasers are widely used in the field of optical fiber sensing, but nonlinear tuning exists even for zero external disturbance and limits the accuracy of demodulation. In this paper, a high-precision calibration method for tunable lasers is proposed. A comb filter is introduced, and the real-time output wavelength and scanning rate of the laser are calibrated by linearly fitting several time-frequency reference points obtained from it, while the beat signal generated by the auxiliary interferometer is interpolated and frequency-multiplied to find more accurate zero-crossing points; these points are used as wavelength counters to resample the comb signal and correct the nonlinear effect, which ensures that the time-frequency reference points of the comb filter are linear. A stability experiment and a strain sensing experiment verify the calibration precision of this method. The experimental results show that the stability and wavelength resolution of the FBG demodulation can reach 0.088 pm and 0.030 pm, respectively, using a tunable laser calibrated by the proposed method. We have also compared the demodulation accuracy in the presence or absence of the comb filter; the results show that the introduction of the comb filter yields a 15-fold wavelength resolution enhancement.
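The zero-crossing extraction from the auxiliary-interferometer beat signal can be sketched as follows; linear interpolation between samples stands in for the interpolation and frequency multiplication described in the abstract, and the synthetic 50 Hz tone is an assumption for illustration:

```python
import numpy as np

def zero_crossings(t, y):
    """Sub-sample zero-crossing times of the sampled signal y(t),
    located by linear interpolation between adjacent samples."""
    zc = []
    for i in range(len(y) - 1):
        if y[i] == 0.0:
            zc.append(t[i])
        elif y[i] * y[i + 1] < 0:
            frac = y[i] / (y[i] - y[i + 1])    # linear-interpolation fraction
            zc.append(t[i] + frac * (t[i + 1] - t[i]))
    return np.array(zc)

t = np.linspace(0.0, 1.0, 2001)
beat = np.sin(2 * np.pi * 50.0 * t)    # ideal beat: crossings every 10 ms
zc = zero_crossings(t, beat)
spacing = np.diff(zc).mean()
```

For a linear sweep the crossings are evenly spaced; deviations of the measured spacings from a constant are exactly the nonlinearity the resampling step corrects.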
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, or a multi-processor computer.
Ballari, Rajashekhar V; Martin, Asha; Gowda, Lalitha R
2013-01-01
Brinjal is an important vegetable crop. Major crop loss of brinjal is due to insect attack. Insect-resistant EE-1 brinjal has been developed and is awaiting approval for commercial release. Consumer health concerns and implementation of international labelling legislation demand reliable analytical detection methods for genetically modified (GM) varieties. End-point and real-time polymerase chain reaction (PCR) methods were used to detect EE-1 brinjal. In end-point PCR, primer pairs specific to the 35S CaMV promoter, the NOS terminator and the nptII gene common to other GM crops were used. Based on the revealed 3' transgene integration sequence, primers specific for the event EE-1 brinjal were designed. These primers were used for end-point single, multiplex and SYBR-based real-time PCR. End-point single PCR showed that the designed primers were highly specific to event EE-1 with a sensitivity of 20 pg of genomic DNA, corresponding to 20 copies of haploid EE-1 brinjal genomic DNA. The limits of detection and quantification for the SYBR-based real-time PCR assay were 10 and 100 copies, respectively. The prior development of detection methods for this important vegetable crop will facilitate compliance with any forthcoming labelling regulations. Copyright © 2012 Society of Chemical Industry.
Non-Linear Harmonic flow simulations of a High-Head Francis Turbine test case
NASA Astrophysics Data System (ADS)
Lestriez, R.; Amet, E.; Tartinville, B.; Hirsch, C.
2016-11-01
This work investigates the use of the non-linear harmonic (NLH) method for a high-head Francis turbine, the Francis99 workshop test case. The NLH method relies on a Fourier decomposition of the unsteady flow components into harmonics of the Blade Passing Frequencies (BPF), which are the fundamentals of the periodic disturbances generated by the adjacent blade rows. The unsteady flow solution is obtained by marching in pseudo-time to a steady-state solution of the transport equations associated with the time-mean, the BPFs and their harmonics. Thanks to this transposition into the frequency domain, meshing only one blade channel is sufficient, as for a steady flow simulation. Notable benefits in terms of computing cost and engineering time can therefore be obtained compared to the classical time-marching approach using sliding-grid techniques. The method has been applied to three operating points of the Francis99 workshop high-head Francis turbine. Steady and NLH flow simulations have been carried out for these configurations. The impact of grid size and near-wall refinement is analysed at all operating points for steady simulations and at the Best Efficiency Point (BEP) for NLH simulations. NLH results for a selected grid size are then compared for the three operating points, reproducing the tendencies observed in the experiment.
A new method for photon transport in Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Sato, T.; Ogawa, K.
1999-12-01
Monte Carlo methods are used to evaluate data-processing methods such as scatter and attenuation compensation in single photon emission CT (SPECT), treatment planning in radiation therapy, and many industrial applications. In Monte Carlo simulation, photon transport requires calculating the distance from the location of the emitted photon to the nearest boundary of each uniform attenuating medium along its path of travel, and comparing this distance with the length of its path generated at emission. Here, the authors propose a new method that omits the calculation of the location of the exit point of the photon from each voxel and of the distance between the exit point and the original position. The method only checks the medium of each voxel along the photon's path. If the medium differs from that in the voxel from which the photon was emitted, the authors calculate the location of the entry point into that voxel, and the length of the path is compared with the mean free path length generated by a random number. Simulations using the MCAT phantom show that the ratios of the calculation time were 1.0 for the voxel-based method and 0.51 for the proposed method with a 256×256×256 matrix image, confirming the effectiveness of the algorithm.
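The idea of checking only the medium ID of traversed voxels, and computing a boundary point only when the medium changes, can be sketched as follows. This is a deliberately simplified stand-in (a fixed sampling step rather than exact voxel-by-voxel traversal, a toy two-medium phantom, hypothetical names), not the authors' implementation:

```python
import numpy as np

# Much-simplified sketch of the idea in the abstract: instead of computing
# the exit point of every voxel, just check the medium ID of each voxel
# along the photon's path and only compute boundary details when the
# medium changes.

def distance_to_medium_change(grid, origin, direction, step=0.01, max_dist=100.0):
    """Walk along the ray, checking the medium of each sampled voxel.

    Returns the approximate distance at which the medium first differs
    from the medium at the origin, or max_dist if it never changes.
    """
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    origin = np.asarray(origin, float)
    start_medium = grid[tuple(origin.astype(int))]
    d = 0.0
    while d < max_dist:
        p = origin + d * direction
        idx = tuple(p.astype(int))
        if any(i < 0 or i >= n for i, n in zip(idx, grid.shape)):
            return d  # photon left the grid
        if grid[idx] != start_medium:
            return d  # medium changed: only now compute boundary details
        d += step
    return max_dist

# Toy phantom: medium 1 for x < 5, medium 2 for x >= 5.
grid = np.ones((10, 10, 10), dtype=int)
grid[5:, :, :] = 2

dist = distance_to_medium_change(grid, origin=(2.0, 5.0, 5.0), direction=(1, 0, 0))
```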
FPFH-based graph matching for 3D point cloud registration
NASA Astrophysics Data System (ADS)
Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua
2018-04-01
Correspondence detection is a vital step in point cloud registration, as it helps obtain a reliable initial alignment. In this paper, we put forward an advanced point-feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine initial candidate correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by a simulated annealing algorithm to obtain the final set of correct correspondences. Finally, we present a novel set partitioning method which transforms the NP-hard optimization problem into an O(n³)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost compared with other point cloud registration methods.
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
Current point cloud registration software has high hardware requirements and a heavy workload involving multiple interactive definitions, and the source code of the software with better processing results is not open. A two-step registration method based on normal vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is therefore proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and a calculation model for the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to complete rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has clear time and precision advantages for large point clouds.
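A minimal point-to-point ICP refinement loop of the kind used for the fine-registration stage can be sketched as below (toy 2-D data, brute-force nearest neighbours, closed-form rigid estimate via SVD; the paper's FPFH-based coarse stage and two-station data are not reproduced):

```python
import numpy as np

# Minimal point-to-point ICP sketch (toy data, no FPFH coarse stage):
# each iteration matches every source point to its nearest target point,
# then solves for the best rigid transform in closed form via SVD.

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force for clarity)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Toy example: a slightly rotated and shifted copy of a 2-D point set.
rng = np.random.default_rng(0)
target = rng.uniform(0, 10, size=(60, 2))
ang = 0.05
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
source = target @ R_true.T + np.array([0.3, -0.2])

aligned = icp(source, target)
rmse = np.sqrt(((aligned - target) ** 2).sum(axis=1).mean())
```

In practice the coarse stage matters: ICP of this plain form only converges from a nearby initial pose, which is exactly why the paper precedes it with an FPFH-based rough registration.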
Júnez-Ferreira, H E; Herrera, G S
2013-04-01
This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method is used that, at each step, selects the space-time point minimizing a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer with the objective of selecting, from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that of the existing monitoring program, which consists of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
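The sequential-selection idea can be sketched with a static Kalman update of a prior covariance: at each step, pick the candidate point whose noisy measurement most reduces the total variance. This is a hedged stand-in with a toy 1-D exponential covariance, not the paper's software or its space-time variogram model:

```python
import numpy as np

# Hedged sketch of the sequential design idea (not the paper's software):
# given a prior covariance among candidate monitoring points, repeatedly
# pick the point whose (noisy) measurement most reduces the total
# posterior variance, applying a static Kalman conditioning step after
# each pick.

def greedy_monitoring_design(P, n_picks, noise_var=0.01):
    P = P.copy()
    chosen = []
    for _ in range(n_picks):
        # variance reduction from measuring point i: sum_j P[j,i]^2 / (P[i,i]+r)
        gains = (P ** 2).sum(axis=0) / (np.diag(P) + noise_var)
        gains[chosen] = -np.inf          # never pick a point twice
        i = int(np.argmax(gains))
        chosen.append(i)
        # Kalman update of the covariance for a direct observation of point i
        k = P[:, i] / (P[i, i] + noise_var)
        P = P - np.outer(k, P[i, :])
        P = (P + P.T) / 2                # keep symmetric numerically
    return chosen, P

# Toy prior: exponential covariance on a 1-D line of 20 candidate points.
x = np.linspace(0.0, 10.0, 20)
P0 = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)

picks, P_post = greedy_monitoring_design(P0, n_picks=5)
total_var_before = np.trace(P0)
total_var_after = np.trace(P_post)
```

Points judged redundant are exactly those whose measurement would barely reduce the remaining variance, which mirrors how the paper prunes 418 space-time points down to 178 non-redundant ones.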
Homogenising time series: Beliefs, dogmas and facts
NASA Astrophysics Data System (ADS)
Domonkos, P.
2010-09-01
For obtaining reliable information about climate change and climate variability, the use of high-quality data series is essential, and one basic tool for quality improvement is the statistical homogenisation of observed time series. In recent decades a large number of homogenisation methods have been developed, but the real effects of their application on time series are still not entirely known. The ongoing COST HOME project (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than before. As part of the COST activity, a benchmark dataset was built whose characteristics approximate well those of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods and to analyse the real effects of selected theoretical recommendations. The author believes that several old theoretical rules have to be re-evaluated. Some examples of the open questions: a) Can statistically detected change-points be accepted only with confirmation from metadata? b) Do semi-hierarchic algorithms for detecting multiple change-points in time series function effectively in practice? c) Is it advisable to limit the spatial comparison of a candidate series to at most five other series in its neighbourhood? Empirical results, from the COST benchmark and other experiments, show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities look like part of the climatic variability, so the pure application of the classical assumption that change-points of observed time series can be found and corrected one by one is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in the raw series. The developers and users of homogenisation methods have to bear in mind that the eventual purpose of homogenisation is not to find change-points, but to obtain observed time series whose statistical properties characterise climate change and climate variability well.
GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition
NASA Astrophysics Data System (ADS)
Zhen, Z.; Jia, X.
2014-12-01
Element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible because only information about the nodes and the boundary of the study area is required in computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices would be harmed. Thus, when we increase the nodes of the velocity model in order to obtain higher resolution, the size of the computer's memory becomes a bottleneck: the original EFM can deal with at most 81×81 nodes with 2 GB of memory, as tested by Jia and Hu (2006). In order to solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition (GPP), and utilize GPUs to improve the computation efficiency. Considering the characteristics of the Gauss points, the GPP method does not influence the propagation of seismic waves in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver PARDISO. It is observed that our strategy can significantly reduce the computational time of K and M compared with the algorithm based on the CPU. The model tested is the Marmousi model: its length is 7425 m and its depth is 2990 m. We discretize the model with 595×298 nodes, 300×300 Gauss cells and 3×3 Gauss points in each cell. In contrast to the computational time of the conventional EFM, the GPUs-GPP approach substantially improves the efficiency. The speedup ratio for computing K and M is 120, and the speedup ratio for the RTM computation is 11.5, while the accuracy of imaging is not harmed. Another advantage of the GPUs-GPP method is its easy application to other numerical methods such as the FEM. Finally, in the GPUs-GPP method, the arrays require quite limited memory storage, which makes the method promising for dealing with large-scale 3D problems.
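The CSR-plus-CG combination mentioned above can be sketched on the CPU as follows (a hand-rolled conjugate gradient that touches the matrix only through CSR matrix-vector products; a toy 1-D Laplacian stands in for the stiffness matrix K, and neither CULA Sparse nor PARDISO is involved):

```python
import numpy as np

# Sketch of the storage/solver combination mentioned above: a stiffness-like
# sparse matrix kept in compressed sparse row (CSR) form, and a conjugate
# gradient (CG) iteration that only ever touches the matrix through CSR
# matrix-vector products.

def csr_matvec(data, indices, indptr, x):
    """y = A @ x with A stored as CSR arrays (data, indices, indptr)."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        lo, hi = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y

def cg(matvec, b, tol=1e-10, max_iter=1000):
    """Conjugate gradients for a symmetric positive-definite system."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Build a small SPD 1-D Laplacian (tridiagonal [-1, 2, -1]) in CSR form.
n = 50
data, indices, indptr = [], [], [0]
for i in range(n):
    for j, v in ((i - 1, -1.0), (i, 2.0), (i + 1, -1.0)):
        if 0 <= j < n:
            data.append(v)
            indices.append(j)
    indptr.append(len(data))
data, indices, indptr = map(np.array, (data, indices, indptr))

b = np.ones(n)
x = cg(lambda v: csr_matvec(data, indices, indptr, v), b)
residual = np.linalg.norm(csr_matvec(data, indices, indptr, x) - b)
```

The appeal for a GPU implementation is visible even in this sketch: CG needs only matrix-vector products and dot products, both of which parallelize well, whereas a direct sparse solver such as PARDISO relies on factorizations that are harder to accelerate.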
John F. Caratti
2006-01-01
The FIREMON Point Intercept (PO) method is used to assess changes in plant species cover or ground cover for a macroplot. This method uses a narrow diameter sampling pole or sampling pins, placed at systematic intervals along line transects to sample within plot variation and quantify statistically valid changes in plant species cover and height over time. Plant...
ERIC Educational Resources Information Center
Finch, W. Holmes; Shim, Sungok Serena
2018-01-01
Collection and analysis of longitudinal data is an important tool in understanding growth and development over time in a whole range of human endeavors. Ideally, researchers working in the longitudinal framework are able to collect data at more than two points in time, as this will provide them with the potential for a deeper understanding of the…
Ricker, Martin; Peña Ramírez, Víctor M.; von Rosen, Dietrich
2014-01-01
Growth curves are monotonically increasing functions obtained by repeatedly measuring the same subjects over time. The classical growth curve model in the statistical literature is the Generalized Multivariate Analysis of Variance (GMANOVA) model. In order to model the tree trunk radius (r) over time (t) of trees on different sites, GMANOVA is combined here with the adapted PL regression model Q = A·T + E, where A is the initial relative growth to be estimated and E is an error term for each tree and time point. The model further involves the turning point radius (TPR) of the sigmoid curve and an estimated calibrating time-radius point. Advantages of the approach are that growth rates can be compared among growth curves with different turning point radiuses and different starting points, hidden outliers are easily detectable, the method is statistically robust, and heteroscedasticity of the residuals among time points is allowed. The model was implemented with dendrochronological data of 235 Pinus montezumae trees on ten Mexican volcano sites to calculate comparison intervals for the estimated initial relative growth A. One site (at the Popocatépetl volcano) stood out, with A being 3.9 times the value of the site with the slowest-growing trees. Calculating variance components for the initial relative growth, 34% of the growth variation was found among sites, 31% among trees, and 35% over time. Without the Popocatépetl site, the numbers changed to 7%, 42%, and 51%. Further explanation of differences in growth would need to focus on factors that vary within sites and over time. PMID:25402427
Freitag, Sonja; Seddouki, Rachida; Dulon, Madeleine; Kersten, Jan Felix; Larsson, Tore J; Nienhaus, Albert
2014-04-01
To examine the influence of the following two factors on the proportion of time that nurses spend in a forward-bending trunk posture: (i) the bed height during basic care activities at the bedside and (ii) the work method during basic care activities in the bathroom. A further aim was to examine the connection between the proportion of time spent in a forward-bending posture and the perceived exertion. Twelve nurses in a geriatric nursing home each performed a standardized care routine at the bedside and in the bathroom. The CUELA (German abbreviation for 'computer-assisted recording and long-term analysis of musculoskeletal loads') measuring system was used to record all trunk inclinations. Each participant conducted three tests with the bed at different heights (knee height, thigh height, and hip height), and in the bathroom three tests were performed with different work methods (standing, kneeling, and sitting). After each test, participants rated their perceived exertion on the 15-point Borg scale (6 = no exertion at all and 20 = exhaustion). If the bed was raised from knee to thigh level, the proportion of time spent in an upright position increased by 8.2% points. However, the effect was not significant (P = 0.193). Only when the bed was raised to hip height was there a significant increase, of 19.8% points (reference: thigh level; P = 0.003) and 28.0% points (reference: knee height; P < 0.001). Bathroom tests: compared with the standing work method, the kneeling and sitting work methods led to a significant increase in the proportion of time spent in an upright posture, by 19.4% points (P = 0.003) and 25.7% points (P < 0.001), respectively. The greater the proportion of time spent in an upright position, the lower the Borg rating awarded (P < 0.001). The higher the proportion of time that nursing personnel work in an upright position, the less strenuous they perceive the work to be.
Raising the bed to hip height and using a stool in the bathroom significantly increase the proportion of time that nursing personnel work in an upright position. Nursing staff can spend a considerably greater proportion of their time in an ergonomic posture if stools and height-adjustable beds are provided in healthcare institutions.
Time-dependent spectral renormalization method
NASA Astrophysics Data System (ADS)
Cole, Justin T.; Musslimani, Ziad H.
2017-11-01
The spectral renormalization method was introduced by Ablowitz and Musslimani (2005) as an effective way to numerically compute (time-independent) bound states for certain nonlinear boundary value problems. In this paper, we extend those ideas to the time domain and introduce a time-dependent spectral renormalization method as a numerical means to simulate linear and nonlinear evolution equations. The essence of the method is to convert the underlying evolution equation from its partial or ordinary differential form (using Duhamel's principle) into an integral equation. The solution sought is then viewed as a fixed point in both space and time. The resulting integral equation is numerically solved using a simple renormalized fixed-point iteration method. Convergence is achieved by introducing a time-dependent renormalization factor which is numerically computed from the physical properties of the governing evolution equation. The proposed method has the ability to incorporate physics into the simulations in the form of conservation laws or dissipation rates. This novel scheme is implemented on benchmark evolution equations: the classical nonlinear Schrödinger (NLS) equation, the integrable PT-symmetric nonlocal NLS equation, and the viscous Burgers' equation, each a prototypical example of a conservative or dissipative dynamical system. Numerical implementation and algorithm performance are also discussed.
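The renormalization idea, rescaling the field each step by a factor computed from a conserved quantity, can be illustrated on the classical NLS equation. The sketch below uses a plain split-step spectral integrator with an L² renormalization factor; it is an illustration of the idea only, not the authors' Duhamel-based integral scheme:

```python
import numpy as np

# Toy illustration of the renormalization idea for the NLS equation
# i u_t + u_xx + 2|u|^2 u = 0 on a periodic domain. A split-step spectral
# integrator takes the step; a renormalization factor computed from the
# conserved L2 norm then rescales the field. (A sketch of the *idea* only,
# not the paper's Duhamel-based integral formulation.)

n, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

def step(u, dt):
    """One symmetric split step: half nonlinear, full linear, half nonlinear."""
    u = u * np.exp(1j * 2 * np.abs(u) ** 2 * dt / 2)
    u = np.fft.ifft(np.exp(-1j * k ** 2 * dt) * np.fft.fft(u))
    u = u * np.exp(1j * 2 * np.abs(u) ** 2 * dt / 2)
    return u

u = 1.0 / np.cosh(x)          # bright soliton initial condition
norm0 = np.sqrt(np.sum(np.abs(u) ** 2))

for _ in range(200):
    u = step(u, dt=0.01)
    # renormalization factor enforcing the conserved L2 norm
    R = norm0 / np.sqrt(np.sum(np.abs(u) ** 2))
    u *= R

norm_final = np.sqrt(np.sum(np.abs(u) ** 2))
```

For this particular integrator both substeps are already norm-preserving, so R stays near 1; the point of the factor, as in the paper, is that it enforces the conservation law exactly even when the underlying discrete iteration would otherwise drift.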
Earth observing system instrument pointing control modeling for polar orbiting platforms
NASA Technical Reports Server (NTRS)
Briggs, H. C.; Kia, T.; Mccabe, S. A.; Bell, C. E.
1987-01-01
An approach to instrument pointing control performance assessment for large multi-instrument platforms is described. First, instrument pointing requirements and reference platform control systems for the Eos Polar Platforms are reviewed. Performance modeling tools are then described, including NASTRAN models of two large platforms, a modal selection procedure utilizing a balanced realization method, and reduced-order platform models with core and instrument pointing control loops added. Time-history simulations of instrument pointing and stability performance in response to commanded slewing of adjacent instruments demonstrate the limits of tolerable slew activity. Simplified models of rigid body responses are also developed for comparison. Instrument pointing control methods required in addition to the core platform control system to meet instrument pointing requirements are considered.
Recurrence Density Enhanced Complex Networks for Nonlinear Time Series Analysis
NASA Astrophysics Data System (ADS)
Costa, Diego G. De B.; Reis, Barbara M. Da F.; Zou, Yong; Quiles, Marcos G.; Macau, Elbert E. N.
We introduce a new method, entitled Recurrence Density Enhanced Complex Network (RDE-CN), to properly analyze nonlinear time series. Our method first transforms a recurrence plot into a figure with a reduced number of points that preserves the main and fundamental recurrence properties of the original plot. This resulting figure is then reinterpreted as a complex network, which is further characterized by network statistical measures. We illustrate the computational power of the RDE-CN approach using time series from both the logistic map and experimental fluid flows, which show that our method distinguishes different dynamics as well as traditional recurrence analysis does. Therefore, the proposed methodology characterizes the recurrence matrix adequately, while using a reduced set of points from the original recurrence plots.
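The starting point of such an analysis, turning a recurrence structure into a network, can be sketched as follows (a plain ε-recurrence network for a logistic-map series; the RDE-CN density-based reduction itself is not reproduced here):

```python
import numpy as np

# Sketch of the pipeline's starting point (a plain epsilon-recurrence
# network, our toy stand-in, not the RDE-CN reduction itself): build a
# recurrence matrix for a logistic-map time series and read it as the
# adjacency matrix of a complex network, whose degree statistics then
# characterize the dynamics.

def logistic_series(r, x0, n):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

def recurrence_network(series, eps):
    """Adjacency matrix: points closer than eps are linked (no self-loops)."""
    d = np.abs(series[:, None] - series[None, :])
    A = (d < eps).astype(int)
    np.fill_diagonal(A, 0)
    return A

x = logistic_series(r=3.9, x0=0.4, n=500)   # chaotic regime
A = recurrence_network(x, eps=0.05)
degrees = A.sum(axis=1)
mean_degree = degrees.mean()
```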
A fast image matching algorithm based on key points
NASA Astrophysics Data System (ADS)
Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng
2014-05-01
Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of a matching algorithm for craft navigation, such as speed, accuracy and adaptability, a fast key point image matching method is investigated and developed. The main research tasks include: (1) Developing an improved fast key point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating a self-adapting threshold was introduced for images with different contrast. The Hessian matrix was adopted to eliminate unstable edge points in order to obtain key points with higher stability. This approach to detecting key points features a small amount of computation, high positioning accuracy and strong anti-noise ability; (2) PCA-SIFT is utilized to describe key points. A 128-dimensional vector is formed based on the SIFT method for each extracted key point. A low-dimensional feature space was established from the eigenvectors of all the key points, and each descriptor was projected onto this space to form a low-dimensional vector. The key points were re-described by these dimension-reduced vectors. After PCA, the descriptor was reduced from the original 128 dimensions to 20. This reduces the dimensionality of the approximate nearest-neighbour search, thereby increasing overall speed; (3) The distance ratio between the nearest and second-nearest neighbours is used as the criterion for initial matching, from which the original matched point pairs are obtained. Based on an analysis of the common methods used for eliminating false matching point pairs (e.g. RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to further discard false matched pairs; and (4) An affine transformation model is introduced to correct coordinate differences between the real-time image and the reference image, resulting in the matching of the two images. SPOT5 remote sensing images captured on different dates and airborne images captured with different flight attitudes were used to test the performance of the method in terms of matching accuracy, operation time and robustness to rotation. Results show the effectiveness of the approach.
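The PCA step described in (2) can be sketched generically: project 128-dimensional descriptors onto their top 20 principal components. Random vectors stand in for real SIFT descriptors here; only the dimensions follow the text:

```python
import numpy as np

# Sketch of the PCA step described in (2): project 128-dimensional
# SIFT-style descriptors onto the top principal components to obtain
# compact 20-dimensional descriptors. (Random descriptors stand in for
# real SIFT output; the dimensions 128 -> 20 follow the text.)

def pca_project(descriptors, n_components):
    """Project row-vector descriptors onto their top principal components."""
    mean = descriptors.mean(axis=0)
    centered = descriptors - mean
    # principal directions via SVD of the centered data matrix
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_components]
    return centered @ basis.T, basis, mean

rng = np.random.default_rng(1)
desc = rng.normal(size=(1000, 128))          # stand-in descriptors
reduced, basis, mean = pca_project(desc, n_components=20)
```

Nearest-neighbour queries in the 20-dimensional space are then much cheaper than in the original 128 dimensions, which is the speed gain the abstract describes.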
Selecting the most appropriate time points to profile in high-throughput studies
Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv
2017-01-01
Biological systems are increasingly being studied by high-throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method, which solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation of the expression values at the non-selected points. Further, even though the selection is based only on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high-throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
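The combinatorial problem TPS addresses can be illustrated with a simple greedy stand-in (not the published TPS algorithm): starting from the two end points, repeatedly add the sampled time point that most reduces the linear-interpolation reconstruction error of dense profiles:

```python
import numpy as np

# Hedged sketch of the time-point-selection problem (a simple greedy
# stand-in, not the published TPS algorithm): starting from the two end
# points, repeatedly add the time point that most reduces the
# linear-interpolation reconstruction error of densely sampled profiles.

def greedy_time_points(t, profiles, k):
    """Pick k time indices (always including both ends) greedily."""
    chosen = {0, len(t) - 1}
    while len(chosen) < k:
        best, best_err = None, np.inf
        for c in range(len(t)):
            if c in chosen:
                continue
            idx = sorted(chosen | {c})
            # reconstruct every profile from the candidate subset
            err = sum(
                np.sum((np.interp(t, t[idx], p[idx]) - p) ** 2)
                for p in profiles
            )
            if err < best_err:
                best, best_err = c, err
        chosen.add(best)
    return sorted(chosen), best_err

t = np.linspace(0, 1, 30)
profiles = [np.sin(2 * np.pi * t), np.exp(-3 * t), t ** 2]
picked, err = greedy_time_points(t, profiles, k=8)
```

The real TPS problem adds the twist that the points are chosen from cheap high-rate expression data but must also represent protein, miRNA and methylation profiles that were never used in the selection.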
Investigating the Accuracy of Point Clouds Generated for Rock Surfaces
NASA Astrophysics Data System (ADS)
Seker, D. Z.; Incekara, A. H.
2016-12-01
Point clouds produced by means of different techniques are widely used to model rocks and obtain properties of rock surfaces such as roughness, volume and area. These point clouds can be generated by laser scanning and close-range photogrammetry. Laser scanning is the most common method: the laser scanner produces a 3D point cloud at regular intervals. In close-range photogrammetry, a point cloud can be produced from photographs taken in appropriate conditions, depending on developing hardware and software technology. Much photogrammetric software, open source or not, currently supports point cloud generation. The two methods are close to each other in terms of accuracy: sufficient accuracy in the mm and cm range can be obtained with a qualified digital camera or laser scanner. In both methods, field work is completed in less time than with conventional techniques. In close-range photogrammetry, any part of a rock surface can be completely represented owing to overlapping oblique photographs. Although the two methods produce similar data, they differ considerably in cost. In this study, whether point clouds produced from photographs can be used instead of point clouds produced by a laser scanner is investigated. For this purpose, rock surfaces with complex and irregular shapes located on the İstanbul Technical University Ayazaga Campus were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on part of a 30 m x 10 m rock surface. 2D and 3D analyses were performed for several regions selected from the point clouds of the surface models; the 2D analysis is area-based and the 3D analysis is volume-based. The analyses showed that the point clouds from both methods are similar and can be used as alternatives to each other. This demonstrates that point clouds produced from photographs, which are economical and quicker to produce, can be used in several studies instead of point clouds produced by a laser scanner.
Research of real-time communication software
NASA Astrophysics Data System (ADS)
Li, Maotang; Guo, Jingbo; Liu, Yuzhong; Li, Jiahong
2003-11-01
Real-time communication has been playing an increasingly important role in our work, our lives and ocean monitoring. With the rapid progress of computer and communication techniques, as well as the miniaturization of communication systems, adaptable and reliable real-time communication software is needed in ocean monitoring systems. This paper presents research on real-time communication software based on a point-to-point satellite intercommunication system. An object-oriented design method is adopted, which can transmit and receive video, audio and engineering data over a satellite channel. Several software modules were developed that realize point-to-point satellite intercommunication in the ocean monitoring system. The real-time communication software has three advantages. First, it increases the reliability of the point-to-point satellite intercommunication system. Second, optional parameters can be configured, which greatly increases the flexibility of the system. Third, some hardware is replaced by software, which not only decreases the cost of the system and promotes the miniaturization of the communication system, but also increases its agility.
Rotation and scale change invariant point pattern relaxation matching by the Hopfield neural network
NASA Astrophysics Data System (ADS)
Sang, Nong; Zhang, Tianxu
1997-12-01
Relaxation matching is one of the most relevant methods for image matching. The original relaxation matching technique using point patterns is sensitive to rotations and scale changes. We improve the original point pattern relaxation matching technique to be invariant to rotations and scale changes. A method that makes the Hopfield neural network perform this matching process is discussed. An advantage of this is that the relaxation matching process can be performed in real time with the neural network's massively parallel capability to process information. Experimental results with large simulated images demonstrate the effectiveness and feasibility of the method for performing point pattern relaxation matching invariant to rotations and scale changes, and of performing this matching with the Hopfield neural network. In addition, we show that the method presented can tolerate small random errors.
NASA Astrophysics Data System (ADS)
Muhiddin, F. A.; Sulaiman, J.
2017-09-01
The aim of this paper is to investigate the effectiveness of the Successive Over-Relaxation (SOR) iterative method, using the fourth-order Crank-Nicolson (CN) discretization scheme to derive a five-point Crank-Nicolson approximation equation for solving the diffusion equation. From this approximation equation, it can be shown that the corresponding system of five-point approximation equations can be generated and then solved iteratively. In order to assess the performance of the proposed iterative method with the fourth-order CN scheme, another point iterative method, Gauss-Seidel (GS), is also presented as a reference method. Finally, from the numerical results obtained with the fourth-order CN discretization scheme, it can be concluded that the SOR iterative method is superior in terms of number of iterations, execution time, and maximum absolute error.
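A generic dense-matrix SOR iteration of the kind compared above can be sketched as follows (toy Crank-Nicolson-style system; the paper's five-point scheme and its optimal relaxation factor are not reproduced, and the omega used here is simply hand-tuned for the toy matrix):

```python
import numpy as np

# Sketch of the SOR iteration discussed above (generic dense
# implementation on a toy Crank-Nicolson-style system; the paper's
# five-point scheme and optimal relaxation factor are not reproduced).

def sor_solve(A, b, omega=1.0, tol=1e-10, max_iter=5000):
    """Successive over-relaxation for A x = b; omega=1 gives Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            return x, it
    return x, max_iter

# Toy implicit-diffusion matrix: I + lam*(tridiagonal [−1, 2, −1]), SPD and
# diagonally dominant, similar in structure to a CN time-step system.
n, lam = 40, 0.5
A = np.eye(n) + lam * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
b = np.ones(n)

x_sor, iters_sor = sor_solve(A, b, omega=1.07)  # omega hand-tuned for this A
x_gs, iters_gs = sor_solve(A, b, omega=1.0)     # Gauss-Seidel for comparison
```

With a well-chosen omega, SOR typically needs noticeably fewer sweeps than GS to reach the same residual, which is the pattern the abstract reports for the five-point CN system.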
Ensemble Space-Time Correlation of Plasma Turbulence in the Solar Wind.
Matthaeus, W H; Weygand, J M; Dasso, S
2016-06-17
Single-point measurements of turbulence cannot distinguish variations in space from variations in time. We employ an ensemble of one- and two-point measurements in the solar wind to estimate the space-time correlation function in the comoving plasma frame. The method is illustrated using near-Earth spacecraft observations, employing ACE, Geotail, IMP-8, and Wind data sets. New results include an evaluation of both correlation time and correlation length from a single method, and a new assessment of the accuracy of the familiar frozen-in flow approximation. This novel view of the space-time structure of turbulence may prove essential in exploratory space missions such as Solar Probe Plus and Solar Orbiter, for which the frozen-in flow hypothesis may not be a useful approximation.
An automated model-based aim point distribution system for solar towers
NASA Astrophysics Data System (ADS)
Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen
2016-05-01
Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp
2011-08-18
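The core of the original DTW algorithm that DTW-S extends is a simple dynamic program over a cost matrix. A minimal sketch of generic DTW (not the authors' DTW-S code; the interpolation and simulation-based significance layer are omitted):

```python
import numpy as np

def dtw_align(x, y):
    """Classic dynamic time warping: O(len(x)*len(y)) DP over
    absolute differences; returns total cost and warping path."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal warping path from the bottom-right corner
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]
```

A time-stretched copy of a series aligns with zero cost, which is the property that makes the warping path usable as a per-time-point shift estimate.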
Alignment of time-resolved data from high throughput experiments.
Abidi, Nada; Franke, Raimo; Findeisen, Peter; Klawonn, Frank
2016-12-01
To better understand the dynamics of the underlying processes in cells, it is necessary to take measurements over a time course. Modern high-throughput technologies are often used for this purpose to measure the behavior of cell products like metabolites, peptides, proteins, [Formula: see text]RNA or mRNA at different points in time. Compared to classical time series, the number of time points is usually very limited and the measurements are taken at irregular time intervals. The main reasons for this are the costs of the experiments and the fact that the dynamic behavior usually shows a strong reaction and fast changes shortly after a stimulus and then slowly converges to a stable state. Another reason might simply be missing values. It is common to repeat the experiments and to collect replicates in order to carry out a more reliable analysis. The ideal assumptions that the initial stimulus really started at exactly the same time for all replicates and that the replicates are perfectly synchronized are seldom satisfied. Therefore, the time-resolved data must first be adjusted or aligned before further analysis is carried out. Dynamic time warping (DTW) is considered one of the common alignment techniques for time series data with equidistant time points. In this paper, we modified the DTW algorithm so that it can align sequences with measurements at different, non-equidistant time points with large gaps in between. This type of data is usually known as time-resolved data, characterized by irregular time intervals between measurements as well as non-identical time points for different replicates. The new algorithm can be easily used to align time-resolved data from high-throughput experiments and to cope with problems such as the scarcity of time points and noise in the measurements. We propose a modified DTW method that adapts to the requirements imposed by time-resolved data by using monotone cubic interpolation splines.
The presented approach provides a nonlinear alignment of two sequences that need neither equidistant time points nor measurements at identical time points. The proposed method is evaluated with artificial as well as real data. The software is available as an R package tra (Time-Resolved data Alignment), which is freely available at: http://public.ostfalia.de/klawonn/tra.zip.
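A minimal illustration of the idea with made-up replicate data: resample each irregularly sampled replicate onto a common grid with a monotone cubic (PCHIP) spline, then align with a standard DTW cost. This is a sketch of the approach, not the tra package code:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def resample_monotone(t, y, grid):
    """Monotone cubic (PCHIP) interpolation onto a common grid,
    suitable for non-equidistant time-resolved measurements."""
    return PchipInterpolator(t, y)(grid)

def dtw_cost(a, b):
    """Plain DTW alignment cost between two equal-rate sequences."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

# Two hypothetical replicates measured at different, irregular time points
t1, y1 = [0.0, 0.5, 2.0, 6.0, 24.0], [0.0, 1.8, 2.0, 1.0, 0.2]
t2, y2 = [0.0, 1.0, 3.0, 8.0, 24.0], [0.1, 1.9, 1.6, 0.8, 0.2]
grid = np.linspace(0.0, 24.0, 49)
cost = dtw_cost(resample_monotone(t1, y1, grid),
                resample_monotone(t2, y2, grid))
```

PCHIP is used here because, unlike an unconstrained cubic spline, it cannot overshoot between the sparse measurements, which matters when gaps between time points are large.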
An approach of point cloud denoising based on improved bilateral filtering
NASA Astrophysics Data System (ADS)
Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin
2018-04-01
An omnidirectional mobile platform is designed for building point clouds, based on an improved filtering algorithm employed to handle the depth image. First, the mobile platform can move flexibly and its control interface is convenient. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process depth images obtained by the Kinect sensor, and the results show that the noise removal is improved compared with standard bilateral filtering. Offline, the color images and processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method speeds up depth-image processing and improves the quality of the resulting point cloud.
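For reference, a plain (unaccelerated) bilateral filter on a depth image looks roughly like the following; the LBF speedup itself is not reproduced here, and the parameter values are illustrative:

```python
import numpy as np

def bilateral_filter(depth, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Plain bilateral filter for a depth image: each output pixel is a
    weighted mean of its neighbourhood, with weights combining spatial
    distance (sigma_s) and depth difference (sigma_r)."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(depth.astype(float), radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: suppress neighbours with very different depth
            range_w = np.exp(-(patch - depth[i, j]) ** 2 / (2 * sigma_r ** 2))
            weights = spatial * range_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

The range term is what preserves depth discontinuities: pixels across an edge get near-zero weight, so smoothing happens only within surfaces. The quadratic per-pixel cost of this naive loop is exactly the inefficiency a local/accelerated variant targets.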
Burnstein, Bryan D; Steele, Russell J; Shrier, Ian
2011-01-01
Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Cohort study. Eighteen different Cirque du Soleil shows. Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. We calculated the intraclass correlation coefficient (ICC) and limits of agreement between baseline and each time point and the ICC over all 4 time points combined. Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limit-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators.
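The ICC threshold above can be computed in several forms; a sketch of one common variant, ICC(2,1) (two-way random effects, absolute agreement, single measurement; the abstract does not state which form the authors used):

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1) from a subjects x occasions matrix via two-way ANOVA
    mean squares (rows = subjects, columns = time points)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    # Between-subject and between-occasion mean squares
    ms_r = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_c = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    # Residual mean square of the two-way layout
    resid = (data - data.mean(axis=1, keepdims=True)
             - data.mean(axis=0, keepdims=True) + grand)
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

With perfectly repeatable scores the statistic is exactly 1; systematic drift between occasions (a learning effect, as reported above) pushes it down through the ms_c term even when subjects keep their rank order.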
A new comparison method for dew-point generators
NASA Astrophysics Data System (ADS)
Heinonen, Martti
1999-12-01
A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of a transfer standard with characteristics most suitable for dew-point comparisons can be developed on the basis of the principles presented in this paper.
Development of a piecewise linear omnidirectional 3D image registration method
NASA Astrophysics Data System (ADS)
Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo
2016-12-01
This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use, the proposed method can improve image registration accuracy or reduce computation time, because the trade-off between the two can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase registration speed by reducing the number of cameras or feature points, and provide simultaneous information on the shapes and colors of captured objects.
A quantitative evaluation of two methods for preserving hair samples
Roon, David A.; Waits, L.P.; Kendall, K.C.
2003-01-01
Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and -20 °C freezing. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.
GIS-Based System for Post-Earthquake Crisis Management Using Cellular Network
NASA Astrophysics Data System (ADS)
Raeesi, M.; Sadeghi-Niaraki, A.
2013-09-01
Earthquakes are among the most destructive natural disasters. They happen mainly near the edges of tectonic plates, but they may happen just about anywhere, and they cannot be predicted. Quick response after disasters such as earthquakes decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed an area, several teams are sent to locate the damaged zones, and the search and rescue phase is usually maintained for many days, so reducing the time needed to reach survivors is critical. A Geographical Information System (GIS) can be used to decrease response time and improve management in critical situations, and rapid position estimation is a key part of this. This paper proposes a GIS-based system for post-earthquake disaster management. The system relies on several mobile positioning methods, such as the cell-ID and TA method, the signal strength method, the angle of arrival method, the time of arrival method, and the time difference of arrival method. For quick positioning, the system can be helped by any person who has a mobile device. After positioning and identifying the critical points, the points are sent to a central site that manages the quick-response relief procedure. This solution establishes a quick way to manage the post-earthquake crisis.
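As an illustration of the positioning step, a linearized least-squares fix from ranges to known base stations (the time-of-arrival idea in 2D; station coordinates and ranges below are made-up):

```python
import numpy as np

def trilaterate(anchors, distances):
    """Position fix from ranges d_i to known anchor positions a_i.
    Subtracting the first range equation from the others linearizes
    the system, which is then solved by least squares."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = anchors[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], d[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d[0] ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol
```

With noisy ranges (the realistic cellular case) the same least-squares solve returns the best linear fit rather than an exact intersection, which is why three or more base stations are wanted.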
Research on facial expression simulation based on depth image
NASA Astrophysics Data System (ADS)
Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao
2017-11-01
Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction, and many other fields. Facial expression is captured with a Kinect camera. The AAM algorithm, based on statistical information, is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. Facial feature points are detected automatically, while the 3D cartoon model feature points are marked manually. The aligned feature points are mapped by keyframe techniques. To improve the animation effect, non-feature points are interpolated based on empirical models, and the mapping and interpolation are completed under the constraint of Bézier curves. Thus the feature points on the cartoon face model can be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the proposed method can accurately simulate facial expressions. Finally, our method is compared with a previous method, and the data show that it greatly improves implementation efficiency.
Application of closed-form solutions to a mesh point field in silicon solar cells
NASA Technical Reports Server (NTRS)
Lamorte, M. F.
1985-01-01
A computer simulation method is discussed that provides equivalent simulation accuracy but exhibits significantly lower CPU running time per bias point compared to other techniques. This new method is applied to a mesh point field, as is customary in numerical integration (NI) techniques. The assumption of a linear approximation for the dependent variable, which is typically used in the finite difference and finite element NI methods, is not required. Instead, the set of device transport equations is applied to, and the closed-form solutions obtained for, each mesh point. The mesh point field is generated so that the coefficients in the set of transport equations exhibit small changes between adjacent mesh points. Application of this method to high-efficiency silicon solar cells is described, along with the treatment of Auger recombination, ambipolar considerations, built-in and induced electric fields, bandgap narrowing, carrier confinement, and carrier diffusivities. Bandgap narrowing has been investigated using Fermi-Dirac statistics, and these results show that bandgap narrowing is more pronounced and temperature-dependent, in contrast to the results based on Boltzmann statistics.
Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range
NASA Technical Reports Server (NTRS)
Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free-design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free-design variables. A comparison with other airfoil optimization methods is also included.
NASA Astrophysics Data System (ADS)
Okumura, Hiroshi; Suezaki, Masashi; Sueyasu, Hideki; Arai, Kohei
2003-03-01
An automated method that can select corresponding point candidates is developed. This method has three features: 1) use of the RIN-net for corresponding point candidate selection; 2) use of multi-resolution analysis with the Haar wavelet transformation to improve selection accuracy and noise tolerance; 3) use of context information about corresponding point candidates for screening the selected candidates. Here, 'RIN-net' denotes a back-propagation-trained feed-forward three-layer artificial neural network that takes rotation invariants as input data; in our system, pseudo-Zernike moments are employed as the rotation invariants. The RIN-net has an N x N pixel field of view (FOV). Experiments are conducted to evaluate the corresponding point candidate selection capability of the proposed method using various kinds of remotely sensed images. The experimental results show that the proposed method requires fewer training patterns and less training time, and achieves higher selection accuracy, than the conventional method.
Real-time implementation of camera positioning algorithm based on FPGA & SOPC
NASA Astrophysics Data System (ADS)
Yang, Mingcao; Qiu, Yuehong
2014-09-01
In recent years, with the development of positioning algorithms and FPGAs, real-time camera positioning has become feasible. Through an in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor is used to extract the pixels of three target objects, implemented through an FPGA hardware driver; visible-light LEDs are used here as the target points of the instrument. (2) Prior to extraction of the feature point coordinates, the image is filtered (median filtering is used here) to avoid platform effects on the physical properties of the system. (3) Marker point coordinates are extracted by the FPGA hardware circuit, with a new iterative threshold selection method used to segment the image; the binary segmented image is then labeled, and the coordinates of the feature points are calculated through the center-of-gravity method. (4) The direct linear transformation (DLT) and extreme constraints methods are applied to three-dimensional reconstruction of space coordinates with the planar-array CMOS system. An SOPC system-on-a-chip is used, taking advantage of its dual-core computing to run the matching and coordinate operations separately, thus increasing processing speed.
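Step (3) can be sketched in software terms as iterative threshold selection followed by a grey-weighted center-of-gravity computation. This is a generic sketch of the two techniques named, not the FPGA implementation:

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """Iterative threshold selection (Ridler-Calvard style): alternate
    between thresholding and averaging the two class means."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def centroid(img, t):
    """Grey-weighted centre of gravity of the above-threshold pixels."""
    mask = img > t
    ys, xs = np.nonzero(mask)
    w = img[mask].astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
```

Weighting the centroid by pixel intensity gives sub-pixel marker coordinates, which is what makes an LED spot only a few pixels wide usable for precise positioning.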
Imaging study on acupuncture points
NASA Astrophysics Data System (ADS)
Yan, X. H.; Zhang, X. Y.; Liu, C. L.; Dang, R. S.; Ando, M.; Sugiyama, H.; Chen, H. S.; Ding, G. H.
2009-09-01
The topographic structures of acupuncture points were investigated using the synchrotron radiation based Dark Field Image (DFI) method. The following four acupuncture points were studied: Sanyinjiao, Neiguan, Zusanli and Tianshu. We found that micro-vessels accumulate in the acupuncture point regions; images taken in the surrounding tissue outside the acupuncture points do not show this kind of structure. This is the first time the specific structure of acupuncture points has been revealed directly by X-ray imaging.
Application of change-point problem to the detection of plant patches.
López, I; Gámez, M; Garay, J; Standovár, T; Varga, Z
2010-03-01
In ecology, if the considered area or space is large, the spatial distribution of individuals of a given plant species is never homogeneous; plants form different patches. The homogeneity change in space or in time (in particular, the related change-point problem) is an important research subject in mathematical statistics. In the paper, for a given data system along a straight line, two areas are considered, where the data of each area come from different discrete distributions with unknown parameters. A method is presented for the estimation of the distribution change-point between the two areas, and an estimate is given for the distributions separated by the obtained change-point. The solution of this problem is based on the maximum likelihood method. Furthermore, based on an adaptation of the well-known bootstrap resampling, a method for the estimation of the so-called change-interval is also given. The latter approach is very general, since it not only applies in the case of the maximum-likelihood estimation of the change-point, but can also be used starting from any other change-point estimate known in the ecological literature. The proposed model is validated against typical ecological situations, providing at the same time a verification of the applied algorithms.
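A minimal version of the maximum-likelihood change-point estimate for count data along a transect, assuming (for illustration only) Poisson-distributed counts with different rates in the two areas; the bootstrap change-interval step is omitted:

```python
import numpy as np

def ml_changepoint(counts):
    """Maximum-likelihood change-point for count data along a line:
    assume Poisson(lam1) before the change and Poisson(lam2) after,
    and maximise the profile log-likelihood over the split index."""
    counts = np.asarray(counts, dtype=float)
    n = len(counts)
    best_k, best_ll = None, -np.inf
    for k in range(1, n):                 # change after position k
        left, right = counts[:k], counts[k:]
        l1, l2 = left.mean(), right.mean()
        ll = 0.0
        for seg, lam in ((left, l1), (right, l2)):
            if lam > 0:
                # Poisson log-likelihood up to the constant log(x!) terms
                ll += (seg * np.log(lam) - lam).sum()
            # lam == 0 contributes 0 (all counts in the segment are zero)
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k, best_ll
```

The segment means are the MLEs of the two rates, so only the split index is searched; a sparse patch boundary followed by a dense patch shows up as a sharp likelihood peak at the true split.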
NASA Astrophysics Data System (ADS)
Moskal, P.; Zoń, N.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kamińska, D.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Raczyński, L.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Zieliński, M.
2015-03-01
A novel method of hit time and hit position reconstruction in scintillator detectors is described. The method is based on comparison of detector signals with results stored in a library of synchronized model signals registered for a set of well-defined positions of scintillation points. The hit position is reconstructed as the one corresponding to the library signal most similar to the measured signal, and the time of the interaction is determined as the relative time between the measured signal and the most similar one in the library. The degree of similarity of measured and model signals is defined as the distance between the points representing the measurement and model signals in the multi-dimensional measurement space. The novelty of the method lies also in the proposed way of synchronizing the model signals, enabling direct determination of the difference between the times-of-flight (TOF) of annihilation quanta from the annihilation point to the detectors. The introduced method was validated using experimental data obtained with the double strip prototype of the J-PET detector and a 22Na sodium isotope as a source of annihilation gamma quanta. The detector was built from plastic scintillator strips with dimensions of 5 mm×19 mm×300 mm, optically connected at both sides to photomultipliers, from which signals were sampled by means of the Serial Data Analyzer. Using the introduced method, spatial and TOF resolutions of about 1.3 cm (σ) and 125 ps (σ), respectively, were established.
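The library-matching idea can be sketched as a nearest-neighbour search in the multi-dimensional sample space, with the time offset taken from a cross-correlation against the best-matching model signal. The arrays below are illustrative; the real method uses synchronized model signals sampled by the Serial Data Analyzer:

```python
import numpy as np

def reconstruct_hit(measured, library, positions):
    """Pick the model signal with the smallest Euclidean distance to the
    measured one (giving the hit position), then take the interaction
    time as the lag maximising the cross-correlation with that model."""
    library = np.asarray(library, dtype=float)
    measured = np.asarray(measured, dtype=float)
    # Distance in the multi-dimensional measurement (sample) space
    d = np.linalg.norm(library - measured, axis=1)
    best = int(np.argmin(d))
    # Relative time between measured and best-matching model signal
    xcorr = np.correlate(measured, library[best], mode='full')
    lag = int(np.argmax(xcorr)) - (len(library[best]) - 1)
    return positions[best], lag, d[best]
```

The position resolution of such a scheme is set by how densely the library samples the scintillation positions, and the time resolution by the sampling rate behind the lag estimate.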
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into, single objective, surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF based methods. The test results show that SOP is an efficient method that can reduce time required to find a good near optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
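The non-dominated sorting at the heart of SOP can be sketched for the two objectives named above, with both treated as minimisation (the distance-to-evaluated-points objective negated, on the assumption that larger distance is preferred for exploration). A straightforward, unoptimised implementation:

```python
def nondominated_sort(f1, f2):
    """Sort point indices into Pareto fronts for two minimisation
    objectives; front 0 is the non-dominated set, front 1 the points
    dominated only by front 0, and so on."""
    pts = list(zip(f1, f2))
    remaining = set(range(len(pts)))
    fronts = []
    while remaining:
        # i is non-dominated if no remaining j is <= in both objectives
        # (and different in at least one)
        front = {i for i in remaining
                 if not any(pts[j][0] <= pts[i][0] and pts[j][1] <= pts[i][1]
                            and pts[j] != pts[i]
                            for j in remaining)}
        fronts.append(sorted(front))
        remaining -= front
    return fronts
```

Selecting the P centers from the leading fronts balances exploitation (low surrogate value) against exploration (far from already-evaluated points), which is the multi-objective idea the abstract describes.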
Real-time seam tracking control system based on line laser visions
NASA Astrophysics Data System (ADS)
Zou, Yanbiao; Wang, Yanbo; Zhou, Weilin; Chen, Xiangzhi
2018-07-01
A six-degree-of-freedom robotic welding automatic tracking platform was designed in this study to realize real-time tracking of weld seams, and the feature point tracking method and the adaptive fuzzy control algorithm used in the welding process were studied and analyzed. A laser vision sensor and its measuring principle were designed and studied. Before welding, the initial coordinate values of the feature points were obtained using morphological methods; once welding started, a target tracking method based on a Gaussian kernel was used to extract the real-time feature points of the weld. An adaptive fuzzy controller was designed that takes the deviation of the feature points and the rate of change of that deviation as inputs. The quantization factors, scale factor, and weight function were adjusted in real time, and the input and output domains, fuzzy rules, and membership functions were constantly updated to generate a series of smooth robot bias voltages. Three groups of experiments were conducted on different types of curved welds in a strong-arc, high-spatter noise environment, using a 120 A short-circuit Metal Active Gas (MAG) welding current. The tracking error was less than 0.32 mm and the sensor's measurement frequency can reach 20 Hz. The torch end moved smoothly during welding, and the weld trajectory was tracked accurately, satisfying the requirements of welding applications.
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inversion modeling has been widely applied in groundwater studies. Compared to traditional forward modeling, inversion modeling offers more room for study. Zonation and cell-by-cell inversion are the conventional approaches; the pilot point method lies between them. The traditional inverse modeling approach often uses software to divide the model into several zones so that only a few parameters need to be inverted; however, the resulting distribution is usually too simple, and the simulation deviates from reality. Cell-by-cell inversion yields, in theory, the most realistic parameter distribution, but it demands great computational effort and a large quantity of survey data for geostatistical simulation of the area. Compared to those methods, the pilot point method distributes a set of points throughout the model domain for parameter estimation. Property values are assigned to model cells by Kriging to preserve parameter heterogeneity within geological units. This reduces the geostatistical data requirements for the simulated area and bridges the gap between the above methods. Pilot points can save calculation time and improve the goodness of fit, and they also reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we use pilot points in a field whose structural heterogeneity and hydraulic parameters are unknown. We compare the inversion modeling results of the zonation and pilot point methods and, through comparative analysis, explore the characteristics of pilot points in groundwater inversion modeling. First, the modeler generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6.
Kriging is defined to obtain the values of the field functions over the model domain on the basis of their values at measurement and pilot point locations (hydraulic conductivity); we then assign pilot points to the interpolated field, which has been divided into 4 zones, and add a range of disturbance values to the inversion targets to calculate the hydraulic conductivity. Third, after inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. After the inversion modeling, the following major conclusions can be drawn: (1) In a field whose structural formation is heterogeneous, the results of the pilot point method are more realistic: better fitting of the parameters and more stable numerical simulation (stable residual distribution). Compared to zonation, it better reflects the heterogeneity of the study field. (2) The pilot point method ensures that each parameter is sensitive and not entirely dependent on the other parameters, which guarantees the relative independence and authenticity of the parameter estimation results. However, it requires more computation time than the zonation method. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
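The Kriging step that spreads pilot-point values onto model cells can be sketched as ordinary kriging with an assumed Gaussian covariance model. The length scale and sill below are illustrative; PEST/Groundwater Vistas use their own variogram settings:

```python
import numpy as np

def ordinary_kriging(pilot_xy, pilot_vals, grid_xy, length=50.0, sill=1.0):
    """Ordinary kriging of a property (e.g. log-conductivity) from
    pilot-point values onto model cells, Gaussian covariance model."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-(d / length) ** 2)
    n = len(pilot_xy)
    # Kriging system: pilot-point covariances plus the unbiasedness
    # constraint (Lagrange multiplier row/column of ones)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(pilot_xy, pilot_xy)
    K[n, n] = 0.0
    rhs = np.ones((n + 1, len(grid_xy)))
    rhs[:n, :] = cov(pilot_xy, grid_xy)
    weights = np.linalg.solve(K, rhs)[:n, :]
    return weights.T @ pilot_vals
```

Because the weights solve an exact interpolation system, each pilot point reproduces its own value, while cells between pilot points receive smoothly varying values, which is how heterogeneity enters the model without one parameter per cell.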
Real-Time Single Frequency Precise Point Positioning Using SBAS Corrections
Li, Liang; Jia, Chun; Zhao, Lin; Cheng, Jianhua; Liu, Jianxu; Ding, Jicheng
2016-01-01
Real-time single frequency precise point positioning (PPP) is a promising technique for high-precision navigation with sub-meter or even centimeter-level accuracy because of its convenience and low cost. The navigation performance of single frequency PPP heavily depends on the real-time availability and quality of correction products for satellite orbits and satellite clocks. Satellite-based augmentation system (SBAS) provides the correction products in real-time, but they are intended to be used for wide area differential positioning at 1 meter level precision. By imposing constraints on the ionosphere error, we have developed a real-time single frequency PPP method that fully utilizes SBAS correction products. The proposed PPP method is tested with static and kinematic data, respectively. The static experimental results show that the position accuracy of the proposed PPP method can reach decimeter level, an improvement of at least 30% when compared with the traditional SBAS method. The positioning convergence of the proposed PPP method can be achieved in at most 636 epochs in static mode. In the kinematic experiment, the position accuracy of the proposed PPP method improved by at least 20 cm relative to the SBAS method. Furthermore, the results reveal that the proposed PPP method can achieve decimeter-level convergence within 500 s in kinematic mode. PMID:27517930
Factors That Influence the Rating of Perceived Exertion After Endurance Training.
Roos, Lilian; Taube, Wolfgang; Tuch, Carolin; Frei, Klaus Michael; Wyss, Thomas
2018-03-15
Session rating of perceived exertion (sRPE) is a frequently used measure to assess athletes' training load, but little is known about which factors could optimize the quality of data collection thereof. The aim of the present study was to investigate the effects of (i) the survey method and (ii) the time point at which sRPE was assessed on the correlation between subjective (sRPE) and objective (heart rate training impulse; TRIMP) assessment of training load. In the first part, 45 well-trained subjects (30 men, 15 women) performed 20 running sessions with a heart rate monitor and reported sRPE 30 minutes after training cessation. For the reporting, the subjects were grouped into three survey method groups (paper-pencil, online questionnaire, and mobile device). In the second part of the study, another 40 athletes (28 men, 12 women) performed 4x5 running sessions, with the four sRPE reporting time points randomly assigned (directly after training cessation, 30 minutes post-exercise, in the evening of the same day, and the next morning directly after waking up). The assessment of sRPE is influenced by time point, survey method, TRIMP, sex, and training type. It is recommended to assess sRPE values via a mobile device or online tool, as the survey method "paper" displayed lower correlations between sRPE and TRIMP. Subjective training load measures are highly individual. When compared at the same relative intensity, lower sRPE values were reported by women, for the training types representing slow runs, and for time points with greater duration between training cessation and sRPE assessment. The assessment method for sRPE should be kept constant for each athlete, and comparisons between athletes or sexes are not recommended.
Ideal evolution of magnetohydrodynamic turbulence when imposing Taylor-Green symmetries.
Brachet, M E; Bustamante, M D; Krstulovic, G; Mininni, P D; Pouquet, A; Rosenberg, D
2013-01-01
We investigate the ideal and incompressible magnetohydrodynamic (MHD) equations in three space dimensions for the development of potentially singular structures. The methodology consists in implementing the fourfold symmetries of the Taylor-Green vortex generalized to MHD, leading to substantial computer time and memory savings at a given resolution; we also use a regridding method that allows for lower-resolution runs at early times, with no loss of spectral accuracy. One magnetic configuration is examined at an equivalent resolution of 6144^3 points and three different configurations on grids of 4096^3 points. At the highest resolution, two different current and vorticity sheet systems are found to collide, producing two successive accelerations in the development of small scales. At the latest time, a convergence of magnetic field lines to the location of maximum current is probably leading locally to a strong bending and directional variability of such lines. A novel analytical method, based on sharp analysis inequalities, is used to assess the validity of the finite-time singularity scenario. This method allows one to rule out spurious singularities by evaluating the rate at which the logarithmic decrement of the analyticity-strip method goes to zero. The result is that the finite-time singularity scenario cannot be ruled out, and the singularity time could be somewhere between t=2.33 and t=2.70. More robust conclusions will require higher resolution runs and grid-point interpolation measurements of maximum current and vorticity.
NASA Astrophysics Data System (ADS)
Kacem, S.; Eichwald, O.; Ducasse, O.; Renon, N.; Yousfi, M.; Charrada, K.
2012-01-01
Streamer dynamics is characterized by the fast propagation of ionized shock waves at the nanosecond scale under very sharp space charge variations. Modelling streamer dynamics requires solving charged particle transport equations coupled to the elliptic Poisson's equation. The latter has to be solved at each time step of the streamer's evolution in order to follow the propagation of the resulting space charge electric field. In the present paper, full multigrid (FMG) and multigrid (MG) methods have been adapted to solve Poisson's equation for streamer discharge simulations between asymmetric electrodes. The validity of the FMG method for the computation of the potential field is first shown by direct comparison with the analytic solution of the Laplacian potential in a point-to-plane geometry. The efficiency of the method is also compared with the classical successive over-relaxation (SOR) method and the MUltifrontal Massively Parallel Solver (MUMPS). The MG method is then applied to the simulation of positive streamer propagation, and its efficiency is evaluated against the SOR and MUMPS methods in the chosen point-to-plane configuration. Very good agreement is obtained between the three methods for all electro-hydrodynamic characteristics of the streamer during its propagation in the inter-electrode gap. However, the MG method solves the Poisson's equation at least 2 times faster under our simulation conditions.
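The classical SOR baseline against which the multigrid solvers are benchmarked is easy to state concretely. The following is a minimal illustrative Python sketch (not the authors' code; the grid size, unit source term, and relaxation factor are arbitrary choices) of SOR applied to a 2-D Poisson problem with Dirichlet boundaries:

```python
import numpy as np

def solve_poisson_sor(rho, h, omega=1.8, tol=1e-8, max_iter=20000):
    """Solve -laplace(phi) = rho on a uniform grid with spacing h and
    homogeneous Dirichlet boundaries, by successive over-relaxation (SOR).
    omega=1 would reduce this to plain Gauss-Seidel."""
    phi = np.zeros_like(rho)
    for _ in range(max_iter):
        max_delta = 0.0
        for i in range(1, rho.shape[0] - 1):
            for j in range(1, rho.shape[1] - 1):
                # Gauss-Seidel update from the 5-point Laplacian stencil
                new = 0.25 * (phi[i + 1, j] + phi[i - 1, j] +
                              phi[i, j + 1] + phi[i, j - 1] + h * h * rho[i, j])
                delta = omega * (new - phi[i, j])
                phi[i, j] += delta
                max_delta = max(max_delta, abs(delta))
        if max_delta < tol:
            break
    return phi
```

Multigrid methods accelerate exactly this kind of relaxation by transferring the smooth error components to coarser grids, which is where the reported factor-of-2-or-more speedup comes from.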
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hofschen, S.; Wolff, I.
1996-08-01
Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using the Fourier transform. The introduced method of time series analysis to extract propagation and attenuation constants reduces the required computation time drastically. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. Therefore, it is possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as they are used in MMIC technology. The simulation results are compared with measurements and show good agreement.
NASA Technical Reports Server (NTRS)
Lim, Sang G.; Brewe, David E.; Prahl, Joseph M.
1990-01-01
The transient analysis of hydrodynamic lubrication of a point-contact is presented. A body-fitted coordinate system is introduced to transform the physical domain to a rectangular computational domain, enabling the use of the Newton-Raphson method for determining pressures and locating the cavitation boundary, where the Reynolds boundary condition is specified. In order to obtain the transient solution, an explicit Euler method is used to effect a time march. The transient dynamic load is a sinusoidal function of time with frequency, fractional loading, and mean load as parameters. Results include the variation of the minimum film thickness and phase-lag with time as functions of excitation frequency. The results are compared with the analytic solution to the transient step bearing problem with the same dynamic loading function. The similarities of the results suggest an approximate model of the point contact minimum film thickness solution.
Pairwise registration of TLS point clouds using covariance descriptors and a non-cooperative game
NASA Astrophysics Data System (ADS)
Zai, Dawei; Li, Jonathan; Guo, Yulan; Cheng, Ming; Huang, Pengdi; Cao, Xiaofei; Wang, Cheng
2017-12-01
It is challenging to automatically register TLS point clouds with noise, outliers and varying overlap. In this paper, we propose a new method for pairwise registration of TLS point clouds. We first generate covariance matrix descriptors with an adaptive neighborhood size from the point clouds to find candidate correspondences; we then construct a non-cooperative game to isolate mutually compatible correspondences, which are considered true positives. The method was tested on three models acquired by two different TLS systems. Experimental results demonstrate that our proposed adaptive covariance (ACOV) descriptor is invariant to rigid transformation and robust to noise and varying resolutions. The average registration errors achieved on the three models are 0.46 cm, 0.32 cm and 1.73 cm, respectively. The computation times on these models are about 288 s, 184 s and 903 s, respectively. Moreover, our registration framework using ACOV descriptors and a game-theoretic method is superior to the state-of-the-art methods in terms of both registration error and computational time. An experiment on a large outdoor scene further demonstrates the feasibility and effectiveness of our proposed pairwise registration framework.
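The core of a covariance descriptor can be sketched in a few lines. The illustrative Python below (my simplification, not the ACOV implementation: it uses a fixed k-nearest neighborhood, whereas the paper's descriptor chooses the neighborhood size adaptively) builds a 3x3 covariance matrix from a point's neighborhood:

```python
import numpy as np

def covariance_descriptor(points, center, k=16):
    """3x3 covariance matrix of the k nearest neighbours of `center`
    within `points` (an (N, 3) array). A minimal sketch of the idea
    behind covariance descriptors for point cloud matching."""
    d = np.linalg.norm(points - center, axis=1)
    nbrs = points[np.argsort(d)[:k]]          # k nearest neighbours
    centered = nbrs - nbrs.mean(axis=0)        # remove the centroid
    return centered.T @ centered / (len(nbrs) - 1)
```

Under a rigid transformation the descriptor transforms as R C R^T, so its eigenvalues are unchanged; comparing descriptors through rotation-invariant quantities of this kind is what makes such matching tolerant of arbitrary scanner poses.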
Effect of nose shape on three-dimensional stagnation region streamlines and heating rates
NASA Technical Reports Server (NTRS)
Hassan, Basil; Dejarnette, Fred R.; Zoby, E. V.
1991-01-01
A new method for calculating the three-dimensional inviscid surface streamlines and streamline metrics using Cartesian coordinates and time as the independent variable of integration has been developed. The technique calculates the streamline from a specified point on the body to a point near the stagnation point by using a prescribed pressure distribution in the Euler equations. The differential equations, which are singular at the stagnation point, are of the two point boundary value problem type. Laminar heating rates are calculated using the axisymmetric analog concept for three-dimensional boundary layers and approximate solutions to the axisymmetric boundary layer equations. Results for elliptic conic forebody geometries show that location of the point of maximum heating depends on the type of conic in the plane of symmetry and the angle of attack, and that this location is in general different from the stagnation point. The new method was found to give smooth predictions of heat transfer in the nose region where previous methods gave oscillatory results.
NASA Technical Reports Server (NTRS)
Mehra, R. K.; Washburn, R. B.; Sajan, S.; Carroll, J. V.
1979-01-01
A hierarchical real-time algorithm for optimal three-dimensional control of aircraft is described. Systematic methods are developed for real-time computation of nonlinear feedback controls by means of singular perturbation theory. The results are applied to a six-state, three-control-variable, point-mass model of an F-4 aircraft. Nonlinear feedback laws are presented for computing the optimal control of throttle, bank angle, and angle of attack. Real-time capability is assessed on a TI 9900 microcomputer. The breakdown of the singular perturbation approximation near the terminal point is examined. Continuation methods are used to obtain exact optimal trajectories starting from the singular perturbation solutions.
Collocation and Galerkin Time-Stepping Methods
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2011-01-01
We study the numerical solutions of ordinary differential equations by one-step methods where the solution at tn is known and that at t(sub n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t(sub n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
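The Radau IIA collocation method that the CG and DG formulations reduce to is concrete enough to demonstrate. The sketch below (illustrative Python, my choices of test equation and step counts, not the paper's experiments) applies the standard 2-stage Radau IIA tableau, whose abscissae c = (1/3, 1) are the right Radau quadrature points including t(sub n+1), to the linear test ODE y' = lam*y, where the implicit stage equations reduce to a single 2x2 linear solve:

```python
import numpy as np

# Standard Butcher tableau of the 2-stage Radau IIA method (order 3).
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])

def radau_iia_step(lam, y, h):
    """One step of 2-stage Radau IIA for y' = lam * y. The stage
    equations k = lam*(y + h*A@k) are linear here, so they are solved
    exactly instead of by Newton iteration."""
    k = np.linalg.solve(np.eye(2) - h * lam * A, lam * y * np.ones(2))
    return y + h * (b @ k)

def integrate(lam, y0, t_end, n_steps):
    """March from t=0 to t=t_end with fixed step size."""
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y = radau_iia_step(lam, y, h)
    return y
```

As a 2-stage method it has order 2s - 1 = 3, so halving the step size should cut the global error by roughly a factor of 8.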
The short time Fourier transform and local signals
NASA Astrophysics Data System (ADS)
Okumura, Shuhei
In this thesis, I examine the theoretical properties of the short time discrete Fourier transform (STFT). The STFT is obtained by applying the Fourier transform to a fixed-size moving window over the input series. We move the window one time point at a time, so the windows overlap. I present several theoretical properties of the STFT, applied to various types of complex-valued, univariate time series inputs, and their outputs in closed form. In particular, just like the discrete Fourier transform, the STFT's modulus time series takes large positive values when the input is a periodic signal. One main point is that a white noise time series input results in the STFT output being a complex-valued stationary time series, and we can derive the time and time-frequency dependency structure, such as the cross-covariance functions. Our primary focus is the detection of local periodic signals. I present a method to detect local signals by computing the probability that the squared modulus STFT time series has a run of consecutive values exceeding some threshold, beginning with an exceeding observation that follows an observation below the threshold. We discuss a method to reduce the computation of such probabilities using the Box-Cox transformation and the delta method, and show that it works well in comparison to the Monte Carlo simulation method.
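The sliding-window transform and the run-based detection rule can be sketched directly. The illustrative Python below (my simplification, not the thesis code: rectangular taper, real-valued input, and a plain run-length rule in place of the probability calculation) computes the squared-modulus STFT with fully overlapping windows and flags frequency bins with long runs above a threshold:

```python
import numpy as np

def stft_modulus(x, window_size):
    """Squared-modulus STFT time series: a fixed-size window moved one
    time point at a time (fully overlapping, rectangular taper)."""
    n = len(x) - window_size + 1
    frames = np.stack([x[i:i + window_size] for i in range(n)])
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

def detect_runs(power, threshold, run_length):
    """Flag frequency bins whose squared modulus exceeds `threshold`
    for at least `run_length` consecutive windows -- a simplified
    stand-in for the run-based local-signal detector."""
    above = power > threshold
    hits = np.zeros(power.shape[1], dtype=bool)
    for k in range(power.shape[1]):
        run = 0
        for a in above[:, k]:
            run = run + 1 if a else 0
            if run >= run_length:
                hits[k] = True
                break
    return hits
```

For a sinusoid spanning whole periods of the window, the corresponding bin stays large in every window while noise bins exceed the threshold only sporadically, which is exactly the asymmetry the run criterion exploits.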
Reviving common standards in point-count surveys for broad inference across studies
Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.
2014-01-01
We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. 
Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and analysis by regional and national data centers.
Precise Point Positioning technique for short and long baselines time transfer
NASA Astrophysics Data System (ADS)
Lejba, Pawel; Nawrocki, Jerzy; Lemanski, Dariusz; Foks-Ryznar, Anna; Nogas, Pawel; Dunst, Piotr
2013-04-01
In this work the clock parameters of several timing receivers, TTS-4 (AOS), ASHTECH Z-XII3T (OP, ORB, PTB, USNO) and SEPTENTRIO POLARX4TR (ORB, since February 11, 2012), were determined by use of the Precise Point Positioning (PPP) technique. The clock parameters were determined for several time links based on the data delivered by the time and frequency laboratories mentioned above. The computations cover the period from January 1 to December 31, 2012 and were performed in two modes, with 7-day and one-month solutions for all links. All RINEX data files, which include phase and code GPS data, were recorded at 30-second intervals. All calculations were performed by means of Natural Resources Canada's GPS Precise Point Positioning (GPS-PPP) software, based on the high-quality precise satellite coordinates and satellite clocks delivered by IGS as final products. The independent PPP technique is a powerful and simple method that allows for better control of antenna positions at AOS and a verification of other time transfer techniques such as GPS CV, GLONASS CV and TWSTFT. The PPP technique is also a very good alternative for the calibration of the glass fiber link PL-AOS currently operated by AOS. PPP is now one of the main time transfer methods used at AOS, which considerably improves and strengthens the quality of the Polish time scales UTC(AOS), UTC(PL), and TA(PL). KEY-WORDS: Precise Point Positioning, time transfer, IGS products, GNSS, time scales.
Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.
2018-05-01
Deep learning has been used massively for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into images in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification pipeline. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques: on the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22% total error, 4.10% type I error, and 15.07% type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and the type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02% total error, 2.15% type I error and 6.14% type II error.
Computerized optimization of multiple isocentres in stereotactic convergent beam irradiation
NASA Astrophysics Data System (ADS)
Treuer, U.; Treuer, H.; Hoevels, M.; Müller, R. P.; Sturm, V.
1998-01-01
A method for the fully computerized determination and optimization of target point positions and collimator sizes in convergent beam irradiation is presented. In conventional interactive trial-and-error methods, which are very time consuming, the treatment parameters are chosen according to the operator's experience and improved successively. This time is reduced significantly by the use of a computerized procedure. After the definition of the target volume and organs at risk in the CT or MR scans, an initial configuration is created automatically. In the next step the target point positions and collimator diameters are optimized by the program. The aim of the optimization is to find a configuration for which a prescribed dose at the target surface is approximated as closely as possible. At the same time, dose peaks inside the target volume are minimized, and organs at risk and tissue surrounding the target are spared. To enhance the speed of the optimization, a fast method for approximate dose calculation in convergent beam irradiation is used. A possible application of the method for calculating the leaf positions when irradiating with a micromultileaf collimator is briefly discussed. The success of the procedure has been demonstrated for several clinical cases with up to six target points.
Confidence intervals for the first crossing point of two hazard functions.
Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng
2009-12-01
The phenomenon of crossing hazard rates is common in clinical trials with time-to-event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, relatively few approaches are available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazards modeling with Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial datasets.
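The estimand itself, the first time at which two hazard functions cross, can be illustrated on known hazards. The sketch below (illustrative Python; it locates the crossing of two closed-form Weibull hazards on a grid, which is a stand-in for, not a reproduction of, the paper's kernel-smoothed estimator) finds the first sign change of the hazard difference:

```python
import numpy as np

def weibull_hazard(t, shape, scale=1.0):
    """Hazard of a Weibull distribution: (k/s) * (t/s)**(k-1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def first_crossing(h1, h2, t_grid):
    """First time in `t_grid` where h1 - h2 changes sign, refined by
    linear interpolation between the two bracketing grid points.
    Returns None if no crossing occurs on the grid."""
    d = h1(t_grid) - h2(t_grid)
    s = np.sign(d)
    idx = np.where(s[:-1] * s[1:] < 0)[0]
    if len(idx) == 0:
        return None
    i = idx[0]
    # root of the linear interpolant of d on [t_i, t_{i+1}]
    return t_grid[i] - d[i] * (t_grid[i + 1] - t_grid[i]) / (d[i + 1] - d[i])
```

With shapes 0.5 and 2 and unit scale, the hazards are 0.5*t**-0.5 and 2*t, which cross at t = 0.25**(2/3), giving a simple correctness check for any crossing-point estimator.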
Common pitfalls in statistical analysis: The perils of multiple testing
Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc
2016-01-01
Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
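Among the methods for dealing with multiple testing, the simplest are family-wise error rate corrections. The sketch below (illustrative Python of the standard Bonferroni and Holm procedures, supplied here as examples; the article itself surveys a broader range of remedies) shows both:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: reject H0_i when p_i <= alpha / m, which
    controls the family-wise error rate at alpha across m tests."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def holm(p_values, alpha=0.05):
    """Holm's step-down procedure: compare the j-th smallest p-value to
    alpha / (m - j + 1), stopping at the first failure. Uniformly more
    powerful than Bonferroni with the same error-rate guarantee."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject
```

Note how a p-value of 0.013 survives Holm but not Bonferroni at m = 4: the step-down schedule relaxes the threshold after each rejection.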
Assessment of Renal Ischemia By Optical Spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fitzgerald, J T; Demos, S; Michalopoulou, A
2004-01-07
Introduction: No reliable method currently exists for quantifying the degree of warm ischemia in kidney grafts prior to transplantation. We describe a method for evaluating pretransplant warm ischemia time using optical spectroscopic methods. Methods: Lewis rat kidney vascular pedicles were clamped unilaterally in vivo for 0, 5, 10, 20, 30, 60, 90 or 120 minutes; 8 animals were studied at each time point. Injured and contra-lateral control kidneys were then flushed with Euro-Collins solution, resected and placed on ice. 335 nm excitation autofluorescence as well as cross polarized light scattering images were taken of each injured and control kidney using filters of various wavelengths. The intensity ratio of the injured to normal kidneys was compared to ischemia time. Results: Autofluorescence intensity ratios through a 450 nm filter and light scattering intensity ratios through an 800 nm filter both decreased significantly with increasing ischemia time (p < 0.0001 for each method, one-way ANOVA). All adjacent and non-adjacent time points between 0 and 90 minutes were distinguishable using one of these two modalities by Fisher's PLSD. Conclusions: Optical spectroscopic methods can accurately quantify warm ischemia time in kidneys that have been subsequently hypothermically preserved. Further studies are needed to correlate results with physiological damage and posttransplant performance.
Compensatable muon collider calorimeter with manageable backgrounds
Raja, Rajendran
2015-02-17
A method and system for reducing background noise in a particle collider, comprises identifying an interaction point among a plurality of particles within a particle collider associated with a detector element, defining a trigger start time for each of the pixels as the time taken for light to travel from the interaction point to the pixel and a trigger stop time as a selected time after the trigger start time, and collecting only detections that occur between the start trigger time and the stop trigger time in order to thereafter compensate the result from the particle collider to reduce unwanted background detection.
NASA Technical Reports Server (NTRS)
Dingell, Charles W. (Inventor); Quintana, Clemente E. (Inventor); Le, Suy (Inventor); Clark, Michael R. (Inventor); Cloutier, Robert E. (Inventor); Hafermalz, David Scott (Inventor)
2009-01-01
A sublimator includes a sublimation plate having a thermal element disposed adjacent to a feed water channel and a control point disposed between at least a portion of the thermal element and a large pore substrate. The control point includes a sintered metal material. A method of dissipating heat using a sublimator includes a sublimation plate having a thermal element and a control point. The thermal element is disposed adjacent to a feed water channel and the control point is disposed between at least a portion of the thermal element and a large pore substrate. The method includes controlling a flow rate of feed water to the large pore substrate at the control point and supplying heated coolant to the thermal element. Sublimation occurs in the large pore substrate and the controlling of the flow rate of feed water is independent of time. A sublimator includes a sublimation plate having a thermal element disposed adjacent to a feed water channel and a control point disposed between at least a portion of the thermal element and a large pore substrate. The control point restricts a flow rate of feed water from the feed water channel to the large pore substrate independent of time.
Alomari, Yazan M.; MdZin, Reena Rahayu
2015-01-01
Analysis of whole-slide tissue for digital pathology images has been clinically approved to provide a second opinion to pathologists. Localization of focus points from Ki-67-stained histopathology whole-slide tissue microscopic images is considered the first step in the process of proliferation rate estimation. Pathologists use eye pooling or eagle-view techniques to localize the highly stained, cell-concentrated regions, called focus-point regions, from the whole slide under the microscope. This procedure leads to high inter-observer variability, is time consuming and tedious, and causes inaccurate findings. The localization of focus-point regions can be addressed as a clustering problem. This paper aims to automate the localization of focus-point regions from whole-slide images using the random patch probabilistic density (RPPD) method. Unlike other clustering methods, the random patch probabilistic density method can adaptively localize focus-point regions without predetermining the number of clusters. The proposed method was compared with the k-means and fuzzy c-means clustering methods. Our proposed method achieves good performance when the results are evaluated by three expert pathologists: an average false-positive rate of 0.84% for the focus-point region localization error. Moreover, regarding RPPD used to localize tissue from whole-slide images, 228 whole-slide images were tested and 97.3% localization accuracy was achieved. PMID:25793010
A New Test Unit for Disintegration End-Point Determination of Orodispersible Films.
Low, Ariana; Kok, Si Ling; Khong, Yuet Mei; Chan, Sui Yung; Gokhale, Rajeev
2015-11-01
No standard time or pharmacopoeia disintegration test method for orodispersible films (ODFs) exists. The USP disintegration test for tablets and capsules poses significant challenges for end-point determination when used for ODFs. We tested a newly developed disintegration test unit (DTU) against the USP disintegration test. The DTU is an accessory to the USP disintegration apparatus. It holds the ODF in a horizontal position, allowing top-view of the ODF during testing. A Gauge R&R study was conducted to assign relative contributions of the total variability from the operator, sample or the experimental set-up. Precision was compared using commercial ODF products in different media. Agreement between the two measurement methods was analysed. The DTU showed improved repeatability and reproducibility compared to the USP disintegration system with tighter standard deviations regardless of operator or medium. There is good agreement between the two methods, with the USP disintegration test giving generally longer disintegration times possibly due to difficulty in end-point determination. The DTU provided clear end-point determination and is suitable for quality control of ODFs during product developmental stage or manufacturing. This may facilitate the development of a standardized methodology for disintegration time determination of ODFs. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
Choleau, C; Klein, J C; Reach, G; Aussedat, B; Demaria-Pesce, V; Wilson, G S; Gifford, R; Ward, W K
2002-08-01
Calibration, i.e. the transformation in real time of the signal I(t) generated by the glucose sensor at time t into an estimation of the glucose concentration G(t), represents a key issue for the development of a continuous glucose monitoring system. The aim was to compare two calibration procedures. In the one-point calibration, which assumes that the background current I(o) is negligible, the sensitivity S is simply determined as the ratio I/G, and G(t) = I(t)/S. The two-point calibration consists of determining a sensor sensitivity S and a background current I(o) by plotting two values of the sensor signal versus the concomitant blood glucose concentrations. The subsequent estimation of G(t) is given by G(t) = (I(t)-I(o))/S. A glucose sensor was implanted in the abdominal subcutaneous tissue of nine type 1 diabetic patients for 3 (n = 2) or 7 days (n = 7). The one-point calibration was performed a posteriori either once per day before breakfast, twice per day before breakfast and dinner, or three times per day before each meal. The two-point calibration was performed each morning during breakfast. The percentages of points in zones A and B of the Clarke Error Grid were significantly higher when the system was calibrated using the one-point calibration. Using two one-point calibrations per day before meals was virtually as accurate as three one-point calibrations. This study demonstrates the feasibility of a simple method for calibrating a continuous glucose monitoring system.
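The two calibration rules stated in the abstract, G(t) = I(t)/S and G(t) = (I(t) - I(o))/S, translate directly into code. The sketch below is an illustrative Python rendering of those formulas (not the authors' implementation; the simulated sensitivity and background current in the usage check are invented numbers):

```python
def one_point_calibration(i_cal, g_cal):
    """One-point calibration: assumes the background current I(o) is
    negligible, so S = I/G and G(t) = I(t)/S."""
    s = i_cal / g_cal
    return lambda i_t: i_t / s

def two_point_calibration(i1, g1, i2, g2):
    """Two-point calibration: solve for sensitivity S and background
    current I(o) from two (signal, glucose) pairs, then
    G(t) = (I(t) - I(o)) / S."""
    s = (i2 - i1) / (g2 - g1)
    i_o = i1 - s * g1
    return lambda i_t: (i_t - i_o) / s
```

For a hypothetical sensor with S = 2 and I(o) = 1 (so I = 2G + 1), the two-point rule recovers glucose exactly, while the one-point rule is biased away from its calibration point because the neglected background current is folded into S.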
Stein, Marjorie W; Frank, Susan J; Roberts, Jeffrey H; Finkelstein, Malka; Heo, Moonseong
2016-05-01
The aim of this study was to determine whether group-based or didactic teaching is more effective to teach ACR Appropriateness Criteria to medical students. An identical pretest, posttest, and delayed multiple-choice test was used to evaluate the efficacy of the two teaching methods. Descriptive statistics comparing test scores were obtained. On the posttest, the didactic group gained 12.5 points (P < .0001), and the group-based learning students gained 16.3 points (P < .0001). On the delayed test, the didactic group gained 14.4 points (P < .0001), and the group-based learning students gained 11.8 points (P < .001). The gains in scores on both tests were statistically significant for both groups. However, the differences in scores were not statistically significant comparing the two educational methods. Compared with didactic lectures, group-based learning is more enjoyable, time efficient, and equally efficacious. The choice of educational method can be individualized for each institution on the basis of group size, time constraints, and faculty availability. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena
NASA Technical Reports Server (NTRS)
Zingg, David W.
1996-01-01
This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.
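The grid-points-per-wavelength criterion discussed above is easy to illustrate for first-derivative stencils. The sketch below (illustrative Python; the stencils are the standard three- and five-point central differences, and the test wave and point densities are my choices) compares their accuracy on a sampled sine wave:

```python
import numpy as np

def d1_second_order(u, h):
    """Three-point (second-order) central difference for du/dx."""
    return (u[2:] - u[:-2]) / (2 * h)

def d1_fourth_order(u, h):
    """Five-point (fourth-order) central difference for du/dx."""
    return (-u[4:] + 8 * u[3:-1] - 8 * u[1:-3] + u[:-4]) / (12 * h)

def max_error(deriv, ppw):
    """Maximum derivative error for sin(x) sampled at `ppw` grid points
    per wavelength, over several wavelengths."""
    h = 2 * np.pi / ppw
    x = np.arange(0, 6 * np.pi, h)
    u = np.sin(x)
    approx = deriv(u, h)
    trim = 1 if deriv is d1_second_order else 2  # stencil half-width
    exact = np.cos(x[trim:-trim])
    return np.max(np.abs(approx - exact))
```

At a fixed 20 points per wavelength the five-point stencil is far more accurate, and doubling the density cuts the three-point error by about 4x versus about 16x for the five-point one, which is why higher-order schemes need far fewer points per wavelength for long-distance propagation.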
Venter, Anre; Maxwell, Scott E; Bolig, Erika
2002-06-01
Adding a pretest as a covariate to a randomized posttest-only design increases statistical power, as does the addition of intermediate time points to a randomized pretest-posttest design. Although typically 5 waves of data are required in this instance to produce meaningful gains in power, a 3-wave intensive design allows the evaluation of the straight-line growth model and may reduce the effect of missing data. The authors identify the statistically most powerful method of data analysis in the 3-wave intensive design. If straight-line growth is assumed, the pretest-posttest slope must assume fairly extreme values for the intermediate time point to increase power beyond the standard analysis of covariance on the posttest with the pretest as covariate, ignoring the intermediate time point.
Capesius, Joseph P.; Arnold, L. Rick
2012-01-01
The Mass Balance results were highly variable over time, to the point of appearing inconsistent with the conceptual model of groundwater flow as gradual and slow. The large degree of variability in the day-to-day and month-to-month Mass Balance results is likely the result of many factors. These factors could include ungaged stream inflows or outflows, short-term streamflow losses to and gains from temporary bank storage, and any lag in streamflow accounting owing to the travel time of flow within a reach. The Pilot Point time series results were much less variable than the Mass Balance results, and extreme values were effectively constrained. Less day-to-day variability, smaller-magnitude extreme values, and smoother transitions in base-flow estimates provided by the Pilot Point method are more consistent with a conceptual model of groundwater flow as gradual and slow. The Pilot Point method provided a better fit to the conceptual model of groundwater flow and appeared to provide reasonable estimates of base flow.
NASA Astrophysics Data System (ADS)
Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.
2016-06-01
We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
Rabani, Eran; Reichman, David R.; Krilov, Goran; Berne, Bruce J.
2002-01-01
We present a method based on augmenting an exact relation between a frequency-dependent diffusion constant and the imaginary time velocity autocorrelation function, combined with the maximum entropy numerical analytic continuation approach to study transport properties in quantum liquids. The method is applied to the case of liquid para-hydrogen at two thermodynamic state points: a liquid near the triple point and a high-temperature liquid. Good agreement for the self-diffusion constant and for the real-time velocity autocorrelation function is obtained in comparison to experimental measurements and other theoretical predictions. Improvement of the methodology and future applications are discussed. PMID:11830656
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu
2017-01-01
The PET and CT fusion image, combining the anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point cloud for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
Liu, Zhenbang; Ng, Junxiang; Yuwono, Arianto; Lu, Yadong; Tan, Yung Khan
2017-01-01
ABSTRACT Purpose: To compare the staining intensity of the upper urinary tract (UUT) urothelium among three UUT delivery methods in an in vivo porcine model. Materials and methods: A fluorescent dye solution (indigo carmine) was delivered to the UUT via three different methods: antegrade perfusion, vesico-ureteral reflux via in-dwelling ureteric stent and retrograde perfusion via a 5F open-ended ureteral catheter. Twelve renal units were tested with 4 in each method. After a 2-hour delivery time, the renal-ureter units were harvested en bloc. Time from harvesting to analysis was also standardised to be 2 hours in each arm. Three urothelium samples of the same weight and size were taken from each of the 6 pre-defined points (upper pole, mid pole, lower pole, renal pelvis, mid ureter and distal ureter) and the amount of fluorescence was measured with a spectrometer. Results: The mean fluorescence detected at all 6 predefined points of the UUT urothelium was the highest for the retrograde method. This was statistically significant, with a p-value less than 0.05 at all 6 points. Conclusions: Retrograde infusion of the UUT by an open-ended ureteral catheter resulted in the highest mean fluorescence detected at all 6 pre-defined points of the UUT urothelium compared to antegrade infusion and vesico-ureteral reflux via indwelling ureteric stents, indicating that the retrograde method is ideal for topical therapy throughout the UUT urothelium. More clinical studies are needed to demonstrate whether the retrograde method leads to better clinical outcomes compared to the other two methods. PMID:29039888
A new method of real-time detection of changes in periodic data stream
NASA Astrophysics Data System (ADS)
Lyu, Chen; Lu, Guoliang; Cheng, Bin; Zheng, Xiangwei
2017-07-01
Change point detection in periodic time series is desirable in many practical applications. We present a novel algorithm for this task, which includes two phases: 1) anomaly measurement: on the basis of a typical regression model, we propose a new computation method to measure anomalies in time series which does not require any reference data from other measurements; 2) change detection: we introduce a new martingale test for detection which can be operated in an unsupervised and nonparametric way. We have conducted extensive experiments to systematically test our algorithm. The results indicate that our algorithm is directly applicable in many real-world change-point-detection applications.
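The abstract does not reproduce the martingale test itself; a standard construction from the change-detection literature is a power martingale over conformal p-values, which grows once anomaly scores start concentrating in the upper tail. The sketch below is that generic construction, not the authors' implementation; the function names and the alarm threshold are illustrative assumptions:

```python
def power_martingale(pvalues, eps=0.92):
    """Power martingale M_n = prod_i eps * p_i**(eps - 1); it grows when the
    p-values concentrate near 0, i.e. when observations look anomalous."""
    m, path = 1.0, []
    for p in pvalues:
        m *= eps * max(p, 1e-12) ** (eps - 1.0)
        path.append(m)
    return path

def detect_change(scores, threshold=10.0, eps=0.92):
    """Conformal p-value of each anomaly score against its own history,
    fed into the martingale; returns the first index where the martingale
    exceeds the threshold, or None if no change is flagged."""
    pvals = []
    for i, s in enumerate(scores):
        prev = scores[:i + 1]
        pvals.append(sum(1 for x in prev if x >= s) / len(prev))
    for i, m in enumerate(power_martingale(pvals, eps)):
        if m > threshold:
            return i
    return None
```

Under exchangeable (no-change) data the p-values are roughly uniform and the martingale stays small; a run of large anomaly scores drives it past the threshold shortly after the change.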
Pointo - a Low Cost Solution to Point Cloud Processing
NASA Astrophysics Data System (ADS)
Houshiar, H.; Winkler, S.
2017-11-01
With advances in technology, access to data, especially 3D point cloud data, is increasingly an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners or very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes as very large packages containing a variety of methods and tools. This results in software that is usually very expensive to acquire and also very difficult to use. The difficulty of use is caused by the complicated user interfaces required to accommodate a large list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists. However, they are not necessarily required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, these packages generally rely on expensive, modern hardware and are only compatible with one specific operating system. Many point cloud customers are not point cloud processing experts and are not willing to bear the high acquisition costs of such software and hardware. In this paper we introduce a solution for low cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce cost and complexity, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at the same time. Our simple and user-oriented design improves the user experience and allows us to optimize our methods to create efficient software. In this paper we introduce the Pointo family as a series of connected applications providing easy-to-use tools with simple designs for different point cloud processing requirements.
PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotation and documentation to the point clouds.
NASA Astrophysics Data System (ADS)
Yang, C. H.; Kenduiywo, B. K.; Soergel, U.
2016-06-01
Persistent Scatterer Interferometry (PSI) is a technique that detects a network of extracted persistent scatterer (PS) points which feature temporal phase stability and a strong radar signal throughout a time series of SAR images. The small surface deformations at such PS points are estimated. PSI works particularly well in monitoring human settlements because the regular substructures of man-made objects give rise to a large number of PS points. If such structures and/or substructures substantially alter or even vanish due to a big change like construction, their PS points are discarded without additional exploration during the standard PSI procedure. Such rejected points are called big change (BC) points. On the other hand, incoherent change detection (ICD) relies on local comparison of multi-temporal images (e.g. image difference, image ratio) to highlight scene modifications at the level of larger structures rather than fine detail. However, image noise inevitably degrades ICD accuracy. We propose a change detection approach based on PSI to synergize the benefits of PSI and ICD. PS points are extracted by the PSI procedure. A local change index is introduced to quantify the probability of a big change for each point. We propose an automatic thresholding method adopting the change index to extract BC points along with a clue to the period in which they emerge. In the end, PS and BC points are integrated into a change detection image. Our method is tested at a site located north of Berlin main station, where steady, demolished, and erected building substructures are successfully detected. The results are consistent with ground truth derived from a time series of aerial images provided by Google Earth. In addition, we apply our technique to traffic infrastructure, business district, and sports playground monitoring.
A real-time ionospheric model based on GNSS Precise Point Positioning
NASA Astrophysics Data System (ADS)
Tu, Rui; Zhang, Hongping; Ge, Maorong; Huang, Guanwen
2013-09-01
This paper proposes a method for real-time monitoring and modeling of the ionospheric Total Electron Content (TEC) by Precise Point Positioning (PPP). Firstly, the ionospheric TEC and the receiver's Differential Code Biases (DCB) are estimated with undifferenced raw observations in real time; then the ionospheric TEC model is established based on the Single Layer Model (SLM) assumption and the recovered ionospheric TEC. In this study, phase observations with high precision are used directly instead of phase-smoothed code observations. In addition, the DCB estimation is separated from the establishment of the ionospheric model, which limits the impact of the SLM assumption. The ionospheric model is established at every epoch for real-time application. The method is validated with three different GNSS networks on a local, regional, and global basis. The results show that the method is feasible and effective: the real-time ionosphere and DCB results are highly consistent with the IGS final products, with biases of 1-2 TECU and 0.4 ns, respectively.
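The Single Layer Model assumption mentioned above maps slant TEC along the receiver-satellite path to vertical TEC at the ionospheric pierce point. A minimal sketch of the standard SLM mapping function follows; the layer height and Earth radius are typical values, not parameters taken from the paper:

```python
import math

def slant_to_vertical_tec(stec, zenith_deg, h_ion=450e3, r_earth=6371e3):
    """Single Layer Model mapping: vTEC = sTEC * cos(z'), where z' is the
    zenith angle at the pierce point, sin(z') = R/(R+H) * sin(z)."""
    z = math.radians(zenith_deg)
    zp = math.asin(r_earth / (r_earth + h_ion) * math.sin(z))
    return stec * math.cos(zp)
```

At zenith the slant and vertical TEC coincide; at low elevations the mapping shrinks the slant value substantially, which is where the thin-shell assumption is weakest.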
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. 
monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.
Detecting Abrupt Changes in a Piecewise Locally Stationary Time Series
Last, Michael; Shumway, Robert
2007-01-01
Non-stationary time series arise in many settings, such as seismology, speech-processing, and finance. In many of these settings we are interested in points where a model of local stationarity is violated. We consider the problem of how to detect these change-points, which we identify by finding sharp changes in the time-varying power spectrum. Several different methods are considered, and we find that the symmetrized Kullback-Leibler information discrimination performs best in simulation studies. We derive asymptotic normality of our test statistic, and consistency of estimated change-point locations. We then demonstrate the technique on the problem of detecting arrival phases in earthquakes. PMID:19190715
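The symmetrized Kullback-Leibler discrimination above compares the local power spectra on either side of a candidate change point. The sketch below is a simplified, stdlib-only illustration of that idea (plain periodograms of adjacent windows rather than the paper's locally stationary estimators; window length and function names are illustrative):

```python
import cmath
import math

def power_spectrum(x):
    """Periodogram via a direct DFT (O(n^2), fine for short windows)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 for k in range(n // 2 + 1)]

def sym_kl(f, g, eps=1e-12):
    """Symmetrized Kullback-Leibler discrimination between normalized spectra."""
    sf, sg = sum(f), sum(g)
    f = [v / sf + eps for v in f]
    g = [v / sg + eps for v in g]
    return sum((a - b) * math.log(a / b) for a, b in zip(f, g))

def spectral_change_scores(x, win=64):
    """Score each candidate position by the discrimination between the
    spectra of the two adjacent windows; peaks mark spectral changes."""
    scores = []
    for start in range(0, len(x) - 2 * win + 1, win // 2):
        a = power_spectrum(x[start:start + win])
        b = power_spectrum(x[start + win:start + 2 * win])
        scores.append((start + win, sym_kl(a, b)))
    return scores
```

On a signal that switches frequency, the score peaks at comparisons straddling the switch, which is the behavior the paper exploits for change-point localization.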
Ricker, Martin; Peña Ramírez, Víctor M; von Rosen, Dietrich
2014-01-01
Growth curves are monotonically increasing functions that measure repeatedly the same subjects over time. The classical growth curve model in the statistical literature is the Generalized Multivariate Analysis of Variance (GMANOVA) model. In order to model the tree trunk radius (r) over time (t) of trees on different sites, GMANOVA is combined here with the adapted PL regression model Q = A · T+E, where for b ≠ 0 : Q = Ei[-b · r]-Ei[-b · r1] and for b = 0 : Q = Ln[r/r1], A = initial relative growth to be estimated, T = t-t1, and E is an error term for each tree and time point. Furthermore, Ei[-b · r] = ∫(Exp[-b · r]/r)dr, b = -1/TPR, with TPR being the turning point radius in a sigmoid curve, and r1 at t1 is an estimated calibrating time-radius point. Advantages of the approach are that growth rates can be compared among growth curves with different turning point radiuses and different starting points, hidden outliers are easily detectable, the method is statistically robust, and heteroscedasticity of the residuals among time points is allowed. The model was implemented with dendrochronological data of 235 Pinus montezumae trees on ten Mexican volcano sites to calculate comparison intervals for the estimated initial relative growth A. One site (at the Popocatépetl volcano) stood out, with A being 3.9 times the value of the site with the slowest-growing trees. Calculating variance components for the initial relative growth, 34% of the growth variation was found among sites, 31% among trees, and 35% over time. Without the Popocatépetl site, the numbers changed to 7%, 42%, and 51%. Further explanation of differences in growth would need to focus on factors that vary within sites and over time.
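The transformed response Q above can be computed without special-function libraries, because Ei[-b·r] − Ei[-b·r1] equals the integral of exp(-b·u)/u from r1 to r (and reduces to ln(r/r1) at b = 0). The sketch below evaluates Q by Simpson's rule and recovers A by regression through the origin; the paper embeds this in a GMANOVA framework, so the estimator and names here are simplified illustrations:

```python
import math

def q_transform(r, r1, b, n=2000):
    """Q = Ei[-b r] - Ei[-b r1] = integral_{r1}^{r} exp(-b u)/u du,
    evaluated with the composite Simpson rule (n must be even).
    At b = 0 the integrand is 1/u, so Q = ln(r/r1)."""
    if r == r1:
        return 0.0
    h = (r - r1) / n
    f = lambda u: math.exp(-b * u) / u
    s = f(r1) + f(r)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(r1 + k * h)
    return s * h / 3.0

def initial_relative_growth(times, radii, b):
    """Slope A of Q = A*T (least squares through the origin), calibrated
    at the first observation (t1, r1)."""
    t1, r1 = times[0], radii[0]
    T = [t - t1 for t in times]
    Q = [q_transform(r, r1, b) for r in radii]
    return sum(ti * qi for ti, qi in zip(T, Q)) / sum(ti * ti for ti in T)
```

With b = 0 and exponential radial growth r = r1·exp(A·T), the transform is exactly linear in T and the slope estimate recovers A.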
Post-mortem chemical excitability of the iris should not be used for forensic death time diagnosis.
Koehler, Katja; Sehner, Susanne; Riemer, Martin; Gehl, Axel; Raupach, Tobias; Anders, Sven
2018-04-18
Post-mortem chemical excitability of the iris is one of the non-temperature-based methods in forensic diagnosis of the time since death. Although several authors reported on their findings, using different measurement methods, currently used time limits are based on a single dissertation which has recently been doubted to be applicable for forensic purpose. We investigated changes in pupil-iris ratio after application of acetylcholine (n = 79) or tropicamide (n = 58) and in controls at upper and lower time limits that are suggested in the current literature, using a digital photography-based measurement method with excellent reliability. We observed "positive," "negative," and "paradox" reactions in both intervention and control conditions at all investigated post-mortem time points, suggesting spontaneous changes in pupil size to be causative for the finding. According to our observations, post-mortem chemical excitability of the iris should not be used in forensic death time estimation, as results may cause false conclusions regarding the correct time point of death and might therefore be strongly misleading.
A cluster merging method for time series microarray with production values.
Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio
2014-09-01
A challenging task in time-course microarray data analysis is to cluster genes meaningfully combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal obtaining groups with highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured in the same time points) and merging them by taking into account the frequency by which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim to find co-expressed genes related to the production and growth of a certain bacteria. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevant measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of time series and the same shape-based algorithm.
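The merging step above keys on how often two genes land in the same cluster across the per-replicate clusterings. A minimal sketch of such a co-association merge follows; the 0.5 threshold and the single-linkage (union-find) grouping are illustrative assumptions, not the authors' exact procedure:

```python
from itertools import combinations

def coassociation(clusterings, n_items):
    """Fraction of clusterings in which each pair shares a cluster label."""
    counts = [[0] * n_items for _ in range(n_items)]
    for labels in clusterings:
        for i, j in combinations(range(n_items), 2):
            if labels[i] == labels[j]:
                counts[i][j] += 1
                counts[j][i] += 1
    m = len(clusterings)
    return [[c / m for c in row] for row in counts]

def merge_clusters(clusterings, n_items, threshold=0.5):
    """Union-find grouping of items whose co-occurrence frequency across
    the replicate clusterings exceeds the threshold."""
    co = coassociation(clusterings, n_items)
    parent = list(range(n_items))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i, j in combinations(range(n_items), 2):
        if co[i][j] > threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(n_items):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

Pairs that agree in most replicates end up merged, while a pair that only co-occurs in a minority of replicates stays separated, which naturally ranks groups by cross-replicate agreement.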
A novel method of measuring the melting point of animal fats.
Lloyd, S S; Dawkins, S T; Dawkins, R L
2014-10-01
The melting point (TM) of fat is relevant to health, but available methods of determining TM are cumbersome. One of the standard methods of measuring TM for animal and vegetable fats is the slip point, also known as the open capillary method. This method is imprecise and not amenable to automation or mass testing. We have developed a technique for measuring TM of animal fat using the Rotor-Gene Q (Qiagen, Hilden, Germany). The assay has an intra-assay SD of 0.08°C. A single operator can extract and assay up to 250 samples of animal fat in 24 h, including the time to extract the fat from the adipose tissue. This technique will improve the quality of research into genetic and environmental contributions to fat composition of meat.
Use of precision time and time interval (PTTI)
NASA Technical Reports Server (NTRS)
Taylor, J. D.
1974-01-01
Range time synchronization methods are reviewed as an important aspect of range operations. The overall capabilities of various missile ranges to determine precise time of day by synchronizing to available references and applying this time point to instrumentation for time interval measurements are described.
NASA Astrophysics Data System (ADS)
Ilieva, Tamara; Gekov, Svetoslav
2017-04-01
The Precise Point Positioning (PPP) method gives users the opportunity to determine point locations using a single GNSS receiver. The accuracy of point locations determined by PPP is better than that of standard point positioning, due to the precise satellite orbit and clock corrections that are developed and maintained by the International GNSS Service (IGS). The aim of our current research is the accuracy assessment of the PPP method applied to surveys and tracking of moving objects in a GIS environment. The PPP data is collected using a software application we developed that allows different sets of attribute data for the measurements and their accuracy to be used. The results of the PPP measurements are compared directly within the geospatial database to other sets of terrestrial data - measurements obtained by total stations, real-time kinematic and static GNSS.
Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation
NASA Astrophysics Data System (ADS)
An, Lu; Guo, Baolong
2018-03-01
Recently, illegal constructions have appeared frequently in our surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data can be used to identify illegal buildings and thus address this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. Initially, in order to save memory space and reduce processing time, a lossless point cloud compression method based on the minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method and, as a result, illegal constructions can be marked. The effectiveness of the proposed algorithm is verified using a public data set collected by the International Society for Photogrammetry and Remote Sensing (ISPRS).
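The abstract does not specify the MST-based lossless compression in detail; one plausible reading is to delta-encode each point against its parent in a Euclidean minimum spanning tree, since neighboring points yield small offsets that entropy-code cheaply. A sketch under that assumption (Prim's O(n²) MST; names are illustrative):

```python
import math

def mst_parents(points):
    """Prim's algorithm over Euclidean distances: returns the MST parent
    index of each point (the root, point 0, gets parent -1)."""
    n = len(points)
    in_tree = [False] * n
    dist = [math.inf] * n
    parent = [-1] * n
    dist[0] = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: dist[i])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < dist[v]:
                    dist[v] = d
                    parent[v] = u
    return parent

def delta_encode(points, parent):
    """Offset of each point from its MST parent; the root is stored
    absolutely. Small offsets are what make the encoding compressible."""
    return [tuple(points[i]) if p == -1 else
            tuple(a - b for a, b in zip(points[i], points[p]))
            for i, p in enumerate(parent)]
```

Decoding walks the tree from the root and accumulates offsets, so the scheme is lossless.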
A fast point-cloud computing method based on spatial symmetry of Fresnel field
NASA Astrophysics Data System (ADS)
Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui
2017-10-01
Producing the high spatial-bandwidth product (SBP) required by real-time holographic video display systems poses a great challenge for Computer Generated Holograms (CGH). This paper is based on the point-cloud method and takes advantage of the propagation reversibility of Fresnel diffraction along the propagating direction and of the fact that the fringe pattern of a point source, known as a Gabor zone plate, has spatial symmetry, so it can be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: first, the principal fringe patterns (PFPs) at the virtual plane are pre-calculated by the acceleration algorithm and stored; second, the Fresnel diffraction fringe pattern at the dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal On Silicon (LCOS) are set up to demonstrate the validity of the proposed method. Under the premise of ensuring the quality of the 3D reconstruction, the proposed method can be applied to shorten the computational time and improve computational efficiency.
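The spatial symmetry being exploited is that a point source's fringe pattern (a Gabor zone plate) depends only on the offset from the source, so a single precomputed principal fringe pattern can be shifted and accumulated for every object point. A stdlib sketch under the Fresnel approximation; the grid size, wavelength, pixel pitch, and function names are illustrative assumptions, not values from the paper:

```python
import math

def gabor_zone_plate(size, wavelength, z, pitch):
    """Real-valued fringe of an on-axis point source at distance z under the
    Fresnel approximation: cos(pi * (x^2 + y^2) / (wavelength * z)).
    Shift-invariance of this table is the symmetry N-LUT methods exploit."""
    c = math.pi / (wavelength * z)
    half = size // 2
    return [[math.cos(c * (((i - half) * pitch) ** 2 +
                           ((j - half) * pitch) ** 2))
             for j in range(size)] for i in range(size)]

def add_point(hologram, pfp, cx, cy):
    """Accumulate the shifted principal fringe pattern for a point at (cx, cy)."""
    size = len(pfp)
    half = size // 2
    for i in range(size):
        for j in range(size):
            y, x = cy + i - half, cx + j - half
            if 0 <= y < len(hologram) and 0 <= x < len(hologram[0]):
                hologram[y][x] += pfp[i][j]
```

Because the zone plate is radially symmetric, the table is symmetric under reflection and transposition, which is what allows further storage reduction in look-up-table schemes.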
Advances in analytical methodologies to guide bioprocess engineering for bio-therapeutics.
Saldova, Radka; Kilcoyne, Michelle; Stöckmann, Henning; Millán Martín, Silvia; Lewis, Amanda M; Tuite, Catherine M E; Gerlach, Jared Q; Le Berre, Marie; Borys, Michael C; Li, Zheng Jian; Abu-Absi, Nicholas R; Leister, Kirk; Joshi, Lokesh; Rudd, Pauline M
2017-03-01
This study was performed to monitor the glycoform distribution of a recombinant antibody fusion protein expressed in CHO cells over the course of fed-batch bioreactor runs using high-throughput methods to accurately determine the glycosylation status of the cell culture and its product. Three different bioreactors running similar conditions were analysed at the same five time-points using the advanced methods described here. N-glycans from cell and secreted glycoproteins from CHO cells were analysed by HILIC-UPLC and MS, and the total glycosylation (both N- and O-linked glycans) secreted from the CHO cells were analysed by lectin microarrays. Cell glycoproteins contained mostly high mannose type N-linked glycans with some complex glycans; sialic acid was α-(2,3)-linked, galactose β-(1,4)-linked, with core fucose. Glycans attached to secreted glycoproteins were mostly complex with sialic acid α-(2,3)-linked, galactose β-(1,4)-linked, with mostly core fucose. There were no significant differences noted among the bioreactors in either the cell pellets or supernatants using the HILIC-UPLC method and only minor differences at the early time-points of days 1 and 3 by the lectin microarray method. In comparing different time-points, significant decreases in sialylation and branching with time were observed for glycans attached to both cell and secreted glycoproteins. Additionally, there was a significant decrease over time in high mannose type N-glycans from the cell glycoproteins. A combination of the complementary methods HILIC-UPLC and lectin microarrays could provide a powerful and rapid HTP profiling tool capable of yielding qualitative and quantitative data for a defined biopharmaceutical process, which would allow valuable near 'real-time' monitoring of the biopharmaceutical product. Copyright © 2016 Elsevier Inc. All rights reserved.
Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.
2016-02-02
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
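The center-selection step above ranks evaluated points by non-dominated sorting on two objectives: the expensive function value (minimize) and the negative minimum distance to the other evaluated points (so that well-spread points rank highly). A hedged sketch of just that step, omitting the tabu bookkeeping and candidate generation; names are illustrative:

```python
import math

def nondominated_fronts(objs):
    """Non-dominated sorting of (f1, f2) pairs, both minimized; returns
    lists of indices, front by front."""
    n = len(objs)
    dominated_by = [0] * n
    dominates = [[] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and objs[i][0] <= objs[j][0] \
                    and objs[i][1] <= objs[j][1] and objs[i] != objs[j]:
                dominates[i].append(j)
    for i in range(n):
        for j in dominates[i]:
            dominated_by[j] += 1
    fronts = []
    current = [i for i in range(n) if dominated_by[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominates[i]:
                dominated_by[j] -= 1
                if dominated_by[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

def sop_objectives(evaluated, fvals):
    """Per-point objectives: expensive value, and minus the minimum distance
    to the other evaluated points (maximizing spread = minimizing negative)."""
    objs = []
    for i, x in enumerate(evaluated):
        dmin = min(math.dist(x, y) for j, y in enumerate(evaluated) if j != i)
        objs.append((fvals[i], -dmin))
    return objs
```

Centers are then drawn from the leading fronts, balancing exploitation (low f) against exploration (isolated points).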
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
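For intuition about the model above, repeated event times under a subject-level gamma frailty can be simulated by Lewis-Shedler thinning: draw one frailty Z per subject and accept candidate events with probability proportional to Z·λ(t). This simulation sketch is for illustration only (the paper concerns estimation, not simulation), and its names and parameterization are assumptions:

```python
import random

def simulate_frailty_poisson(intensity, t_max, frailty_shape, seed=None):
    """Mixed Poisson point process: a gamma frailty Z with mean 1 and
    variance 1/frailty_shape multiplies a time-varying intensity lambda(t);
    event times on [0, t_max] are drawn by Lewis-Shedler thinning."""
    rng = random.Random(seed)
    z = rng.gammavariate(frailty_shape, 1.0 / frailty_shape)  # mean-1 frailty
    grid = [t_max * k / 1000.0 for k in range(1001)]
    lam_max = z * max(intensity(t) for t in grid)  # dominating rate
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)  # candidate from the dominating process
        if t > t_max:
            return z, events
        if rng.random() < z * intensity(t) / lam_max:
            events.append(t)
```

Large frailty variance (small shape) produces subjects with systematically dense or sparse event streams, which is the between-subject heterogeneity the random frailty in the paper captures.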
An Adaptive Cross-Architecture Combination Method for Graph Traversal
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Yang; Song, Shuaiwen; Kerbyson, Darren J.
2014-06-18
Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
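The top-down/bottom-up combination itself can be sketched in a few lines (the switching rule below is a simple size-ratio heuristic standing in for the paper's regression-based predictor; `alpha` and the example graph are assumptions for illustration):

```python
def hybrid_bfs(adj, source, alpha=2.0):
    """Direction-optimizing BFS sketch: run top-down while the frontier
    is small, switch to bottom-up when the frontier grows large relative
    to the unvisited set. Returns BFS levels (-1 = unreachable)."""
    n = len(adj)
    level = [-1] * n
    level[source] = 0
    frontier = {source}
    depth = 0
    while frontier:
        unvisited = [v for v in range(n) if level[v] == -1]
        next_frontier = set()
        if not unvisited or len(frontier) * alpha < len(unvisited):
            # top-down: expand edges out of the frontier
            for u in frontier:
                for v in adj[u]:
                    if level[v] == -1:
                        level[v] = depth + 1
                        next_frontier.add(v)
        else:
            # bottom-up: each unvisited vertex looks for a frontier parent
            for v in unvisited:
                if any(u in frontier for u in adj[v]):
                    level[v] = depth + 1
                    next_frontier.add(v)
        frontier = next_frontier
        depth += 1
    return level

adj = [[1, 2], [0, 3], [0, 3], [1, 2]]  # a 4-cycle
print(hybrid_bfs(adj, 0))  # → [0, 1, 1, 2]
```

A production implementation would pick the switching point from predicted edge-scan costs rather than vertex counts, which is where the regression model in the paper comes in.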
48 CFR 7.402 - Acquisition methods.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Acquisition methods. 7.402... ACQUISITION PLANNING Equipment Lease or Purchase 7.402 Acquisition methods. (a) Purchase method. (1) Generally, the purchase method is appropriate if the equipment will be used beyond the point in time when...
NASA Astrophysics Data System (ADS)
Doko, Tomoko; Chen, Wenbo; Higuchi, Hiroyoshi
2016-06-01
Satellite tracking technology has been used to reveal the migration patterns and flyways of migratory birds. In general, bird migration can be classified according to migration status: the wintering period, spring migration, breeding period, and autumn migration. To determine the migration status, the periods of these statuses should be individually determined, but there is no objective method to define 'a threshold date' for when an individual bird changes its status. The research objective is to develop an effective and objective method to determine threshold dates of migration status based on satellite-tracked data. The developed method was named the "MATCHED (Migratory Analytical Time Change Easy Detection) method". To demonstrate the method, data acquired from satellite-tracked Tundra Swans were used. The MATCHED method comprises six steps: 1) dataset preparation, 2) time frame creation, 3) automatic identification, 4) visualization of change points, 5) interpretation, and 6) manual correction. Accuracy was tested. In general, the MATCHED method proved powerful in identifying the change points between migration statuses as well as stopovers. Nevertheless, identifying "exact" threshold dates remains challenging. Limitations and applications of this method are discussed.
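The automatic-identification step (step 3) can be illustrated with a toy moving-window detector (the window size, threshold, and latitude series below are invented for illustration; the actual MATCHED criteria are more involved):

```python
def change_points(series, window=3, thresh=5.0):
    """Toy sketch of automatic change-point identification: flag a time
    frame as a candidate threshold date when the mean of the next window
    of positions jumps by more than `thresh` relative to the previous
    window. Parameters are illustrative, not from the paper."""
    flags = []
    for t in range(window, len(series) - window + 1):
        before = sum(series[t - window:t]) / window
        after = sum(series[t:t + window]) / window
        if abs(after - before) > thresh:
            flags.append(t)
    return flags

# Latitudes of a tracked bird: stationary, then migrating north
lats = [35.0, 35.1, 34.9, 35.0, 45.0, 45.2, 45.1, 45.0]
print(change_points(lats, window=3, thresh=5.0))
```

The cluster of flagged frames around the jump is why the paper pairs automatic identification with visualization and manual correction: the detector localizes a neighborhood, and a human picks the threshold date inside it.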
Time integration algorithms for the two-dimensional Euler equations on unstructured meshes
NASA Technical Reports Server (NTRS)
Slack, David C.; Whitaker, D. L.; Walters, Robert W.
1994-01-01
Explicit and implicit time integration algorithms for the two-dimensional Euler equations on unstructured grids are presented. Both cell-centered and cell-vertex finite volume upwind schemes utilizing Roe's approximate Riemann solver are developed. For the cell-vertex scheme, a four-stage Runge-Kutta time integration, a four-stage Runge-Kutta time integration with implicit residual averaging, a point Jacobi method, a symmetric point Gauss-Seidel method, and two methods utilizing preconditioned sparse matrix solvers are presented. For the cell-centered scheme, a Runge-Kutta scheme, an implicit tridiagonal relaxation scheme modeled after line Gauss-Seidel, a fully implicit lower-upper (LU) decomposition, and a hybrid scheme utilizing both Runge-Kutta and LU methods are presented. A reverse Cuthill-McKee renumbering scheme is employed for the direct solver to decrease CPU time by reducing the fill of the Jacobian matrix. A comparison of the various time integration schemes is made for both first-order and higher-order accurate solutions using several mesh sizes; higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The results obtained for a transonic flow over a circular arc suggest that the preconditioned sparse matrix solvers perform better than the other methods as the number of elements in the mesh increases.
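The classic four-stage Runge-Kutta scheme mentioned above can be sketched for a generic ODE du/dt = f(u) (a textbook sketch, not the paper's flux-based Euler formulation):

```python
def rk4_step(f, u, dt):
    """One classic four-stage Runge-Kutta step for du/dt = f(u)."""
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate du/dt = -u from u(0) = 1 to t = 1; exact answer is exp(-1)
u, dt = 1.0, 0.1
for _ in range(10):
    u = rk4_step(lambda x: -x, u, dt)
print(u)
```

In the flow-solver setting, `u` would be the vector of conserved variables and `f` the negated residual of the spatial discretization; the stage structure is identical.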
NASA Astrophysics Data System (ADS)
Huynh, Benjamin Q.; Antropova, Natasha; Giger, Maryellen L.
2017-03-01
DCE-MRI datasets have a temporal aspect to them, resulting in multiple regions of interest (ROIs) per subject, based on contrast time points. It is unclear how the different contrast time points vary in terms of usefulness for computer-aided diagnosis tasks in conjunction with deep learning methods. We thus sought to compare the different DCE-MRI contrast time points with regard to how well their extracted features predict response to neoadjuvant chemotherapy within a deep convolutional neural network. Our dataset consisted of 561 ROIs from 64 subjects. Each subject was categorized as a non-responder or responder, determined by recurrence-free survival. First, features were extracted from each ROI using a convolutional neural network (CNN) pre-trained on non-medical images. Linear discriminant analysis classifiers were then trained on varying subsets of these features, based on their contrast time points of origin. Leave-one-out cross validation (by subject) was used to assess performance in the task of estimating probability of response to therapy, with area under the ROC curve (AUC) as the metric. The classifier trained on features from strictly the pre-contrast time point performed the best, with an AUC of 0.85 (SD = 0.033). The remaining classifiers resulted in AUCs ranging from 0.71 (SD = 0.028) to 0.82 (SD = 0.027). Overall, we found the pre-contrast time point to be the most effective at predicting response to therapy and that including additional contrast time points moderately reduces variance.
Burnstein, Bryan D.; Steele, Russell J.; Shrier, Ian
2011-01-01
Context: Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. Objective: To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Design: Cohort study. Setting: Eighteen different Cirque du Soleil shows. Patients or Other Participants: Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Intervention(s): Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. Main Outcome Measure(s): We calculated the intraclass correlation coefficient (ICC) and limits of agreement between baseline and each time point and the ICC over all 4 time points combined. Results: Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limit-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Conclusions: Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators. PMID:22488138
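The ICC across repeated time points can be computed with the standard one-way random-effects formula (a generic sketch; the study may have used a different ICC variant, and the scores below are invented):

```python
def icc_oneway(data):
    """One-way random-effects ICC(1,1) for test-retest reliability:
    (MSB - MSW) / (MSB + (k - 1) * MSW), with n subjects measured at
    k time points each. A standard textbook formula, used here as an
    illustration of the reliability metric in the abstract."""
    n = len(data)          # subjects
    k = len(data[0])       # time points per subject
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    # between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, subj_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# 3 performers x 4 sessions of a hypothetical fitness score
scores = [[10, 11, 10, 11], [20, 19, 21, 20], [30, 31, 30, 29]]
icc = icc_oneway(scores)
print(round(icc, 3))
```

With these toy scores the subjects are well separated and each subject is stable across sessions, so the ICC lands near 1, comfortably above the 0.6 acceptability cutoff used in the study.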
Effect of Receiver Choosing on Point Positions Determination in Network RTK
NASA Astrophysics Data System (ADS)
Bulbul, Sercan; Inal, Cevat
2016-04-01
Nowadays, developments in GNSS techniques allow point positions to be determined in real time. Initially, point positioning was determined by RTK (Real Time Kinematic) based on a single reference station. In this method, however, to avoid systematic errors, the distance between the reference point and the rover receiver must be shorter than 10 km. To overcome this restriction of the RTK method, the idea of using more than one reference point was suggested, and CORS (Continuously Operating Reference Systems) were put into practice. Today, countries such as the USA, Germany, and Japan have established CORS networks. The CORS-TR network, which has 146 reference stations, was established in Turkey in 2009, adopting the active CORS approach. The CORS-TR reference stations covering the whole country are interconnected, and the positions of these stations and atmospheric corrections are continuously calculated. In this study, at a selected point, RTK measurements based on CORS-TR were made with different receivers (JAVAD TRIUMPH-1, TOPCON Hiper V, MAGELLAN PRoMark 500, PENTAX SMT888-3G, SATLAB SL-600) and with different correction techniques (VRS, FKP, MAC). In the measurements, the epoch interval was taken as 5 seconds and the measurement time as 1 hour. For each receiver and each correction technique, the means and the differences between maximum and minimum values of the measured coordinates, the root mean square errors along the coordinate axes, and the 2D and 3D positioning precisions were calculated; the results were evaluated by statistical methods and the resulting graphics were interpreted. After evaluation of the measurements and calculations, for each receiver and each correction technique, the coordinate differences between maximum and minimum values were less than 8 cm, the root mean square errors along the coordinate axes less than ±1.5 cm, and the 2D and 3D point positioning precisions each less than ±1.5 cm. At the measurement point, it was concluded that the VRS correction technique is generally better than the other correction techniques.
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-15
The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. 
In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with diagonal and full Jacobians, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
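The core Newton iteration with an analytical Jacobian can be illustrated on a toy 2x2 nonlinear system (the paper solves the discretized Navier-Stokes equations with Krylov linear solves; here a direct 2x2 solve stands in, and the example system is an assumption for illustration):

```python
def newton_solve(F, J, x, tol=1e-10, max_iter=50):
    """Newton iteration x <- x - J(x)^-1 F(x) for a 2x2 nonlinear system,
    using an analytical Jacobian. Illustrates why Newton methods converge
    in few iterations when the Jacobian is cheap and accurate."""
    for _ in range(max_iter):
        f1, f2 = F(x)
        if abs(f1) < tol and abs(f2) < tol:
            break
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        # solve J dx = F by Cramer's rule (a Krylov solver in the paper)
        dx = ((d * f1 - b * f2) / det, (-c * f1 + a * f2) / det)
        x = (x[0] - dx[0], x[1] - dx[1])
    return x

# Solve x^2 + y^2 = 2, x - y = 0, whose positive root is (1, 1)
F = lambda p: (p[0] ** 2 + p[1] ** 2 - 2, p[0] - p[1])
J = lambda p: ((2 * p[0], 2 * p[1]), (1.0, -1.0))  # analytical Jacobian
root = newton_solve(F, J, (2.0, 0.5))
print(root)
```

The quadratic convergence visible here is the payoff the abstract describes; the cost question is entirely in how J is formed and how the linear solve is done at scale.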
NASA Astrophysics Data System (ADS)
Bornemann, Pierrick; Malet, Jean-Philippe; Stumpf, André; Puissant, Anne; Travelletti, Julien
2016-04-01
Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud information is not sufficiently dense. These limits can be overcome by deformation analyses that directly exploit the original 3D point clouds, under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 to October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds were acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not preserve the original angles and distances in the correlated images. 
Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the displacement fields. Displacement fields derived from both approaches are then combined and provide a better understanding of the landslide kinematics.
Li, Hui
2009-11-14
Linear response and variational treatment are formulated for Hartree-Fock (HF) and Kohn-Sham density functional theory (DFT) methods and combined discrete-continuum solvation models that incorporate self-consistently induced dipoles and charges. Due to the variational treatment, analytic nuclear gradients can be evaluated efficiently for these discrete and continuum solvation models. The forces and torques on the induced point dipoles and point charges can be evaluated using simple electrostatic formulas, as for permanent point dipoles and point charges, in accordance with the electrostatic nature of these methods. Implementation and tests using the effective fragment potential (EFP, a polarizable force field) method and the conductor-like polarizable continuum model (CPCM) show that the nuclear gradients are as accurate as those in the gas-phase HF and DFT methods. Using B3LYP/EFP/CPCM and time-dependent-B3LYP/EFP/CPCM methods, the acetone S(0)-->S(1) excitation in aqueous solution is studied. The results are close to those from full B3LYP/CPCM calculations.
NASA Astrophysics Data System (ADS)
Ulug, R.; Ozludemir, M. T.
2016-12-01
After 2011, through the modernization process of GLONASS, the number of satellites increased rapidly. This progress has made GLONASS the only fully operational alternative to GPS for point positioning. So far, many studies have investigated the contribution of GLONASS to point positioning with different methods, such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP). The latter, PPP, is a method that performs precise position determination using a single GNSS receiver. The PPP method has become very attractive since the early 2000s and provides great advantages for engineering and scientific applications. However, the PPP method needs at least 2 hours of observations, and the required observation length may be longer depending on several factors, such as the number of satellites and the satellite configuration. The more satellites, the less observation time is needed; nevertheless, the impact of the number of included satellites must be well understood. In this study, to determine the contribution of GLONASS to PPP, GLONASS satellite observations were added one by one, from 1 to 5 satellites, to 2, 4 and 6 hours of observations. For this purpose, the data collected at the IGS site ISTA were used. Data processing was done for Day of Year (DOY) 197 in 2016. The 24-hour GPS observations were processed by the Bernese 5.2 PPP module and the output was selected as the reference, while the 2, 4 and 6 hour GPS and GPS/GLONASS observations were processed by the magicGNSS PPP module. The results clearly showed that GPS/GLONASS observations improved positional accuracy, precision, dilution of precision and convergence to the reference coordinates. In this context, the coordinate differences between the 24-hour GPS observations and the 6-hour GPS/GLONASS observations were less than 2 cm.
Pump-Probe Spectroscopy Using the Hadamard Transform.
Beddard, Godfrey S; Yorke, Briony A
2016-08-01
A new method of performing pump-probe experiments is proposed and experimentally demonstrated by a proof of concept on the millisecond scale. The idea behind this method is to measure the total probe intensity arising from several time points as a group, instead of measuring each time point separately. These multiplexed measurements are then transformed into the true signal via the inverse of a binary Hadamard S matrix. Each group of probe pulses is determined by the pattern of a row of the Hadamard S matrix, and the experiment is completed by rotating this pattern by one step for each sample excitation until the original pattern is again produced. Thus, to measure n time points, n excitation events are needed, with n probe patterns each taken from the n × n S matrix. The time resolution is determined by the shortest time between the probe pulses. In principle, this method could be used over all timescales, instead of the conventional pump-probe method, which uses delay lines for picosecond and faster time resolution, or fast detectors and oscilloscopes on longer timescales. This new method is particularly suitable for situations where the probe intensity is weak and/or the detector is noisy. When the detector is noisy, there is in principle a signal-to-noise advantage over conventional pump-probe methods. © The Author(s) 2016.
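The S-matrix encode/decode cycle can be sketched as follows (a generic Sylvester-based construction with the standard inverse formula S⁻¹ = (2/(n+1))(2Sᵀ − J); the 7-point signal is an invented example, not experimental data):

```python
def s_matrix(m):
    """Build the (2^m - 1) x (2^m - 1) Hadamard S matrix by taking the
    core of a Sylvester Hadamard matrix and mapping +1 -> 0, -1 -> 1.
    Sketch of the multiplexing math, not the pulse sequencing itself."""
    n = 2 ** m
    H = [[(-1) ** bin(i & j).count("1") for j in range(n)] for i in range(n)]
    return [[(1 - H[i][j]) // 2 for j in range(1, n)] for i in range(1, n)]

def decode(S, y):
    """Recover x from grouped sums y = S x using
    S^-1 = (2/(n+1)) * (2 S^T - J), J the all-ones matrix."""
    n = len(S)
    return [round(2.0 / (n + 1) * sum((2 * S[j][i] - 1) * y[j]
                                      for j in range(n)), 9)
            for i in range(n)]

S = s_matrix(3)                        # 7 x 7 S matrix, 4 ones per row
x = [3, 1, 4, 1, 5, 9, 2]              # true signal at 7 time points
# each measurement sums the probe intensity over one row's time points
y = [sum(S[i][j] * x[j] for j in range(7)) for i in range(7)]
print(decode(S, y))
```

Each row of S has (n+1)/2 ones, so every measurement collects about half of the total probe intensity; this grouping is the source of the multiplexing (Fellgett-type) signal-to-noise advantage mentioned in the abstract.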
Hand-eye calibration for rigid laparoscopes using an invariant point.
Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J
2016-06-01
Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
Space Technology 5 Multi-point Measurements of Near-Earth Magnetic Fields: Initial Results
NASA Technical Reports Server (NTRS)
Slavin, James A.; Le, G.; Strangeway, R. L.; Wang, Y.; Boardsen, S.A.; Moldwin, M. B.; Spence, H. E.
2007-01-01
The Space Technology 5 (ST-5) mission successfully placed three micro-satellites in a 300 x 4500 km dawn-dusk orbit on 22 March 2006. Each spacecraft carried a boom-mounted vector fluxgate magnetometer that returned highly sensitive and accurate measurements of the geomagnetic field. These data allow, for the first time, the separation of temporal and spatial variations in field-aligned current (FAC) perturbations measured in low-Earth orbit on time scales of approximately 10 sec to 10 min. The constellation measurements are used to directly determine field-aligned current sheet motion, thickness and current density. In doing so, we demonstrate two multi-point methods for the inference of FAC current density that have not previously been possible in low-Earth orbit; 1) the "standard method," based upon s/c velocity, but corrected for FAC current sheet motion, and 2) the "gradiometer method" which uses simultaneous magnetic field measurements at two points with known separation. Future studies will apply these methods to the entire ST-5 data set and expand to include geomagnetic field gradient analyses as well as field-aligned and ionospheric currents.
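The gradiometer method reduces to a one-dimensional Ampere's-law estimate across the current sheet (slab geometry; the field difference and spacecraft separation below are illustrative numbers, not ST-5 data):

```python
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T·m/A

def gradiometer_current_density(delta_b_nt, separation_km):
    """Infer field-aligned current sheet density from the difference in
    the transverse magnetic field measured simultaneously at two
    spacecraft: J = dB / (mu0 * dx) in 1-D slab geometry."""
    delta_b = delta_b_nt * 1e-9   # nT -> T
    dx = separation_km * 1e3      # km -> m
    return delta_b / (MU0 * dx)   # A/m^2

# e.g. a 100 nT difference across a 50 km spacecraft separation
j = gradiometer_current_density(100.0, 50.0)
print(j)  # on the order of 1e-6 A/m^2
```

The advantage over the single-spacecraft "standard method" is visible in the formula: the spatial baseline dx is known from the constellation geometry, so the estimate does not depend on assuming the current sheet is static as the spacecraft flies through it.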
Design of sewage treatment system by applying fuzzy adaptive PID controller
NASA Astrophysics Data System (ADS)
Jin, Liang-Ping; Li, Hong-Chan
2013-03-01
In a sewage treatment system, the dissolved oxygen concentration is difficult to control because the process is nonlinear and time-varying, with large time delays and uncertainty, so an exact mathematical model is hard to establish. A conventional PID controller works well only in the roughly linear region near its operating point, and control becomes difficult when the system moves far from that point. To solve these problems, this paper proposes a method that combines fuzzy control with PID methods, and designs a fuzzy adaptive PID controller based on an S7-300 PLC. It employs fuzzy inference to tune the PID parameters online. Simulation and practical application of the control algorithm show that the system has stronger robustness and better adaptability.
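The idea of fuzzy online tuning can be sketched with a crude gain-scheduling rule (the membership breakpoints, gains, and toy plant below are invented for illustration; a real design would derive fuzzy rules from the dissolved-oxygen loop and run them on the PLC):

```python
def fuzzy_gain(error, base_kp=2.0):
    """Crude fuzzy-style scheduling of the proportional gain:
    larger gain for large errors, gentler gain near the set point.
    Stand-in for a real fuzzy inference rule base."""
    e = abs(error)
    if e > 2.0:            # "large" error
        return base_kp * 1.5
    if e > 0.5:            # "medium" error
        return base_kp
    return base_kp * 0.6   # "small" error: soften near the set point

def pid_step(setpoint, measured, state, ki=0.1, kd=0.5, dt=1.0):
    """One PID step with the fuzzy-adjusted proportional gain."""
    err = setpoint - measured
    state["i"] += err * dt
    d = (err - state["e"]) / dt
    state["e"] = err
    return fuzzy_gain(err) * err + ki * state["i"] + kd * d

# Drive a toy first-order dissolved-oxygen plant toward a set point of 5.0
state = {"i": 0.0, "e": 0.0}
do = 0.0
for _ in range(50):
    u = pid_step(5.0, do, state)
    do += 0.1 * (u - do)   # toy plant dynamics, not a sewage-plant model
print(do)
```

The point of the sketch is the structure: the PID law is unchanged, but its gains are recomputed each cycle from the current error, which is what lets the controller behave reasonably away from a single linearization point.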
A time domain inverse dynamic method for the end point tracking control of a flexible manipulator
NASA Technical Reports Server (NTRS)
Kwon, Dong-Soo; Book, Wayne J.
1991-01-01
The inverse dynamic equation of a flexible manipulator was solved in the time domain. By dividing the inverse system equation into the causal part and the anticausal part, we calculated the torque and the trajectories of all state variables for a given end point trajectory. The interpretation of this method in the frequency domain was explained in detail using the two-sided Laplace transform and the convolution integral. The open loop control of the inverse dynamic method shows an excellent result in simulation. For real applications, a practical control strategy is proposed by adding a feedback tracking control loop to the inverse dynamic feedforward control, and its good experimental performance is presented.
Moire technique utilization for detection and measurement of scoliosis
NASA Astrophysics Data System (ADS)
Zawieska, Dorota; Podlasiak, Piotr
1993-02-01
The moire projection method enables non-contact measurement of the shape or deformation of different surfaces and constructions by fringe pattern analysis. Acquiring a fringe map of the whole surface of the object under test is one of the main advantages compared with point-by-point methods. The computer analyzes the shape of the whole surface, and the user can then select different points or cross-sections of the object map. In this paper a few typical examples of the application of the moire technique to different medical problems are presented. We also present equipment in which the moire pattern analysis is done in real time using the phase-stepping method with a CCD camera.
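The phase-stepping computation at the heart of such systems can be sketched with the standard four-step algorithm (the abstract does not state how many steps the authors used, so this is the generic technique, not necessarily theirs):

```python
import math

def phase_from_four_steps(i1, i2, i3, i4):
    """Recover the wrapped fringe phase from four intensity frames taken at
    phase shifts 0, pi/2, pi, 3pi/2. With I_k = A + B*cos(phi + (k-1)*pi/2),
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), so
    phi = atan2(I4 - I2, I1 - I3)."""
    return math.atan2(i4 - i2, i1 - i3)
```

Applied per pixel to four CCD frames, this yields a wrapped phase map proportional to surface height, which is then unwrapped and scaled by the projection geometry.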
A Novel Health Evaluation Strategy for Multifunctional Self-Validating Sensors
Shen, Zhengguang; Wang, Qi
2013-01-01
The performance evaluation of sensors is very important in actual applications. In this paper, a theory based on multi-variable information fusion is studied to evaluate the health level of multifunctional sensors. A novel concept of health reliability degree (HRD) is defined to indicate a quantitative health level, in contrast to traditional, qualitative fault diagnosis. To evaluate the health condition from both local and global perspectives, the HRD of a single sensitive component at multiple time points and of the overall multifunctional sensor at a single time point are defined, respectively. The HRD methodology uses multi-variable data fusion technology coupled with a grey comprehensive evaluation method, in which the information entropy and analytic hierarchy process methods are used to acquire the distinct importance of each sensitive unit and the sensitivity of different time points, respectively. In order to verify the feasibility of the proposed strategy, a health-evaluation experimental system for multifunctional self-validating sensors was designed, and five different health-level situations were discussed. The results show that the proposed method is feasible, that the HRD can quantitatively indicate the health level, and that it responds quickly to performance changes of multifunctional sensors. PMID:23291576
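The entropy-weighting step mentioned above (assigning importance to indicators by how much they vary) can be sketched as follows; the data matrix is invented, and this shows only the generic entropy-weight method, not the paper's full HRD pipeline:

```python
import math

def entropy_weights(matrix):
    """Entropy-weight method: indicators (columns) with more dispersion across
    samples (rows) carry more information and receive more weight.
    matrix[i][j] = positive reading of indicator j for sample i."""
    n, m = len(matrix), len(matrix[0])
    raw = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        # Normalised Shannon entropy of the column (1 = perfectly uniform).
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        raw.append(1.0 - e)          # divergence degree
    s = sum(raw)
    return [w / s for w in raw]      # weights sum to 1
```

A constant indicator gets entropy 1 and hence weight 0; a highly dispersed indicator dominates the fused score.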
Albers, D. J.; Hripcsak, George
2012-01-01
A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database. PMID:22536009
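The core idea above, that the estimator bias is approximately the mutual information between points separated by a very long lag, can be sketched with a plain histogram (plug-in) MI estimate. This is an illustrative reduction, not the authors' estimator; bin count and lag are arbitrary choices.

```python
import math, random

def mutual_information(x, y, bins=8):
    """Plug-in (histogram) estimate of I(X;Y) in nats."""
    n = len(x)
    lo_x, hi_x, lo_y, hi_y = min(x), max(x), min(y), max(y)
    def b(v, lo, hi):
        return min(bins - 1, int((v - lo) / (hi - lo + 1e-12) * bins))
    joint, px, py = {}, [0.0] * bins, [0.0] * bins
    for xi, yi in zip(x, y):
        i, j = b(xi, lo_x, hi_x), b(yi, lo_y, hi_y)
        joint[(i, j)] = joint.get((i, j), 0.0) + 1.0 / n
        px[i] += 1.0 / n
        py[j] += 1.0 / n
    return sum(p * math.log(p / (px[i] * py[j])) for (i, j), p in joint.items())

def bias_estimate(series, large_lag):
    """Approximate the estimator bias as the MI between samples separated by a
    lag long enough that any true dependence has decayed."""
    return mutual_information(series[:-large_lag], series[large_lag:])
```

For independent data the true MI is zero, so whatever `bias_estimate` returns is (an estimate of) the estimator's bias at that sample size.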
Time Reversal Methods for Structural Health Monitoring of Metallic Structures Using Guided Waves
2011-09-01
measure elastic properties of thin isotropic materials and laminated composite plates. Two types of waves propagate: a symmetric wave and an antisymmetric...compare it to the original signal. In this time reversal procedure, wave propagation from point-A to point-B can be modeled as a convolution ...where * is the convolution operator and the transducer transmit and receive transfer functions are neglected for simplification. In the frequency
NASA Astrophysics Data System (ADS)
Nassiri, Isar; Lombardo, Rosario; Lauria, Mario; Morine, Melissa J.; Moyseos, Petros; Varma, Vijayalakshmi; Nolen, Greg T.; Knox, Bridgett; Sloper, Daniel; Kaput, Jim; Priami, Corrado
2016-07-01
The investigation of the complex processes involved in cellular differentiation must be based on unbiased, high throughput data processing methods to identify relevant biological pathways. A number of bioinformatics tools are available that can generate lists of pathways ranked by statistical significance (i.e. by p-value), while ideally it would be desirable to functionally score the pathways relative to each other or to other interacting parts of the system or process. We describe a new computational method (Network Activity Score Finder - NASFinder) to identify tissue-specific, omics-determined sub-networks and the connections with their upstream regulator receptors to obtain a systems view of the differentiation of human adipocytes. Adipogenesis of human SBGS pre-adipocyte cells in vitro was monitored with a transcriptomic data set comprising six time points (0, 6, 48, 96, 192, 384 hours). To elucidate the mechanisms of adipogenesis, NASFinder was used to perform time-point analysis by comparing each time point against the control (0 h) and time-lapse analysis by comparing each time point with the previous one. NASFinder identified the coordinated activity of seemingly unrelated processes between each comparison, providing the first systems view of adipogenesis in culture. NASFinder has been implemented into a web-based, freely available resource associated with novel, easy to read visualization of omics data sets and network modules.
USDA-ARS?s Scientific Manuscript database
Methods to monitor microbial contamination typically involve collecting discrete samples at specific time-points and analyzing for a single contaminant. While informative, many of these methods suffer from poor recovery rates and only provide a snapshot of the microbial load at the time of collectio...
Liu, Bao; Fan, Xiaoming; Huo, Shengnan; Zhou, Lili; Wang, Jun; Zhang, Hui; Hu, Mei; Zhu, Jianhua
2011-12-01
A method was established to analyse overlapped chromatographic peaks based on chromatographic-spectral data detected by a diode-array ultraviolet detector. In the method, the three-dimensional data are first de-noised and normalized; second, the differences between, and clustering of, the spectra at different time points are calculated; then the purity of the whole chromatographic peak is analysed, and the region in which the spectra at different time points are stable is sought out. The feature spectra extracted from this spectrum-stable region serve as the basic foundation. The non-negative least-squares method is chosen to separate the overlapped peaks and obtain the flow curve based on each feature spectrum. The three-dimensional separated chromatographic-spectral peak can then be obtained by matrix operations combining the feature spectra with the flow curves. The results show that this method can separate the overlapped peaks.
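The non-negative least-squares step can be sketched as follows with two synthetic feature spectra; the spectra and mixing coefficients are invented, and `scipy.optimize.nnls` stands in for whatever NNLS implementation the authors used:

```python
import numpy as np
from scipy.optimize import nnls

wavelengths = np.arange(50)
s1 = np.exp(-0.5 * ((wavelengths - 15) / 4.0) ** 2)   # feature spectrum of A
s2 = np.exp(-0.5 * ((wavelengths - 32) / 5.0) ** 2)   # feature spectrum of B
S = np.column_stack([s1, s2])                         # feature-spectra matrix

# Measured spectrum at one time point: a 2:1 mixture of A and B (noise-free).
mixed = 2.0 * s1 + 1.0 * s2

# Non-negative contributions of each component at this time point.
conc, residual = nnls(S, mixed)
```

Repeating the solve at every time point across the peak yields the per-component flow curves; multiplying each feature spectrum by its flow curve reconstructs the separated three-dimensional peaks.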
About one counterexample of applying method of splitting in modeling of plating processes
NASA Astrophysics Data System (ADS)
Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Korobova, I. L.
2018-05-01
The paper presents the main factors that affect the uniformity of the thickness distribution of plating on the surface of a product. The experimental search for the optimal values of these factors is expensive and time-consuming, so the problem of adequately simulating coating processes is very relevant. The finite-difference approximation using seven-point and five-point templates, in combination with the splitting method, is considered as a solution method for the model equations. To study the correctness of the solution of the mathematical model's equations by these methods, experiments were conducted on plating with a flat anode and cathode whose relative position in the bath was not changed. The studies showed that the solution using the splitting method was up to 1.5 times faster, but it did not give adequate results due to the geometric features of the task under the given boundary conditions.
Automated Approach to Very High-Order Aeroacoustic Computations. Revision
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2001-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
CI2 for creating and comparing confidence-intervals for time-series bivariate plots.
Mullineaux, David R
2017-02-01
Currently no method exists for calculating and comparing the confidence-intervals (CI) for the time-series of a bivariate plot. The study's aim was to develop 'CI2' as a method to calculate the CI on time-series bivariate plots, and to identify if the CI between two bivariate time-series overlap. The test data were the knee and ankle angles from 10 healthy participants running on a motorised standard-treadmill and non-motorised curved-treadmill. For a recommended 10+ trials, CI2 involved calculating 95% confidence-ellipses at each time-point, then taking as the CI the points on the ellipses that were perpendicular to the direction vector between the means of two adjacent time-points. Consecutive pairs of CI created convex quadrilaterals, and any overlap of these quadrilaterals at the same time or ±1 frame as a time-lag calculated using cross-correlations, indicated where the two time-series differed. CI2 showed no group differences between left and right legs on both treadmills, but the same legs between treadmills for all participants showed differences of less knee extension on the curved-treadmill before heel-strike. To improve and standardise the use of CI2 it is recommended to remove outlier time-series, use 95% confidence-ellipses, and scale the ellipse by the fixed Chi-square value as opposed to the sample-size dependent F-value. For practical use, and to aid in standardisation or future development of CI2, Matlab code is provided. CI2 provides an effective method to quantify the CI of bivariate plots, and to explore the differences in CI between two bivariate time-series. Copyright © 2016 Elsevier B.V. All rights reserved.
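The per-time-point confidence ellipse that CI2 builds on can be sketched as follows, using the fixed chi-square scaling the authors recommend over the sample-size-dependent F-value. This is only the ellipse construction, not the full CI2 quadrilateral-overlap test; the test data are invented.

```python
import math

CHI2_95_DF2 = 5.991  # fixed 95% chi-square critical value, 2 degrees of freedom

def confidence_ellipse(points):
    """95% confidence ellipse for 2-D data: returns (centre, semi-axes, angle).
    Scaled by the fixed chi-square value rather than the F-value."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # Eigenvalues of the 2x2 covariance matrix give the axis lengths.
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(0.0, (tr / 2) ** 2 - det))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    if abs(sxy) > 1e-15:
        angle = math.atan2(l1 - sxx, sxy)        # major-axis orientation
    else:
        angle = 0.0 if sxx >= syy else math.pi / 2
    a = math.sqrt(CHI2_95_DF2 * l1)              # semi-major axis
    b = math.sqrt(CHI2_95_DF2 * max(0.0, l2))    # semi-minor axis
    return (mx, my), (a, b), angle
```

In CI2, one such ellipse is computed at each time point of the bivariate (knee-ankle) trajectory, and the CI points are taken where the ellipse is perpendicular to the direction between adjacent time-point means.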
Statistical aspects of point count sampling
Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
Marker Registration Technique for Handwritten Text Marker in Augmented Reality Applications
NASA Astrophysics Data System (ADS)
Thanaborvornwiwat, N.; Patanukhom, K.
2018-04-01
Marker registration is a fundamental process for estimating camera poses in marker-based Augmented Reality (AR) systems. We developed an AR system that creates corresponding virtual objects on handwritten text markers. This paper presents a new registration method that is robust to low-content text markers, variation of camera poses, and variation of handwriting styles. The proposed method uses Maximally Stable Extremal Regions (MSER) and polygon simplification for feature point extraction. The experiments show that only five feature points per image need to be extracted to provide the best registration results. An exhaustive search is used to find the best matching pattern of the feature points in two images. We also compared the performance of the proposed method with some existing registration methods and found that it provides better accuracy and time efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vargas, L.S.; Quintana, V.H.; Vannelli, A.
This paper deals with the use of Successive Linear Programming (SLP) for the solution of the Security-Constrained Economic Dispatch (SCED) problem. The authors give a tutorial description of an Interior Point Method (IPM) for the solution of Linear Programming (LP) problems, discussing important implementation issues that make this method far superior to the simplex method. A study of the convergence of the SLP technique and a practical criterion to avoid oscillatory behavior in the iteration process are also proposed. A comparison of the proposed method with an efficient simplex code (MINOS) is carried out by solving SCED problems on two standard IEEE systems. The results show that the interior point technique is reliable, accurate and more than two times faster than the simplex algorithm.
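As a toy illustration of the kind of LP solved at each SLP step, consider a two-generator dispatch with made-up costs and limits meeting a 150 MW demand. This is not the paper's SCED formulation, and SciPy's HiGHS solver stands in for the authors' IPM and MINOS codes:

```python
from scipy.optimize import linprog

cost = [20.0, 30.0]                      # $/MWh for generators 1 and 2 (invented)
A_eq = [[1.0, 1.0]]                      # power balance: p1 + p2 = demand
b_eq = [150.0]
bounds = [(20.0, 100.0), (20.0, 120.0)]  # generator output limits in MW

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
# The cheaper unit is dispatched to its upper limit: p1 = 100 MW, p2 = 50 MW.
```

Security constraints would enter as additional inequality rows (line-flow limits), and SLP would re-linearize and re-solve this LP until the dispatch converges.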
Groundwater flux estimation in streams: A thermal equilibrium approach
Zhou, Yan; Fox, Garey A.; Miller, Ron B.; Mollenhauer, Robert; Brewer, Shannon K.
2018-01-01
Stream and groundwater interactions play an essential role in regulating flow, temperature, and water quality for stream ecosystems. Temperature gradients have been used to quantify vertical water movement in the streambed since the 1960s, but advancements in thermal methods are still possible. Seepage runs are a method commonly used to quantify exchange rates through a series of streamflow measurements but can be labor and time intensive. The objective of this study was to develop and evaluate a thermal equilibrium method as a technique for quantifying groundwater flux using monitored stream water temperature at a single point and readily available hydrological and atmospheric data. Our primary assumption was that stream water temperature at the monitored point was at thermal equilibrium with the combination of all heat transfer processes, including mixing with groundwater. By expanding the monitored stream point into a hypothetical, horizontal one-dimensional thermal modeling domain, we were able to simulate the thermal equilibrium achieved with known atmospheric variables at the point and quantify unknown groundwater flux by calibrating the model to the resulting temperature signature. Stream water temperatures were monitored at single points at nine streams in the Ozark Highland ecoregion and five reaches of the Kiamichi River to estimate groundwater fluxes using the thermal equilibrium method. When validated by comparison with seepage runs performed at the same time and reach, estimates from the two methods agreed with each other with an R2 of 0.94, a root mean squared error (RMSE) of 0.08 (m/d) and a Nash–Sutcliffe efficiency (NSE) of 0.93. In conclusion, the thermal equilibrium method was a suitable technique for quantifying groundwater flux with minimal cost and simple field installation given that suitable atmospheric and hydrological data were readily available.
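The agreement statistics quoted above can be computed as follows; the observed/simulated values in the test are invented, and this shows only the generic metric definitions:

```python
import math

def rmse(obs, sim):
    """Root mean squared error between observed and simulated values."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect match; 0 means the model is
    no better than simply predicting the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den
```

Here `obs` would be the seepage-run flux estimates and `sim` the thermal-equilibrium estimates for the same reaches.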
Low, Ariana; Kok, Si Ling; Khong, Yuetmei; Chan, Sui Yung; Gokhale, Rajeev
2015-11-01
No standard time or pharmacopoeial disintegration test method for orodispersible films (ODFs) exists. The USP disintegration test for tablets and capsules poses significant challenges for end-point determination when used for ODFs. We tested a newly developed disintegration test unit (DTU) against the USP disintegration test. The DTU is an accessory to the USP disintegration apparatus. It holds the ODF in a horizontal position, allowing top-view of the ODF during testing. A Gauge R&R study was conducted to assign relative contributions of the total variability from the operator, sample or the experimental set-up. Precision was compared using commercial ODF products in different media. Agreement between the two measurement methods was analysed. The DTU showed improved repeatability and reproducibility compared to the USP disintegration system, with tighter standard deviations regardless of operator or medium. There is good agreement between the two methods, with the USP disintegration test giving generally longer disintegration times, possibly due to difficulty in end-point determination. The DTU provided clear end-point determination and is suitable for quality control of ODFs during the product development stage or manufacturing. This may facilitate the development of a standardized methodology for disintegration time determination of ODFs. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association. J Pharm Sci 104:3893-3903, 2015.
NASA Astrophysics Data System (ADS)
Monnier, F.; Vallet, B.; Paparoditis, N.; Papelard, J.-P.; David, N.
2013-10-01
This article presents a generic and efficient method to register terrestrial mobile data with imperfect location on a geographic database with better overall accuracy but less detail. The registration method proposed in this paper is based on a semi-rigid point-to-plane ICP ("Iterative Closest Point"). The main applications of such registration are to improve existing geographic databases, particularly in terms of accuracy, level of detail and diversity of represented objects. Other applications include fine geometric modelling and fine façade texturing, and object extraction such as trees, poles, road signs and marks, facilities, vehicles, etc. The geopositioning system of mobile mapping systems is affected by GPS masks that are only partially corrected by an Inertial Navigation System (INS), which can cause significant drift. As this drift varies non-linearly, but slowly in time, it is modelled by a translation defined as a piecewise linear function of time whose variation over time is minimized (rigidity term). For each iteration of the ICP, the drift is estimated in order to minimise the distance between laser points and planar model primitives (data attachment term). The method has been tested on real data (a scan of the city of Paris of 3.6 million laser points registered on a 3D model of approximately 71,400 triangles).
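The data-attachment term of the point-to-plane ICP can be sketched for the simplest case of a single constant translation. The paper additionally makes the translation a piecewise-linear function of time with a rigidity term; that extension, and the closest-point association step, are omitted here.

```python
import numpy as np

def point_to_plane_translation(points, plane_points, plane_normals):
    """Find the translation t minimising sum(((p_k + t - q_k) . n_k)^2), where
    p_k are laser points matched to model planes (point q_k, unit normal n_k).
    This is a linear least-squares problem in t."""
    A = np.asarray(plane_normals, dtype=float)                  # (m, 3)
    diff = np.asarray(points, dtype=float) - np.asarray(plane_points, dtype=float)
    d = np.einsum("ij,ij->i", diff, A)                          # signed distances
    t, *_ = np.linalg.lstsq(A, -d, rcond=None)                  # solve A t = -d
    return t
```

If the laser points have drifted by some vector from the model planes, the recovered `t` is the opposite of that drift, i.e. the correction to apply.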
Track following of Ξ-hyperons in nuclear emulsion for the E07 experiment
NASA Astrophysics Data System (ADS)
Mishina, Akihiro; Nakazawa, Kazuma; Hoshino, Kaoru; Itonaga, Kazunori; Yoshida, Junya; Than Tint, Khin; Kyaw Soe, Myint; Kinbara, Shinji; Itoh, Hiroki; Endo, Yoko; Kobayashi, Hidetaka; Umehara, Kaori; Yokoyama, Hiroyuki; Nakashima, Daisuke; J-PARC E07 Collaboration
2014-09-01
Events of double-Λ and twin single-Λ hypernuclei are very important for understanding the Λ-Λ and Ξ⁻-N interactions. We planned the E07 experiment to study their nuclear mass dependence with ten times higher statistics than before. In the experiment, about ten thousand Ξ⁻ hyperons stop at rest, ten times more than before. Tracks of this many Ξ⁻ hyperon candidates must be followed in nuclear emulsion plates up to their stopping points. To complete this job within one year, it was necessary to develop an automated track-following system. The key point for track following is track connection from plate to plate. To achieve this, we developed image processing methods; in particular, we applied pattern matching of K⁻ beam tracks for plate-to-plate connection. The position accuracy of this method was 1.4 ± 0.8 μm. If each track can be processed in about one minute per plate, all track following can be finished in one year.
[Professor DONG Gui-rong's experience for the treatment of peripheral facial paralysis].
Cao, Lian-Ying; Shen, Te-Li; Zhang, Wei; Chen, Si-Hui
2012-05-01
Professor DONG Gui-rong's theoretical principles and key manipulations for peripheral facial paralysis are introduced in detail from the perspectives of syndrome differentiation, timing, acupoint prescription and needling methods. For syndrome differentiation and timing, the professor emphasized choosing the treatment timing and following the symptoms, treating by stages; in addition, it is necessary to identify the cause and nature of the disease so as to treat tendons and muscles in combination. For acupoint prescription and needling methods, he proposed that acupoint selection should combine distal and local points, and made the best use of Baihui (GV 20) to regulate the whole yang qi; he also paid much attention to needling methods and staged treatment. In the late stage of peripheral facial paralysis, selecting Back-shu points on the basis of syndrome differentiation to regulate zang-fu function achieves a much better therapeutic effect.
NASA Astrophysics Data System (ADS)
Benítez, Hernán D.; Ibarra-Castanedo, Clemente; Bendada, AbdelHakim; Maldague, Xavier; Loaiza, Humberto; Caicedo, Eduardo
2008-01-01
It is well known that methods of thermographic non-destructive testing based on thermal contrast are strongly affected by non-uniform heating at the surface. Hence, the results obtained from these methods depend considerably on the chosen reference point. The differential absolute contrast (DAC) method was developed to eliminate the need to determine a reference point defining the thermal contrast with respect to an ideal sound area. Although very useful at early times, DAC accuracy decreases as the heat front approaches the sample's rear face. We propose a new DAC version that explicitly introduces the sample thickness using the thermal quadrupoles theory, and show that the new DAC's range of validity extends to long times while remaining valid at short times. This new contrast is used for defect quantification in composite, Plexiglas™ and aluminum samples.
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model, so image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighting coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computing times, graphics-processing-unit multithreading or an increased spacing of control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by this method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
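The reduction of deconvolution to Gaussian algebra can be sketched in 1-D: convolving a Gaussian basis function of width s with a Gaussian PSF of width p yields another Gaussian of width sqrt(s² + p²) (up to an amplitude factor, absorbed here into the weights). Deconvolution then amounts to fitting weights against the blurred basis and re-synthesizing with the sharp basis. All widths, spacings, and data below are made-up values, not the paper's.

```python
import numpy as np

x = np.linspace(-10, 10, 201)
centers = np.linspace(-8, 8, 9)   # 1-D control points (hypothetical spacing)
s, p = 1.0, 0.8                   # basis width and PSF width (hypothetical)

def gauss(x, c, w):
    return np.exp(-0.5 * ((x - c) / w) ** 2)

# Sharp basis and its Gaussian-blurred counterpart (width sqrt(s^2 + p^2)).
B_sharp = np.stack([gauss(x, c, s) for c in centers], axis=1)
B_blur = np.stack([gauss(x, c, np.hypot(s, p)) for c in centers], axis=1)

w_true = np.random.default_rng(0).uniform(0, 1, len(centers))
blurred = B_blur @ w_true                         # observed (blurred) signal

# Deconvolution = solving for control-point weights against the blurred basis.
w_est, *_ = np.linalg.lstsq(B_blur, blurred, rcond=None)
restored = B_sharp @ w_est                        # deblurred reconstruction
```

The 2-D image case is the same computation with 2-D Gaussian bases, which is where the GPU-multithreading or coarser control-point spacing pays off.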
The algorithm of fast image stitching based on multi-feature extraction
NASA Astrophysics Data System (ADS)
Yang, Chunde; Wu, Ge; Shi, Jing
2018-05-01
This paper proposes an improved image registration method combining Hu-invariant-moment contour information with feature point detection, aiming to solve problems in traditional image stitching algorithms such as a time-consuming feature point extraction process, overload of redundant invalid information, and inefficiency. First, the neighborhood of each pixel is used to extract contour information, and the Hu invariant moments are employed as a similarity measure so that SIFT feature points are extracted only in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve initial matching efficiency and reduce mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm fuses the mosaic images to realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
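Replacing the Euclidean distance with the Hellinger kernel for descriptor matching can be sketched with the well-known "RootSIFT" transformation: L1-normalise each descriptor and take element-wise square roots, after which ordinary Euclidean distance corresponds to the Hellinger kernel. The abstract does not detail the authors' exact implementation, so this is the generic technique:

```python
import math

def root_descriptor(desc):
    """L1-normalise a descriptor, then take element-wise square roots, so that
    Euclidean distance between transformed descriptors equals the
    Hellinger-kernel-induced distance."""
    s = sum(desc) or 1.0
    return [math.sqrt(v / s) for v in desc]

def hellinger_distance(d1, d2):
    """Hellinger-kernel distance between two non-negative descriptors."""
    r1, r2 = root_descriptor(d1), root_descriptor(d2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))
```

Because the transformation is applied once per descriptor, existing fast nearest-neighbour machinery for Euclidean distance can be reused unchanged.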
NASA Astrophysics Data System (ADS)
Xin, Meiting; Li, Bing; Yan, Xiao; Chen, Lei; Wei, Xiang
2018-02-01
A robust coarse-to-fine registration method based on the backpropagation (BP) neural network and shift window technology is proposed in this study. Specifically, there are three steps: coarse alignment between the model data and measured data, data simplification based on the BP neural network and point reservation in the contour region of point clouds, and fine registration with the reweighted iterative closest point algorithm. In the process of rough alignment, the initial rotation matrix and the translation vector between the two datasets are obtained. After performing subsequent simplification operations, the number of points can be reduced greatly. Therefore, the time and space complexity of the accurate registration can be significantly reduced. The experimental results show that the proposed method improves the computational efficiency without loss of accuracy.
Lifetime Prediction of IGBT in a STATCOM Using Modified-Graphical Rainflow Counting Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopi Reddy, Lakshmi Reddy; Tolbert, Leon M; Ozpineci, Burak
Rainflow algorithms are one of the best counting methods used in fatigue and failure analysis [17]. There have been many approaches to the rainflow algorithm, some proposing modifications. The Graphical Rainflow Method (GRM) was proposed recently with a claim of faster execution times [10]. However, the steps of the graphical rainflow algorithm, when implemented, do not generate the same output as the four-point or ASTM standard algorithm. A modified graphical method is presented and discussed in this paper to overcome the shortcomings of the graphical rainflow algorithm. A fast rainflow algorithm based on the four-point algorithm, but using point comparison rather than range comparison, is also presented. A comparison of the performance of the common rainflow algorithms [6-10], including the proposed methods, in terms of execution time, memory use, efficiency, complexity, and load sequences is presented. Finally, the rainflow algorithm is applied to temperature data of an IGBT in assessing the lifetime of a STATCOM operating for power factor correction of the load. From the 5-minute load profile data available, the lifetime is estimated to be 3.4 years.
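The core loop of the four-point rainflow algorithm referenced above can be sketched as follows; this is a generic sketch that ignores residue closure, not the paper's modified graphical or point-comparison variants:

```python
def rainflow_four_point(turning_points):
    """Four-point rainflow counting over a sequence of turning points.
    Returns extracted full cycles as (range, mean); the leftover residue
    (unclosed half cycles) is not processed here."""
    stack, cycles = [], []
    for tp in turning_points:
        stack.append(tp)
        while len(stack) >= 4:
            r1 = abs(stack[-3] - stack[-4])   # outer range before
            r2 = abs(stack[-2] - stack[-3])   # inner range
            r3 = abs(stack[-1] - stack[-2])   # outer range after
            if r2 <= r1 and r2 <= r3:
                # The inner pair closes a full cycle: count it and remove it.
                a, b = stack[-3], stack[-2]
                cycles.append((abs(a - b), (a + b) / 2.0))
                del stack[-3:-1]
            else:
                break
    return cycles
```

Run on the classic turning-point sequence [-2, 1, -3, 5, -1, 3, -4, 4, -2], this extracts one full cycle of range 4 (from -1 to 3), with the remaining points left as residue, matching the ASTM-style hand count. Applied to IGBT junction-temperature histories, the counted cycle ranges and means feed a damage model (e.g. Miner's rule) for lifetime estimation.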
Optical aberration correction for simple lenses via sparse representation
NASA Astrophysics Data System (ADS)
Cui, Jinlin; Huang, Wei
2018-04-01
Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and easy to manufacture. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions calibrated at different depths are used to restore images in a short time; this applies generally to nonblind deconvolution methods, for which the large number of point spread functions would otherwise cause excessive processing time. The optical design software CODE V is applied to examine the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms the traditional methods, and the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained with CODE V can be used to process real images from a single-lens camera, which provides a convenient and accurate way to obtain the point spread functions of single-lens cameras.
Reflective and refractive objects for mixed reality.
Knecht, Martin; Traxler, Christoph; Winklhofer, Christoph; Wimmer, Michael
2013-04-01
In this paper, we present a novel rendering method which integrates reflective or refractive objects into a differential instant radiosity (DIR) framework usable for mixed-reality (MR) applications. These objects are special from a light-interaction point of view, as they reflect and refract incident rays and may therefore cause high-frequency lighting effects known as caustics. Approximating these high-frequency lighting effects with instant-radiosity (IR) methods would require a large number of virtual point lights (VPLs) and is therefore not desirable under real-time constraints. Instead, our approach combines differential instant radiosity with three other methods. One method handles reflections more accurately than simple cubemaps by using impostors. Another method calculates two refractions in real time, and the third method uses small quads to create caustic effects. Our proposed method replaces the parts of light paths that belong to reflective or refractive objects using these three methods and thus integrates tightly into DIR. In contrast to previous methods which introduce reflective or refractive objects into MR scenarios, our method produces caustics that also emit additional indirect light. The method runs at real-time frame rates, and the results show that reflective and refractive objects with caustics improve the overall impression of MR scenarios.
Forecast Method of Solar Irradiance with Just-In-Time Modeling
NASA Astrophysics Data System (ADS)
Suzuki, Takanobu; Goto, Yusuke; Terazono, Takahiro; Wakao, Shinji; Oozeki, Takashi
PV power output mainly depends on the solar irradiance, which is affected by various meteorological factors, so predicting future solar irradiance is required for the efficient operation of PV systems. In this paper, we develop a novel approach to solar irradiance forecasting that combines a black-box model (JIT modeling) with a physical model (GPV data). We investigate the predictive accuracy of solar irradiance over the wide controlled area of each electric power company by utilizing measured data from 44 observation points throughout Japan offered by JMA and 64 points around Kanto offered by NEDO. Finally, we propose a method to apply the solar irradiance forecast to points for which compiling a database is difficult, and we consider the influence of different GPV default times on the solar irradiance prediction.
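Just-in-time (JIT) modeling defers model building until a query arrives, fitting a local model on the nearest stored samples. A minimal one-dimensional sketch; the distance-weighted k-nearest-neighbour local model is an assumption, since the abstract does not state the local model form:

```python
def jit_predict(database, query, k=3):
    """Just-in-time (lazy) prediction: no global model is fitted.
    For each query the k nearest stored (x, y) samples are retrieved
    and a distance-weighted average of their outputs is returned."""
    neighbours = sorted(database, key=lambda s: abs(s[0] - query))[:k]
    weights = [1.0 / (abs(x - query) + 1e-9) for x, _ in neighbours]
    return sum(w * y for w, (_, y) in zip(weights, neighbours)) / sum(weights)
```

In a forecasting setting the database rows would hold past meteorological inputs and the observed irradiance, refreshed as new measurements arrive.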
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
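The classical construction this work replaces is the orthogonal projector onto the tangent space of the constraint, P = I - J^T (J J^T)^{-1} J, which is exactly what becomes singular when the constraint gradients are linearly dependent. A sketch of one projected-gradient step under the regularity assumption (the paper's singularity-free projection matrix is not given in the abstract):

```python
import numpy as np

def tangent_step(x, grad_f, jac_h, step=0.1):
    """One projected-gradient step for min f(x) s.t. h(x) = 0.
    P projects the gradient onto the tangent space of the constraint,
    so the step preserves h(x) = 0 to first order. Requires J = jac_h(x)
    to have full row rank (the standard regularity assumption)."""
    J = jac_h(x)
    P = np.eye(len(x)) - J.T @ np.linalg.solve(J @ J.T, J)
    return x - step * P @ grad_f(x)
```

For a linear constraint the step stays exactly feasible; for nonlinear h, feasibility is only first-order and a restoration step is normally added.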
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
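A greedy sketch of D-optimal time-point selection following the stated idea: maximize the determinant of the information matrix built from a sensitivity matrix. The greedy strategy and the small ridge term `reg`, which keeps the determinant well defined while fewer points than parameters are selected, are implementation assumptions, not details from the paper:

```python
import numpy as np

def d_optimal_points(S, n_keep, reg=1e-6):
    """Greedy D-optimal subset selection of sampling time points.
    S is the (n_times x n_params) sensitivity matrix; points are added
    one at a time so that det(S_sel^T S_sel + reg*I) grows fastest."""
    n_times, n_params = S.shape
    chosen = []
    for _ in range(n_keep):
        best_i, best_det = None, -np.inf
        for i in range(n_times):
            if i in chosen:
                continue
            sub = S[chosen + [i], :]
            d = np.linalg.det(sub.T @ sub + reg * np.eye(n_params))
            if d > best_det:
                best_i, best_det = i, d
        chosen.append(best_i)
    return sorted(chosen)
```

Greedy selection is not guaranteed globally optimal, but it captures why a handful of well-placed time points can carry most of the information of the full decay curve.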
NASA Astrophysics Data System (ADS)
Salançon, Evelyne; Degiovanni, Alain; Lapena, Laurent; Morin, Roger
2018-04-01
An event-counting method using a two-microchannel plate stack in a low-energy electron point projection microscope is implemented. A detector spatial resolution of 15 μm, i.e., the distance between first-neighbor microchannels, is demonstrated. This yields a sevenfold improvement in microscope resolution. Compared to previous work with neutrons [Tremsin et al., Nucl. Instrum. Methods Phys. Res., Sect. A 592, 374 (2008)], the large number of detection events achieved with electrons shows that the local response of the detector is mainly governed by the angle between the hexagonal structures of the two microchannel plates. Using this method in point projection microscopy offers the prospect of working with a greater source-object distance (350 nm instead of 50 nm), advancing toward atomic resolution.
Error Mitigation of Point-to-Point Communication for Fault-Tolerant Computing
NASA Technical Reports Server (NTRS)
Akamine, Robert L.; Hodson, Robert F.; LaMeres, Brock J.; Ray, Robert E.
2011-01-01
Fault tolerant systems require the ability to detect and recover from physical damage caused by the hardware's environment, faulty connectors, and system degradation over time. This ability applies to military, space, and industrial computing applications. The integrity of Point-to-Point (P2P) communication, between two microcontrollers for example, is an essential part of fault tolerant computing systems. In this paper, different methods of fault detection and recovery are presented and analyzed.
Nutaro, James; Kuruganti, Teja
2017-02-24
Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
Accessing the exceptional points of parity-time symmetric acoustics
Shi, Chengzhi; Dubois, Marc; Chen, Yun; Cheng, Lei; Ramezani, Hamidreza; Wang, Yuan; Zhang, Xiang
2016-01-01
Parity-time (PT) symmetric systems experience a phase transition between the PT exact and broken phases at an exceptional point. These PT phase transitions contribute significantly to the design of single-mode lasers, coherent perfect absorbers, isolators, and diodes. However, such exceptional points are extremely difficult to access in practice because of the dispersive behaviour of most loss and gain materials required in PT symmetric systems. Here we introduce a method to systematically tame these exceptional points and control PT phases. Our experimental demonstration hinges on an active acoustic element that realizes a complex-valued potential and simultaneously controls the multiple interference in the structure. The manipulation of exceptional points offers new routes to broaden applications for PT symmetric physics in acoustics, optics, microwaves and electronics, which are essential for sensing, communication and imaging. PMID:27025443
Sun, Chenglu; Li, Wei; Chen, Wei
2017-01-01
To extract the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat which utilizes a flexible pressure sensor array, printed electrodes, and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signals of all the pressure sensors embedded in the smart mat. To reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. With the CS-based method, the sampling time can be reduced by 40% by acquiring only about one-third of the original sampling points. Several experiments were carried out to validate the performance of the CS-based method. While fewer than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrate that the novel method fits the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array. PMID:28796188
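The abstract does not name its CS reconstruction algorithm; as one hypothetical instance of the same principle, Orthogonal Matching Pursuit recovers a sparse vector from a reduced set of linear samples:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: recover a sparse x from y = A @ x.
    Greedily pick the column most correlated with the residual, then
    re-fit all chosen coefficients by least squares."""
    residual, support = y.astype(float).copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

In a CS acquisition, A would be the (wide) measurement matrix mapping the full sensor grid to the few sampled readings, and sparsity would reflect the compressibility of the pressure image in a suitable basis.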
2013-01-01
Background Mobile technology offers the potential to deliver health-related interventions to individuals who would not otherwise present for in-person treatment. Text messaging (short message service, SMS), being the most ubiquitous form of mobile communication, is a promising method for reaching the most individuals. Objective The goal of the present study was to evaluate the feasibility and preliminary efficacy of a smoking cessation intervention program delivered through text messaging. Methods Adult participants (N=60, age range 18-52 years) took part in a single individual smoking cessation counseling session, and were then randomly assigned to receive either daily non-smoking related text messages (control condition) or the TXT-2-Quit (TXT) intervention. TXT consisted of automated smoking cessation messages tailored to the individual's stage of smoking cessation, specialized messages provided on demand based on user requests for additional support, and a peer-to-peer social support network. Generalized estimating equation analysis was used to assess the primary outcome (7-day point-prevalence abstinence) using a 2 (treatment groups)×3 (time points) repeated measures design across three time points: 8 weeks, 3 months, and 6 months. Results Smoking cessation results showed an overall significant group difference in 7-day point-prevalence abstinence across all follow-up time points, with higher odds of abstinence for the TXT group compared to the Mojo control group (OR=4.52, 95% CI=1.24, 16.53). However, individual comparisons at each time point did not show significant between-group differences, likely due to reduced statistical power. Intervention feasibility was greatly improved by switching from traditional face-to-face recruitment methods (4.7% yield) to an online/remote strategy (41.7% yield).
Conclusions Although this study was designed to develop and provide initial testing of the TXT-2-Quit system, these initial findings provide promising evidence that a text-based intervention can be successfully implemented with a diverse group of adult smokers. Trial Registration ClinicalTrials.gov: NCT01166464; http://clinicaltrials.gov/ct2/show/NCT01166464 (Archived by WebCite at http://www.webcitation.org/6IOE8XdE0). PMID:25098502
Fink, Ericka L; Berger, Rachel P; Clark, Robert SB; Watson, R. Scott; Angus, Derek C; Panigrahy, Ashok; Richichi, Rudolph; Callaway, Clifton W; Bell, Michael J; Mondello, Stefania; Hayes, Ronald L.; Kochanek, Patrick M
2016-01-01
Introduction Brain injury is the leading cause of morbidity and death following pediatric cardiac arrest. Serum biomarkers of brain injury may assist in outcome prognostication. The objectives of this study were to evaluate the properties of serum ubiquitin carboxyl-terminal esterase-L1 (UCH-L1) and glial fibrillary acidic protein (GFAP) to classify outcome in pediatric cardiac arrest. Methods Single center prospective study. Serum biomarkers were measured at 2 time points during the initial 72 h in children after cardiac arrest (n=19) and once in healthy children (controls, n=43). We recorded demographics and details of the cardiac arrest and resuscitation. We determined the associations between serum biomarker concentrations and Pediatric Cerebral Performance Category (PCPC) at 6 months (favorable (PCPC 1–3) or unfavorable (PCPC 4–6)). Results The initial assessment (time point 1) occurred at a median (IQR) of 10.5 (5.5–17.0) h and the second assessment (time point 2) at 59.0 (54.5–65.0) h post-cardiac arrest. Serum UCH-L1 was higher among children following cardiac arrest than among controls at both time points (p<0.05). Serum GFAP in subjects with unfavorable outcome was higher at time point 2 than in controls (p<0.05). Serum UCH-L1 at time point 1 (AUC 0.782) and both UCH-L1 and GFAP at time point 2 had good classification accuracy for outcome (AUC 0.822 and 0.796), p<0.05 for all. Conclusion Preliminary data suggest that serum UCH-L1 and GFAP may be of use to prognosticate outcome after pediatric cardiac arrest at clinically-relevant time points and should be validated prospectively. PMID:26855294
Micro-vibration detection with heterodyne holography based on time-averaged method
NASA Astrophysics Data System (ADS)
Qin, XiaoDong; Pan, Feng; Chen, ZongHui; Hou, XueQin; Xiao, Wen
2017-02-01
We propose a micro-vibration detection method by introducing heterodyne interferometry to time-averaged holography. This method compensates for the deficiency of time-average holography in quantitative measurements and widens its range of application effectively. Acousto-optic modulators are used to modulate the frequencies of the reference beam and the object beam. Accurate detection of the maximum amplitude of each point in the vibration plane is performed by altering the frequency difference of both beams. The range of amplitude detection of plane vibration is extended. In the stable vibration mode, the distribution of the maximum amplitude of each point is measured and the fitted curves are plotted. Hence the plane vibration mode of the object is demonstrated intuitively and detected quantitatively. We analyzed the method in theory and built an experimental system with a sine signal as the excitation source and a typical piezoelectric ceramic plate as the target. The experimental results indicate that, within a certain error range, the detected vibration mode agrees with the intrinsic vibration characteristics of the object, thus proving the validity of this method.
Evaluation of Pseudo-Haptic Interactions with Soft Objects in Virtual Environments.
Li, Min; Sareh, Sina; Xu, Guanghua; Ridzuan, Maisarah Binti; Luo, Shan; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar
2016-01-01
This paper proposes a pseudo-haptic feedback method conveying simulated soft surface stiffness information through a visual interface. The method exploits a combination of two feedback techniques, namely visual feedback of soft surface deformation and control of the indenter avatar speed, to convey stiffness information of a simulated surface of a soft object in virtual environments. The proposed method was effective in distinguishing different sizes of virtual hard nodules integrated into the simulated soft bodies. To further improve the interactive experience, the approach was extended to create a multi-point pseudo-haptic feedback system. A comparison with a tablet computer incorporating vibration feedback was conducted, with (a) nodule detection sensitivity and (b) elapsed time as performance indicators in hard nodule detection experiments. The multi-point pseudo-haptic interaction is shown to be more time-efficient than the single-point pseudo-haptic interaction, and it performs similarly well to the vibration-based feedback method on both elapsed time and nodule detection sensitivity. This proves that the proposed method can convey detailed haptic information, even for subtle virtual environmental tasks, using either a computer mouse or a pressure-sensitive device as the input device. The pseudo-haptic feedback method provides an opportunity for low-cost simulation of objects with soft surfaces and hard inclusions, as occur, for example, in ever more realistic video games with increasing emphasis on interaction with the physical environment, and in minimally invasive surgery on soft-tissue organs with embedded cancer nodules.
Hence, the method can be used in many low-budget applications where haptic sensation is required, such as surgeon training or video games, either using desktop computers or portable devices, showing reasonably high fidelity in conveying stiffness perception to the user.
NASA Astrophysics Data System (ADS)
Saberi, Elaheh; Reza Hejazi, S.
2018-02-01
In the present paper, Lie point symmetries of the time-fractional generalized Hirota-Satsuma coupled KdV (HS-cKdV) system based on the Riemann-Liouville derivative are obtained. Using the derived Lie point symmetries, we obtain similarity reductions and conservation laws of the considered system. Finally, some analytic solutions are furnished by means of the invariant subspace method in the Caputo sense.
Yu, Yifei; Luo, Linqing; Li, Bo; Guo, Linfeng; Yan, Jize; Soga, Kenichi
2015-10-01
The measured distance error caused by double peaks in the BOTDRs (Brillouin optical time domain reflectometers) system is a kind of Brillouin scattering spectrum (BSS) deformation, discussed and simulated for the first time in the paper, to the best of the authors' knowledge. Double peak, as a kind of Brillouin spectrum deformation, is important in the enhancement of spatial resolution, measurement accuracy, and crack detection. Due to the variances of the peak powers of the BSS along the fiber, the measured starting point of a step-shape frequency transition region is shifted and results in distance errors. Zero-padded short-time-Fourier-transform (STFT) can restore the transition-induced double peaks in the asymmetric and deformed BSS, thus offering more accurate and quicker measurements than the conventional Lorentz-fitting method. The recovering method based on the double-peak detection and corresponding BSS deformation can be applied to calculate the real starting point, which can improve the distance accuracy of the STFT-based BOTDR system.
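The zero-padded STFT idea, a double peak appearing in a window that straddles a frequency transition, can be illustrated on a synthetic two-tone signal. The sampling rate, window length, padding factor, and tone frequencies here are invented for illustration and unrelated to the BOTDR system's actual parameters:

```python
import numpy as np

def window_spectrum(signal, start, width, pad_factor=4):
    """Magnitude spectrum of one STFT window, zero-padded to give a
    frequency grid pad_factor times finer than the window alone allows."""
    seg = signal[start:start + width] * np.hanning(width)
    return np.abs(np.fft.rfft(seg, n=pad_factor * width))

# synthetic record: frequency steps from 100 Hz to 300 Hz at t = 0.5 s
fs = 1000.0
t = np.arange(1000) / fs
sig = np.where(t < 0.5, np.sin(2 * np.pi * 100 * t),
               np.sin(2 * np.pi * 300 * t))

# a window straddling the transition shows both peaks (the "double peak")
spec = window_spectrum(sig, 450, 100)
```

With 4x zero padding the bin spacing is fs/400 = 2.5 Hz, so the two tones land near bins 40 and 120; tracking where the double peak first appears locates the transition more precisely than a single-peak Lorentzian fit.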
Iterative algorithms for computing the feedback Nash equilibrium point for positive systems
NASA Astrophysics Data System (ADS)
Ivanov, I.; Imsland, Lars; Bogdanova, B.
2017-03-01
The paper studies N-player linear quadratic differential games on an infinite time horizon with deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) in order to compute the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive the sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples so as to illustrate the performance of both of the algorithms.
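For intuition, a scalar single-player analogue of the Newton (Kleinman-type) iteration for an algebraic Riccati equation; the paper's coupled N-player generalized Riccati equations are matrix-valued and considerably more involved:

```python
def care_newton(a, b, q, r, x0, tol=1e-12, max_iter=50):
    """Newton iteration for the scalar continuous-time ARE
    2*a*x - (b*b/r)*x**2 + q = 0. The initial guess x0 must be
    stabilizing, i.e. the closed loop a - (b*b/r)*x0 must be < 0."""
    s = b * b / r
    x = x0
    for _ in range(max_iter):
        f = 2 * a * x - s * x * x + q
        x_new = x - f / (2 * (a - s * x))  # Newton step; denominator = f'(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Each Newton step is the scalar counterpart of solving a Lyapunov equation at the current closed-loop system, which is where the quadratic convergence of the matrix method comes from.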
Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images
Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun
2013-01-01
This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves rendering speed by over three times compared with the conventional algorithm, while image quality is well preserved. PMID:23424608
How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods
Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José
2015-01-01
The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared. PMID:26413547
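The point-counting idea reduces to overlaying a regular test grid and taking the hit fraction; a minimal sketch on a binary mask, where the grid spacing and binary-mask input are assumptions:

```python
def surface_density(mask, step):
    """Point-counting estimate of surface density: overlay a regular grid
    of test points with spacing `step` on a binary image `mask` (1 = the
    structure, e.g. smooth muscle); the fraction of grid points landing
    on the structure estimates its areal (surface) density."""
    hits = total = 0
    for i in range(0, len(mask), step):
        for j in range(0, len(mask[0]), step):
            total += 1
            hits += mask[i][j]
    return hits / total
```

Coarser grids trade precision for counting time, which is exactly the cost that motivates the color-based segmentation alternative compared in the abstract.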
On-line noninvasive one-point measurements of pulse wave velocity.
Harada, Akimitsu; Okada, Takashi; Niki, Kiyomi; Chang, Dehua; Sugawara, Motoaki
2002-12-01
Pulse wave velocity (PWV) is a basic parameter in the dynamics of pressure and flow waves traveling in arteries. Conventional on-line methods of measuring PWV have mainly been based on "two-point" measurements, i.e., measurements of the time of travel of the wave over a known distance. This paper describes two methods by which on-line "one-point" measurements can be made, and compares the results obtained by the two methods. The principle of one method is to measure blood pressure and velocity at a point, and use the water-hammer equation for forward traveling waves. The principle of the other method is to derive PWV from the stiffness parameter of the artery. Both methods were realized by using an ultrasonic system which we specially developed for noninvasive measurements of wave intensity. We applied the methods to the common carotid artery in 13 normal humans. The regression line of the PWV (m/s) obtained by the former method on the PWV (m/s) obtained by the latter method was y = 1.03x - 0.899 (R(2) = 0.83). Although regional PWV in the human carotid artery has not been reported so far, the correlation between the PWVs obtained by the present two methods was so high that we are convinced of the validity of these methods.
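The two one-point relations described can be written down directly; the SI units and the default blood density are assumptions added for concreteness:

```python
import math

def pwv_water_hammer(dP, dU, rho=1050.0):
    """One-point PWV from the water-hammer relation for forward waves,
    dP = rho * c * dU, using early-systolic increments of pressure dP (Pa)
    and velocity dU (m/s) when reflections are assumed negligible;
    rho is blood density (kg/m^3)."""
    return dP / (rho * dU)

def pwv_stiffness(beta, p_dias, rho=1050.0):
    """One-point PWV from the arterial stiffness parameter beta and
    diastolic pressure p_dias (Pa): c = sqrt(beta * p_dias / (2 * rho))."""
    return math.sqrt(beta * p_dias / (2.0 * rho))
```

Agreement between the two estimates on the same beat, as in the reported regression, is a consistency check that needs no second measurement site.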
Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif
2017-05-01
Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect arrival time of these fiducial points, such as P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated based on this arrival time, and are used as ECG features to generate random BSes with low latency. According to our analysis on real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with the schemes that solely rely on IPIs to generate BSes, this MFBSG algorithm uses five feature values from one heart beat cycle, and can be up to five times faster than the solely IPI-based methods. So, it achieves a design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
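The abstract does not give the exact bit-extraction rule; one common sketch, shown here as an assumed illustration, keeps the least-significant bits of each quantized fiducial interval, which are the most variable (random-looking) part:

```python
def bits_from_intervals(intervals_ms, bits_per_interval=4):
    """Turn fiducial-point intervals (e.g. RR, RQ, RS, RP, RT in ms) into
    a binary sequence by keeping the bits_per_interval least-significant
    bits of each rounded interval."""
    out = []
    for iv in intervals_ms:
        q = int(round(iv)) % (1 << bits_per_interval)
        out.extend((q >> k) & 1 for k in reversed(range(bits_per_interval)))
    return out
```

With five intervals per heartbeat, each beat contributes five times as many bits as an IPI-only scheme, which is the latency advantage the paper claims.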
Motion illusions in optical art presented for long durations are temporally distorted.
Nather, Francisco Carlos; Mecca, Fernando Figueiredo; Bueno, José Lino Oliveira
2013-01-01
Static figurative images implying human body movements observed for shorter and longer durations affect the perception of time. This study examined whether images of static geometric shapes would also affect the perception of time. Undergraduate participants observed two Optical Art paintings by Bridget Riley for 9 or 36 s (groups G9 and G36, respectively). Paintings implying different intensities of movement (2.0- and 6.0-point stimuli) were randomly presented. The prospective paradigm with the reproduction method was used to record time estimations. Data analysis did not show time distortions in the G9 group. In the G36 group the paintings were perceived differently: the duration of the 2.0-point painting was estimated as shorter than that of the 6.0-point painting. Also in G36, the 2.0-point painting was underestimated in comparison with the actual exposure time. Motion illusions in static images affected time estimation according to the attention given by the observer to the complexity of movement, probably leading to changes in the storage velocity of internal clock pulses.
Smoothed Particle Hydrodynamics: Applications Within DSTO
2006-10-01
Most SPH codes use either an improved Euler method (a mid-point predictor-corrector method) [50] or a leapfrog predictor-corrector algorithm for ... in the next section we used the predictor-corrector leapfrog algorithm for time stepping. If we write the set of equations describing the change in ... predictor-corrector or leapfrog method is used when solving the equations. Monaghan has also noted [53] that, with a correctly chosen time step, total ...
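The leapfrog time stepping mentioned in the snippet, in its kick-drift-kick form; this is a generic sketch of the integrator, not code from the DSTO SPH implementation:

```python
def leapfrog(x, v, accel, dt, steps):
    """Kick-drift-kick leapfrog integrator: second-order accurate and
    symplectic, so energy errors stay bounded instead of drifting."""
    for _ in range(steps):
        v += 0.5 * dt * accel(x)  # half kick
        x += dt * v               # drift
        v += 0.5 * dt * accel(x)  # half kick
    return x, v
```

Its bounded long-term energy error (for a correctly chosen time step) is the property Monaghan's remark alludes to, and the reason SPH codes favour it over plain Euler-family schemes.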
TaqMan based real time PCR assay targeting EML4-ALK fusion transcripts in NSCLC.
Robesova, Blanka; Bajerova, Monika; Liskova, Kvetoslava; Skrickova, Jana; Tomiskova, Marcela; Pospisilova, Sarka; Mayer, Jiri; Dvorakova, Dana
2014-07-01
Lung cancer with the ALK rearrangement constitutes only a small fraction of patients with non-small cell lung cancer (NSCLC). However, in the era of molecular-targeted therapy, efficient patient selection is crucial for successful treatment. In this context, an effective method for EML4-ALK detection is necessary. We developed a new, highly sensitive, variant-specific TaqMan based real time PCR assay applicable to RNA from formalin-fixed paraffin-embedded tissue (FFPE). This assay was used to analyze the EML4-ALK gene in 96 non-selected NSCLC specimens and compared with two other methods (end-point PCR and break-apart FISH). EML4-ALK was detected in 33/96 (34%) specimens using variant-specific real time PCR, but in only 23/96 (24%) using end-point PCR. All real time PCR positive samples were confirmed by direct sequencing. A total of 46 specimens were subsequently analyzed by all three detection methods. Using variant-specific real time PCR we identified the EML4-ALK transcript in 17/46 (37%) specimens, using end-point PCR in 13/46 (28%) specimens, and positive ALK rearrangement by FISH was detected in 8/46 (17.4%) specimens. Moreover, using variant-specific real time PCR, 5 specimens showed more than one EML4-ALK variant simultaneously (in 2 cases variants 1+3a+3b, in 2 specimens variants 1+3a, and in 1 specimen variant 1+3b). In one of the 96 cases, both the EML4-ALK fusion gene and an EGFR mutation were detected. All simultaneous genetic variants were confirmed using end-point PCR and direct sequencing. Our variant-specific real time PCR assay is highly sensitive, fast, financially acceptable, and applicable to FFPE, and seems to be a valuable tool for the rapid prescreening of NSCLC patients in clinical practice, so that most patients who could benefit from targeted therapy can be identified. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Dorn, Lorah D.; Sontag-Padilla, Lisa M.; Pabst, Stephanie; Tissot, Abbigail; Susman, Elizabeth J.
2013-01-01
Age at menarche is critical in research and clinical settings, yet there is a dearth of studies examining its reliability in adolescents. We examined age at menarche during adolescence, specifically, (a) average method reliability across 3 years, (b) test-retest reliability between time points and methods, (c) intraindividual variability of…
NASA Astrophysics Data System (ADS)
De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan
2016-11-01
A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, including empirical and synthetic ones. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and recall of the detected change points, we find that the method based on a degree-corrected SBM has better recall properties than the other dedicated methods, especially for sparse networks and smaller sliding time-window widths.
Patel, Nitesh V; Sundararajan, Sri; Keller, Irwin; Danish, Shabbar
2018-01-01
Objective: Magnetic resonance (MR)-guided stereotactic laser amygdalohippocampectomy is a minimally invasive procedure for the treatment of refractory epilepsy in patients with mesial temporal sclerosis. Limited data exist on post-ablation volumetric trends associated with the procedure. Methods: 10 patients with mesial temporal sclerosis underwent MR-guided stereotactic laser amygdalohippocampectomy. Three independent raters computed ablation volumes at the following time points: pre-ablation (PreA), immediate post-ablation (IPA), 24 hours post-ablation (24PA), first follow-up post-ablation (FPA), and greater than three months follow-up post-ablation (>3MPA), using OsiriX DICOM Viewer (Pixmeo, Bernex, Switzerland). Statistical trends in post-ablation volumes were determined for the time points. Results: MR-guided stereotactic laser amygdalohippocampectomy produces a rapid rise and distinct peak in post-ablation volume immediately following the procedure. IPA volumes are significantly higher than at all other time points. Comparing individual time points within each rater's dataset (intra-rater), a significant difference was seen between the IPA time point and all others. There was no statistical difference between the 24PA, FPA, and >3MPA time points. A correlation analysis demonstrated the strongest correlations at the 24PA (r=0.97), FPA (r=0.95), and >3MPA time points (r=0.99), with a weaker correlation at IPA (r=0.92). Conclusion: MR-guided stereotactic laser amygdalohippocampectomy produces a maximal increase in post-ablation volume immediately following the procedure, which decreases and stabilizes at 24 hours post-procedure and beyond three months follow-up. Based on the correlation analysis, the lower inter-rater reliability at the IPA time point suggests it may be less accurate to assess volume at this time point. We recommend that post-ablation volume assessments be made at least 24 hours after stereotactic laser ablation of the amygdalohippocampal complex (SLAH).
Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng
2017-01-01
Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify the chromatographic peaks accurately. But the reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns, due to the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative and simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated on two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust across different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
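The two-point linear calibration at the heart of LCTRS can be sketched as follows. This is a minimal illustration of the idea (a straight line through the two reference substances maps standard retention times to the target column), not the validated pharmacopoeial procedure; the function name and the example retention times are hypothetical.

```python
def lctrs_predict(std_ref, meas_ref, std_tR):
    """Predict retention time on a target column by linear calibration.

    std_ref:  (tR_A, tR_B) standard retention times of the two reference
              substances on the reference column.
    meas_ref: (tR_A', tR_B') retention times of the same two substances
              measured on the target column.
    std_tR:   standard retention time of the compound to predict.
    """
    (a1, a2), (b1, b2) = std_ref, meas_ref
    slope = (b2 - b1) / (a2 - a1)       # line through the two references
    intercept = b1 - slope * a1
    return slope * std_tR + intercept   # predicted tR on the target column
```

For example, if the two references elute at 10.0 and 20.0 min on the reference column but at 11.0 and 22.0 min on the target column, a compound with a standard tR of 15.0 min is predicted at 16.5 min.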
Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve
NASA Astrophysics Data System (ADS)
Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.
2009-04-01
The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time-consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g., investigation of unsaturated hydraulic conductivity and solute transport, in this study we attempt to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for case 1 (1.63) and case 2 (1.33) than the Cresswell and Paydar (1996) method.
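A minimal sketch of two-point estimation for the Tyler and Wheatcraft (1990) fractal model, theta(psi) = theta_s * (psi_a / psi)^(3-D). It assumes the saturated water content theta_s is known; the function names are illustrative, not the authors' code.

```python
import math

def fit_tyler_wheatcraft(p1, p2, theta_s):
    """Estimate fractal dimension D and air-entry value psi_a of the
    Tyler & Wheatcraft (1990) model from two measured points
    (matric potential psi, water content theta)."""
    (psi1, th1), (psi2, th2) = p1, p2
    # log(th1/th2) = (3-D) * log(psi2/psi1)  =>  solve for D
    D = 3.0 + math.log(th1 / th2) / math.log(psi1 / psi2)
    # th1 = theta_s * (psi_a/psi1)**(3-D)    =>  solve for psi_a
    psi_a = psi1 * (th1 / theta_s) ** (1.0 / (3.0 - D))
    return D, psi_a

def theta(psi, D, psi_a, theta_s):
    """Water content at matric potential psi under the fitted model."""
    return theta_s if psi <= psi_a else theta_s * (psi_a / psi) ** (3.0 - D)
```

With synthetic data generated from D = 2.5, psi_a = 1 and theta_s = 0.5, the two-point fit at 33 and 1500 kPa recovers both parameters exactly.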
A Modeling Method of Fluttering Leaves Based on Point Cloud
NASA Astrophysics Data System (ADS)
Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.
2017-09-01
Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaf model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
Gambling scores for earthquake predictions and forecasts
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang
2010-04-01
This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecast scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
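Under a fair rule, a bet of r reputation points on an event to which the reference model assigns probability p pays r(1-p)/p on success and forfeits r on failure, so the forecaster's expected gain under the reference model is zero. The following is an illustrative sketch of that discrete bookkeeping, not the paper's exact formulation:

```python
def gambling_score(bets):
    """Total reputation change over a sequence of bets.

    Each bet is (r, p, success): r points staked, p the reference-model
    probability of the predicted event, success the observed outcome.
    Fair rule: a win pays r*(1-p)/p, a loss forfeits the stake, so the
    expected gain under the reference model is
    p * r*(1-p)/p - (1-p) * r = 0.
    """
    score = 0.0
    for r, p, success in bets:
        score += r * (1.0 - p) / p if success else -r
    return score
```

Note how risk is compensated: a correct prediction of an event the reference model deems unlikely (small p) pays far more than one the reference model already expects.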
Chirp-Z analysis for sol-gel transition monitoring.
Martinez, Loïc; Caplain, Emmanuel; Serfaty, Stéphane; Griesmar, Pascal; Gouedard, Gérard; Gindre, Marcel
2004-04-01
Gelation is a complex reaction that transforms a liquid medium into a solid one: the gel. In the gel state, some gel materials (DMAP) have the singular property of ringing in an audible frequency range when a pulse is applied. Before the gelation point, no transmission of slow waves is observed; after the gelation point, the speed of sound in the gel rapidly increases from 0.1 to 10 m/s. The time evolution of the speed of sound can be measured, in the frequency domain, by following the frequency spacing of the resonance peaks using the Synchronous Detection (SD) measurement method. Unfortunately, due to a constant frequency sampling rate, the relative error for low speeds (0.1 m/s) is 100%. In order to maintain a low, constant relative error over the whole range of the speed's time evolution, the Chirp-Z Transform (CZT) is used. This operation transforms a time-variant signal into a time-invariant one using only a time-dependent stretching factor (S). In the frequency domain, the CZT enables us to stretch each spectrum collected from the time signals. Blind identification of the S factor gives the complete time-evolution law of the speed of sound. Moreover, this method shows that the frequency bandwidth follows the same time law. These results point out that the minimum wavelength stays constant and depends only on the gel.
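The underlying speed estimate from resonance-peak spacing can be sketched with the standard resonator relation c = 2 * L * Δf, assuming a simple one-dimensional resonator of thickness L. This is an illustrative sketch of that relation only, not the authors' CZT implementation; the function name and numbers are assumptions.

```python
def speed_from_resonance_spacing(peak_freqs, length_m):
    """Estimate the speed of sound from the frequency spacing of
    resonance peaks, using the resonator relation c = 2 * L * df.

    peak_freqs: sorted resonance peak frequencies in Hz.
    length_m:   resonator (sample) thickness in metres.
    """
    spacings = [f2 - f1 for f1, f2 in zip(peak_freqs, peak_freqs[1:])]
    mean_df = sum(spacings) / len(spacings)  # average peak spacing
    return 2.0 * length_m * mean_df
```

With a constant frequency sampling rate, the uncertainty in each spacing is fixed, so the relative error explodes at low speeds (small Δf), which is the motivation for the CZT-based stretching described above.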
Modeling seasonal detection patterns for burrowing owl surveys
Quresh S. Latif; Kathleen D. Fleming; Cameron Barrows; John T. Rotenberry
2012-01-01
To guide monitoring of burrowing owls (Athene cunicularia) in the Coachella Valley, California, USA, we analyzed survey-method-specific seasonal variation in detectability. Point-based call-broadcast surveys yielded high early season detectability that then declined through time, whereas detectability on driving surveys increased through the season. Point surveys...
Options for Robust Airfoil Optimization under Uncertainty
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Li, Wu
2002-01-01
A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.
Magnus, Manya; Herwehe, Jane; Andrews, Laura; Gibson, Laura; Daigrepont, Nathan; De Leon, Jordana M; Hyslop, Newton E; Styron, Steven; Wilcox, Ronald; Kaiser, Michael; Butler, Michael K
2009-02-01
Health information technology (HIT) offers the potential to improve care for persons living with HIV. Provider satisfaction with HIT is essential to realize benefits, yet its evaluation presents challenges. An HIV-specific, electronic clinical management and reporting system was implemented in Louisiana's eight HIV clinics, serving over 7500. A serial cross-sectional survey was administered at three points between April 2002 and July 2005; qualitative methods were used to augment quantitative ones. Multivariable methods were used to characterize provider satisfaction. The majority of the sample (n = 196; T1 = 105; T2 = 46; T3 = 45) was female (80.0%), between the ages of 25 and 50 years (68.3%), frequent providers at that clinic (53.7% more than 4 days per week), and had been at the same clinic for a year or more (85.0%). Improvements in satisfaction were observed in patient tracking (p < 0.05), distribution of educational materials (p < 0.04), and belief that electronic systems improve care (p < 0.05). Provider self-reports of time to complete critical functions decreased for all tasks, two significantly so. Time (in minutes) to find the current CD4 count decreased at each time point (mean 3.9 [standard deviation {SD} 5.8], 2.9 [2.3], 2.1 [2.6], p > 0.05), time to find the current viral load decreased at each time point (mean 4.0 [SD 5.6], 2.9 [2.5], 1.8 [2.6], p = 0.08), time to find current antiretroviral status decreased at each time point (mean 3.9 [SD 4.7], 2.9 [3.7], 1.5 [1.1], p < 0.04), and time to find the history of antiretroviral use decreased at each time point (mean 15.1 [SD 21.9], 6.0 [5.4], 5.4 [7.2], p < 0.04). Time savings were realized, averaging 16.1 minutes per visit (p < 0.04). Providers were satisfied with HIT in multiple domains, and significant time savings were realized.
Luo, Xiongbiao; Wan, Ying; He, Xiangjian; Mori, Kensaku
2015-02-01
Registration of pre-clinical images to physical space is indispensable for computer-assisted endoscopic interventions in operating rooms. Electromagnetically navigated endoscopic interventions are increasingly performed in current diagnosis and treatment. Such interventions use an electromagnetic tracker with a miniature sensor, usually attached at the endoscope's distal tip, to track endoscope movements in real time in a pre-clinical image space. Spatial alignment between the electromagnetic tracker (or sensor) and the pre-clinical images must be performed to navigate the endoscope to target regions. This paper proposes an adaptive marker-free registration method that uses a multiple-point selection strategy. This method seeks to address the assumption that the endoscope is operated along the centerline of an intraluminal organ, which is easily violated during interventions. We introduce an adaptive strategy that generates multiple points in terms of sensor measurements and endoscope tip center calibration. From these generated points, we adaptively choose the optimal point, i.e., the one closest to its assigned centerline of the hollow organ, to perform registration. The experimental results demonstrate that our proposed adaptive strategy significantly reduced the target registration error from 5.32 to 2.59 mm in static phantom validation, as well as from at least 7.58 mm to 4.71 mm in dynamic phantom validation, compared to currently available methods. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
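The selection step of the multiple-point strategy reduces, in essence, to choosing the generated candidate closest to the organ centerline. A minimal sketch, using distance to the nearest centerline vertex as a simple proxy for distance to the curve; the names are illustrative, not the paper's implementation:

```python
def choose_registration_point(candidates, centerline):
    """Among generated candidate points (3-D tuples), pick the one
    closest to the organ centerline, given as a list of 3-D vertices.
    Nearest-vertex distance is used as a proxy for point-to-curve
    distance."""
    def dist2(p, q):
        # squared Euclidean distance (monotone in distance, so no sqrt needed)
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(candidates, key=lambda p: min(dist2(p, v) for v in centerline))
```

Squared distance is compared instead of true distance because the minimizer is the same and the square root is unnecessary work in the inner loop.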
A Method of Time-Series Change Detection Using Full PolSAR Images from Different Sensors
NASA Astrophysics Data System (ADS)
Liu, W.; Yang, J.; Zhao, J.; Shi, H.; Yang, L.
2018-04-01
Most of the existing change detection methods using full polarimetric synthetic aperture radar (PolSAR) are limited to detecting change between two points in time. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of a time-series PolSAR stack is calculated by an omnibus statistical test. Secondly, difference images between any two images at different times are acquired by the Rj statistical test. In the last step, a generalized Gaussian mixture model (GGMM) is used to obtain the time-series change detection maps. To verify the effectiveness of the proposed method, we carried out a change detection experiment using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. The results show that the proposed method can detect time-series change from different sensors.
NASA Astrophysics Data System (ADS)
Fitzpatrick, Matthew R. C.; Kennett, Malcolm P.
2018-05-01
We develop a formalism that allows the study of correlations in space and time in both the superfluid and Mott insulating phases of the Bose-Hubbard model. Specifically, we obtain a two-particle irreducible effective action within the contour-time formalism that allows for both equilibrium and out-of-equilibrium phenomena. We derive equations of motion for both the superfluid order parameter and two-point correlation functions. To assess the accuracy of this formalism, we study the equilibrium solution of the equations of motion and compare our results to existing strong-coupling methods as well as exact methods where possible. We discuss applications of this formalism to out-of-equilibrium situations.
Method of noncontacting ultrasonic process monitoring
Garcia, Gabriel V.; Walter, John B.; Telschow, Kenneth L.
1992-01-01
A method of monitoring a material during processing, comprising the steps of (a) shining a detection light on the surface of a material; (b) generating ultrasonic waves at the surface of the material to cause a change in frequency of the detection light; (c) detecting a change in the frequency of the detection light at the surface of the material; (d) detecting said ultrasonic waves at the surface point of detection of the material; (e) measuring the time elapsed between generation of the ultrasonic waves at the surface of the material and their return to the surface point of detection, to determine the transit time; and (f) comparing the transit time to predetermined values to determine properties such as density and the elastic quality of the material.
Khare, Rahul; Sala, Guillaume; Kinahan, Paul; Esposito, Giuseppe; Banovac, Filip; Cleary, Kevin; Enquobahrie, Andinet
2013-01-01
Positron emission tomography computed tomography (PET-CT) images are increasingly being used for guidance during percutaneous biopsy. However, due to the physics of image acquisition, PET-CT images are susceptible to problems due to respiratory and cardiac motion, leading to inaccurate tumor localization, shape distortion, and attenuation correction. To address these problems, we present a method for motion correction that relies on respiratory-gated CT images aligned using a deformable registration algorithm. In this work, we use two deformable registration algorithms and two optimization approaches for registering the CT images obtained over the respiratory cycle. The two algorithms are BSpline and symmetric forces Demons registration. In the first optimization approach, CT images at each time point are registered to a single reference time point. In the second approach, deformation maps are obtained to align each CT time point with its adjacent time point. These deformations are then composed to find the deformation with respect to a reference time point. We evaluate these two algorithms and optimization approaches using respiratory-gated CT images obtained from 7 patients. Our results show that, overall, the BSpline registration algorithm with the reference optimization approach gives the best results.
An analysis of neural receptive field plasticity by point process adaptive filtering
Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor
2001-01-01
Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
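The instantaneous steepest-descent update described above can be sketched for a one-dimensional Gaussian place field lambda(x) = exp(alpha - (x - mu)^2 / (2*sigma^2)): at each time bin, the parameters move along the gradient of the instantaneous log likelihood dN*log(lambda) - lambda*dt. This is an illustrative sketch under that model choice, not the authors' implementation; the parameterization and names are assumptions.

```python
import math

def adaptive_point_process_filter(positions, spikes, dt, eps, alpha0, mu0, sigma):
    """Instantaneous steepest-descent filter for a 1-D Gaussian place field
    lambda(x) = exp(alpha - (x - mu)**2 / (2 * sigma**2)).

    positions: animal position at each time bin.
    spikes:    spike count dN in each bin (0 or 1 for small dt).
    Returns the trajectory of (alpha, mu) estimates.
    """
    alpha, mu = alpha0, mu0
    track = []
    for x, dN in zip(positions, spikes):
        lam = math.exp(alpha - (x - mu) ** 2 / (2 * sigma ** 2))
        # gradient of the instantaneous log likelihood dN*log(lam) - lam*dt:
        #   d/dalpha = dN - lam*dt
        #   d/dmu    = (x - mu)/sigma**2 * (dN - lam*dt)
        innovation = dN - lam * dt
        alpha += eps * innovation
        mu += eps * (x - mu) / sigma ** 2 * innovation
        track.append((alpha, mu))
    return track
```

Because the update uses only the current bin, the estimates can in principle track receptive field drift on the time scale of the binning, which is the point of the adaptive approach.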
Bellera, C A; Penel, N; Ouali, M; Bonvalot, S; Casali, P G; Nielsen, O S; Delannes, M; Litière, S; Bonnetain, F; Dabakuyo, T S; Benjamin, R S; Blay, J-Y; Bui, B N; Collin, F; Delaney, T F; Duffaud, F; Filleron, T; Fiore, M; Gelderblom, H; George, S; Grimer, R; Grosclaude, P; Gronchi, A; Haas, R; Hohenberger, P; Issels, R; Italiano, A; Jooste, V; Krarup-Hansen, A; Le Péchoux, C; Mussi, C; Oberlin, O; Patel, S; Piperno-Neumann, S; Raut, C; Ray-Coquard, I; Rutkowski, P; Schuetze, S; Sleijfer, S; Stoeckle, E; Van Glabbeke, M; Woll, P; Gourgou-Bourgade, S; Mathoulin-Pélissier, S
2015-05-01
The use of potential surrogate end points for overall survival, such as disease-free survival (DFS) or time-to-treatment failure (TTF) is increasingly common in randomized controlled trials (RCTs) in cancer. However, the definition of time-to-event (TTE) end points is rarely precise and lacks uniformity across trials. End point definition can impact trial results by affecting estimation of treatment effect and statistical power. The DATECAN initiative (Definition for the Assessment of Time-to-event End points in CANcer trials) aims to provide recommendations for definitions of TTE end points. We report guidelines for RCT in sarcomas and gastrointestinal stromal tumors (GIST). We first carried out a literature review to identify TTE end points (primary or secondary) reported in publications of RCT. An international multidisciplinary panel of experts proposed recommendations for the definitions of these end points. Recommendations were developed through a validated consensus method formalizing the degree of agreement among experts. Recommended guidelines for the definition of TTE end points commonly used in RCT for sarcomas and GIST are provided for adjuvant and metastatic settings, including DFS, TTF, time to progression and others. Use of standardized definitions should facilitate comparison of trials' results, and improve the quality of trial design and reporting. These guidelines could be of particular interest to research scientists involved in the design, conduct, reporting or assessment of RCT such as investigators, statisticians, reviewers, editors or regulatory authorities. © The Author 2014. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method 010403, GeneQuence Listeria Test (DNAH method), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C, and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there were statistically significant differences in method performance between the DNAH method and reference culture procedures for only 2 foods (pasteurized crab meat and lettuce) at the 27 h enrichment time point and for only a single food (pasteurized crab meat) in one trial at the 30 h enrichment time point. Independent laboratory testing with 3 foods showed statistical equivalence between the methods for all foods, and results support the findings of the internal trials. Overall, considering both internal and independent laboratory trials, sensitivity of the DNAH method relative to the reference culture procedures was 90.5%. Results of testing 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the DNAH method was more productive than the reference U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the DNAH method at the 24 h time point. Overall, sensitivity of the DNAH method at 24 h relative to that of the USDA-FSIS method was 152%. 
The DNAH method exhibited extremely high specificity, with only 1% false-positive reactions overall.
Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...
2015-09-01
The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 ( 131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient's tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.
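The point-source baseline used for such comparisons is essentially an inverse-square estimate with exponential decay of the administered activity. A minimal sketch of that textbook relation; the gamma-constant value and numbers below are placeholders, not values from the study:

```python
import math

I131_HALF_LIFE_H = 8.02 * 24  # physical half-life of I-131, in hours

def point_source_dose_rate(A0, gamma_const, d_cm, t_h, half_life_h=I131_HALF_LIFE_H):
    """Point-source external dose rate estimate.

    A0:          administered activity (e.g., GBq).
    gamma_const: dose-rate constant in (dose-rate units) * m**2 / activity.
    d_cm:        separation distance in centimetres.
    t_h:         hours since administration (physical decay only; no
                 biological clearance or tissue attenuation).
    """
    A = A0 * math.exp(-math.log(2.0) * t_h / half_life_h)  # decayed activity
    d_m = d_cm / 100.0
    return gamma_const * A / d_m ** 2  # inverse-square law
```

Its neglect of tissue attenuation and the distributed source is exactly why the phantom-based simulations diverge from it at very small separation distances.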
Unsteady three-dimensional thermal field prediction in turbine blades using nonlinear BEM
NASA Technical Reports Server (NTRS)
Martin, Thomas J.; Dulikravich, George S.
1993-01-01
A time-and-space accurate and computationally efficient fully three dimensional unsteady temperature field analysis computer code has been developed for truly arbitrary configurations. It uses boundary element method (BEM) formulation based on an unsteady Green's function approach, multi-point Gaussian quadrature spatial integration on each panel, and a highly clustered time-step integration. The code accepts either temperatures or heat fluxes as boundary conditions that can vary in time on a point-by-point basis. Comparisons of the BEM numerical results and known analytical unsteady results for simple shapes demonstrate very high accuracy and reliability of the algorithm. An example of computed three dimensional temperature and heat flux fields in a realistically shaped internally cooled turbine blade is also discussed.
NASA Astrophysics Data System (ADS)
Madokoro, H.; Tsukada, M.; Sato, K.
2013-07-01
This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using a scale-invariant feature transform (SIFT), selection of target feature points using one-class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter propagation networks (CPNs) for visualizing spatial relations between categories. Classification results for dynamic images, using time-series images obtained from two robots of different sizes and with different movements, demonstrate that our method can visualize spatial relations between categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category formation under appearance changes of objects.
Lacerda, Vánia A; Pereira, Leandro O; Hirata JUNIOR, Raphael; Perez, Cesar R
2015-12-01
To evaluate the effectiveness of disinfection/sterilization methods and their effects on the polishing capacity, micromorphology, and composition of two different composite finishing and polishing instruments. Two brands of finishing and polishing instruments (Jiffy and Optimize) were analyzed. For the antimicrobial test, 60 points (30 of each brand) were used for polishing composite restorations and submitted to three different disinfection/sterilization methods: none (control), autoclaving, and immersion in peracetic acid for 60 minutes. In vitro tests were performed to evaluate the polishing performance on resin composite disks (Amelogen) using a 3D scanner (Talyscan) and to evaluate the effects on the points' surface composition (XRF) and micromorphology (MEV) after completing a polishing and sterilizing routine five times. Both sterilization/disinfection methods were efficient against cultivable oral organisms, and no deleterious modifications were observed on the point surfaces.
Improving energy efficiency in handheld biometric applications
NASA Astrophysics Data System (ADS)
Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.
2012-06-01
With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating point operations. If a given algorithm implemented integer convolution rather than floating point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared span 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable-size looped convolution, static-size looped convolution, and unrolled looped convolution. All testing was performed using the HTC Thunderbolt, with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time that convolution is responsible for in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
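The integer-versus-floating-point distinction concerns only the arithmetic inside the convolution loop; the loop structure itself is unchanged. A minimal pure-integer, valid-mode 2-D convolution, written here in Python for illustration (the paper's implementations are in C and Java, and this is not the RED code):

```python
def conv2d_int(image, kernel):
    """Valid-mode 2-D convolution using only integer arithmetic.

    image, kernel: lists of lists of ints. True convolution flips the
    kernel in both axes before the multiply-accumulate.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0  # integer accumulator: no floating point anywhere
            for a in range(kh):
                for b in range(kw):
                    acc += image[i + a][j + b] * kernel[kh - 1 - a][kw - 1 - b]
            row.append(acc)
        out.append(row)
    return out
```

On fixed-point image data, replacing the float accumulator and multiplies with integer ones is what yields the energy savings measured in the paper; the "static size" and "unrolled" variants further specialize these loops for a known kernel size.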
a Preliminary Work on Layout Slam for Reconstruction of Indoor Corridor Environments
NASA Astrophysics Data System (ADS)
Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.
2017-09-01
We propose a real time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption for indoor spaces and uses the detected single-image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in 3D space are successfully handled by matching the vanishing directions of consecutive video frames on the Gaussian sphere. Using single-image-based indoor layout features to initialize the system permitted the proposed method to perform real time layout estimation and camera localization in indoor corridor areas. For layout structural corner point matching, we adopted features which are invariant under scale, translation, and rotation. We proposed a new feature matching cost function which considers both local and global context information. The cost function consists of a unary term, which measures pixel-to-pixel orientation differences of the matched corners, and a binary term, which measures the angle differences between directly connected layout corner features. We performed experiments on real scenes in York University campus buildings and on the publicly available RAWSEEDS dataset. The results show that the proposed method performs robustly while producing very limited position and orientation errors.
Crossfit analysis: a novel method to characterize the dynamics of induced plant responses.
Jansen, Jeroen J; van Dam, Nicole M; Hoefsloot, Huub C J; Smilde, Age K
2009-12-16
Many plant species show induced responses that protect them against exogenous attacks. These responses involve the production of many different bioactive compounds. Plant species belonging to the Brassicaceae family produce defensive glucosinolates, which may greatly influence their favorable nutritional properties for humans. Each responding compound may have its own dynamic profile and metabolic relationships with other compounds. The chemical background of the induced response is therefore highly complex, and no single model may reveal all the properties of the response. This study aims to describe the dynamics of the glucosinolate response, measured at three time points after induction in a feral Brassica, by a three-faceted approach based on Principal Component Analysis. First the large-scale aspects of the response are described in a 'global model'; then each time point in the experiment is individually described in 'local models' that focus on phenomena occurring at specific moments in time. Although each local model describes the variation among the plants at one time point as well as possible, the response dynamics are lost. Therefore a novel method called the 'Crossfit' is described that links the local models of different time points to each other. Each element of the described analysis approach reveals different aspects of the response. The Crossfit shows that smaller dynamic changes may occur in the response that are overlooked by global models, as illustrated by the analysis of a metabolic profiling dataset of the same samples.
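The Crossfit linking step itself is not specified in the abstract; the sketch below (synthetic data and a plain numpy PCA, not the authors' code) only illustrates the distinction between the pooled "global model" and the per-time-point "local models" it builds on:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Center the data and project it onto its leading principal components."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]

rng = np.random.default_rng(0)
# Toy data: 3 time points x 10 plants x 5 compounds (hypothetical values)
data = {t: rng.normal(size=(10, 5)) for t in range(3)}

# 'Global model': all time points pooled into one PCA
global_scores, _ = pca_scores(np.vstack(list(data.values())))

# 'Local models': one PCA per time point
local = {t: pca_scores(X) for t, X in data.items()}
print(global_scores.shape)  # (30, 2)
```

The global model captures between-time-point variation at the cost of within-time-point detail; the local models do the reverse, which is the gap the Crossfit linking is designed to bridge.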
NASA Astrophysics Data System (ADS)
Ito, Mika; Arakawa, Mototaka; Kanai, Hiroshi
2018-07-01
Pulse wave velocity (PWV) is used as a diagnostic criterion for arteriosclerosis, a major cause of heart disease and cerebrovascular disease. However, there are several problems with conventional PWV measurement techniques. One is that the pulse wave is assumed to have only an incident component propagating at a constant speed from the heart to the femoral artery, and another is that PWV is determined only from a characteristic time such as the rise time of the blood pressure waveform. In this study, we noninvasively measured the velocity waveform of small vibrations at multiple points on the carotid arterial wall using ultrasound. Local PWV was determined by analyzing the phase component of the velocity waveform by the least squares method. This method allowed measurement of the time change of the PWV at approximately the arrival time of the pulse wave, making it possible to identify the period during which the waveform is not contaminated by the reflected component.
Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl
2014-01-01
Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated real-time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using the 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) over the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of the CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach showed higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
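Assuming a simple linear sensor model (current = slope × glucose + background; the SCGM1 specifics are not given in the abstract), the difference between the two approaches can be sketched as follows, with hypothetical function names and numbers:

```python
def calibrate_two_point(p1, p2):
    """Two-point calibration: fit slope and intercept from two
    (measured_current, reference_glucose) pairs."""
    (i1, g1), (i2, g2) = p1, p2
    slope = (i2 - i1) / (g2 - g1)
    intercept = i1 - slope * g1
    return lambda current: (current - intercept) / slope

def calibrate_one_point(p, background=0.0):
    """One-point calibration: assume the sensor background current is
    (approximately) zero, so a single reference pair fixes the slope."""
    current, glucose = p
    slope = (current - background) / glucose
    return lambda c: (c - background) / slope

# Hypothetical sensor with true slope 2 (current units per mmol/L), zero background
to_glucose = calibrate_one_point((10.0, 5.0))
print(to_glucose(16.0))  # → 8.0
```

The study's finding that the background current of this sensor can be taken as zero is exactly what makes the one-point variant viable: the intercept no longer has to be estimated from a second reference sample.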
Rahman, Nafisur; Kashif, Mohammad
2010-03-01
Point and interval hypothesis tests performed to validate two simple and economical kinetic spectrophotometric methods for the assay of lansoprazole are described. The methods are based on the formation of a chelate complex of the drug with Fe(III) and Zn(II). The reaction is followed spectrophotometrically by measuring the rate of change of absorbance of the coloured chelates of the drug with Fe(III) and Zn(II) at 445 and 510 nm, respectively. The stoichiometric ratios of the lansoprazole-Fe(III) and lansoprazole-Zn(II) complexes were found to be 1:1 and 2:1, respectively. The initial-rate and fixed-time methods are adopted for determination of drug concentrations. The calibration graphs are linear in the ranges 50-200 µg ml⁻¹ (initial-rate method) and 20-180 µg ml⁻¹ (fixed-time method) for the lansoprazole-Fe(III) complex, and 120-300 µg ml⁻¹ (initial-rate method) and 90-210 µg ml⁻¹ (fixed-time method) for the lansoprazole-Zn(II) complex. The inter-day and intra-day precision data showed good accuracy and precision of the proposed procedure for analysis of lansoprazole. The point and interval hypothesis tests indicate that the proposed procedures are not biased. Copyright © 2010 John Wiley & Sons, Ltd.
Baltierra, Nina B; Muessig, Kathryn E; Pike, Emily C; LeGrand, Sara; Bull, Sheana S; Hightow-Weidman, Lisa B
2016-02-01
There has been a rise in internet-based health interventions without a concomitant focus on new methods to measure user engagement and its effect on outcomes. We describe current user tracking methods for internet-based health interventions and offer suggestions for improvement based on the design and pilot testing of healthMpowerment.org (HMP). HMP is a multi-component online intervention for young Black men and transgender women who have sex with men (YBMSM/TW) to reduce risky sexual behaviors, promote healthy living and build social support. The intervention is non-directive, incorporates interactive features, and utilizes a point-based reward system. Fifteen YBMSM/TW (age 20-30) participated in a one-month pilot study to test the usability and efficacy of HMP. Engagement with the intervention was tracked using a customized data capture system and validated with Google Analytics. Usage was measured in time spent (total and across sections) and points earned. Average total time spent on HMP was five hours per person (range 0-13). Total time spent was correlated with total points earned and overall site satisfaction. Measuring engagement in internet-based interventions is crucial to determining efficacy. Multiple methods of tracking helped derive more comprehensive user profiles. Results highlighted the limitations of measures to capture user activity and the elusiveness of the concept of engagement. Copyright © 2016 Elsevier Inc. All rights reserved.
Song, Min-Ho; Choi, Jung-Woo; Kim, Yang-Hann
2012-02-01
A focused source can provide an auditory illusion of a virtual source placed between the loudspeaker array and the listener. When a focused source is generated by time-reversed acoustic focusing solution, its use as a virtual source is limited due to artifacts caused by convergent waves traveling towards the focusing point. This paper proposes an array activation method to reduce the artifacts for a selected listening point inside an array of arbitrary shape. Results show that energy of convergent waves can be reduced up to 60 dB for a large region including the selected listening point. © 2012 Acoustical Society of America
Lando, Asiyanthi Tabran; Nakayama, Hirofumi; Shimaoka, Takayuki
2017-01-01
Methane from landfills contributes to global warming and can pose an explosion hazard. To minimize these effects, emissions must be monitored. This study proposed the application of a portable gas detector (PGD) in point and scanning measurements to estimate the spatial distribution of methane emissions in landfills. The aims of this study were to discover the advantages and disadvantages of point and scanning methods in measuring methane concentrations, discover the spatial distribution of methane emissions, determine the correlation between ambient methane concentration and methane flux, and estimate methane flux and emissions in landfills. This study was carried out in the Tamangapa landfill, Makassar city, Indonesia. Measurement areas were divided into a basic and an expanded area. In the point method, the PGD was held one meter above the landfill surface, whereas the scanning method used a PGD with a data logger mounted on a wire drawn between two poles. The point method was time efficient, needing only one person and eight minutes to measure a 400 m² area, whereas the scanning method could capture many hot spot locations and needed 20 min. The results from the basic area showed that ambient methane concentration and flux had a significant (p < 0.01) positive correlation with R² = 0.7109 and y = 0.1544x. This correlation equation was used to describe the spatial distribution of methane emissions in the expanded area using the Kriging method. The average estimated flux from the scanning method was 71.2 g m⁻² d⁻¹, higher than the 38.3 g m⁻² d⁻¹ from the point method. Further, the scanning method could capture both lower and higher values, which could be useful to evaluate and estimate the possible effects of uncontrolled emissions in a landfill. Copyright © 2016 Elsevier Ltd. All rights reserved.
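A fit of the form y = 0.1544x is ordinary least squares with the intercept forced to zero; the sketch below (hypothetical data, not the Tamangapa measurements) shows how both the slope and the R² reported for such a model are computed:

```python
def fit_through_origin(x, y):
    """Least-squares line y = b*x forced through the origin, with R²
    computed against the mean of y (as in the reported y = 0.1544x fit)."""
    b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    y_mean = sum(y) / len(y)
    ss_res = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - y_mean) ** 2 for yi in y)
    return b, 1 - ss_res / ss_tot

# Hypothetical (ambient concentration, flux) pairs
conc = [1.0, 2.0, 3.0, 4.0]
flux = [0.15, 0.32, 0.45, 0.63]
slope, r2 = fit_through_origin(conc, flux)
print(round(slope, 3), round(r2, 2))
```

Once the slope is fixed, every scanned ambient concentration can be converted to a flux estimate and interpolated over the expanded area (the paper uses Kriging for that step).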
Kessel, Sarah; Cribbes, Scott; Bonasu, Surekha; Rice, William; Qiu, Jean; Chan, Leo Li-Ying
2017-09-01
The development of three-dimensional (3D) multicellular tumor spheroid models for cancer drug discovery research has increased in the recent years. The use of 3D tumor spheroid models may be more representative of the complex in vivo tumor microenvironments in comparison to two-dimensional (2D) assays. Currently, viability of 3D multicellular tumor spheroids has been commonly measured on standard plate-readers using metabolic reagents such as CellTiter-Glo® for end point analysis. Alternatively, high content image cytometers have been used to measure drug effects on spheroid size and viability. Previously, we have demonstrated a novel end point drug screening method for 3D multicellular tumor spheroids using the Celigo Image Cytometer. To better characterize the cancer drug effects, it is important to also measure the kinetic cytotoxic and apoptotic effects on 3D multicellular tumor spheroids. In this work, we demonstrate the use of PI and caspase 3/7 stains to measure viability and apoptosis for 3D multicellular tumor spheroids in real-time. The method was first validated by staining different types of tumor spheroids with PI and caspase 3/7 and monitoring the fluorescent intensities for 16 and 21 days. Next, PI-stained and nonstained control tumor spheroids were digested into single cell suspension to directly measure viability in a 2D assay to determine the potential toxicity of PI. Finally, extensive data analysis was performed on correlating the time-dependent PI and caspase 3/7 fluorescent intensities to the spheroid size and necrotic core formation to determine an optimal starting time point for cancer drug testing. The ability to measure real-time viability and apoptosis is highly important for developing a proper 3D model for screening tumor spheroids, which can allow researchers to determine time-dependent drug effects that usually are not captured by end point assays. 
This would improve the current tumor spheroid analysis method to potentially better identify more qualified cancer drug candidates for drug discovery research. © 2017 International Society for Advancement of Cytometry.
Cesarean section using the Misgav Ladach method.
Federici, D; Lacelli, B; Muggiasca, L; Agarossi, A; Cipolla, L; Conti, M
1997-06-01
To stress the advantages of the Misgav Ladach method for cesarean section. In this study, operative details and the postoperative course of 139 patients who underwent cesarean section according to the Misgav Ladach method in 1995-96 are presented. The Misgav Ladach method reduces operation time, time to child delivery, and time of recovery. The rates of febrile morbidity, wound infection and wound dehiscence are not affected by the new technique. Our study highlights the efficiency and safety of the Misgav Ladach method, and points out the speedy recovery, with early ambulation and resumption of drinking and eating, that brings cesarean section delivery ever closer to natural childbirth.
Near constant-time optimal piecewise LDR to HDR inverse tone mapping
NASA Astrophysics Data System (ADS)
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward compatible HDR image/video compression, it is a general approach to reconstruct HDR from compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial has better mapping accuracy than a 1-piece high-order polynomial or a 2-piecewise linear mapping, but it is also the most time-consuming method because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while requiring 60 times less computation time than the traditional exhaustive search in 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
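The look-up-table idea can be sketched as follows (a Python illustration of the stated idea, not the authors' implementation): prefix sums of Σxᵏ and Σxᵏy make the normal-equation matrix of every candidate pivot available in O(1), so the exhaustive pivot search costs only constant work per pivot after one linear pass:

```python
import numpy as np

def best_pivot_quadratic(x, y):
    """Search every pivot of a 2-piece 2nd-order polynomial fit cheaply:
    the normal-equation entries sum(x^k) and sum(x^k * y) over each piece
    come from prefix-sum look-up tables built once in O(n)."""
    n = len(x)
    px = [np.concatenate(([0.0], np.cumsum(x ** k))) for k in range(5)]
    pxy = [np.concatenate(([0.0], np.cumsum((x ** k) * y))) for k in range(3)]

    def seg_sse(lo, hi):          # fit y ≈ a + b x + c x² on x[lo:hi]
        s = lambda k: px[k][hi] - px[k][lo]
        sy = lambda k: pxy[k][hi] - pxy[k][lo]
        A = np.array([[s(i + j) for j in range(3)] for i in range(3)])
        rhs = np.array([sy(k) for k in range(3)])
        coef = np.linalg.solve(A, rhs)
        pred = coef[0] + coef[1] * x[lo:hi] + coef[2] * x[lo:hi] ** 2
        return float(np.sum((y[lo:hi] - pred) ** 2))

    pivots = range(3, n - 3)      # keep at least 3 points per piece
    return min(pivots, key=lambda p: seg_sse(0, p) + seg_sse(p, n))

x = np.arange(20.0)
y = np.where(x < 10, x ** 2, 200.0 - x)   # two regimes with a break at x = 10
print(best_pivot_quadratic(x, y))  # → 10
```

The paper's variant additionally enforces continuity between the two pieces at the pivot; the O(1)-per-pivot accounting is the same.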
Genetic algorithms in teaching artificial intelligence (automated generation of specific algebras)
NASA Astrophysics Data System (ADS)
Habiballa, Hashim; Jendryscik, Radek
2017-11-01
Teaching essential Artificial Intelligence (AI) methods is an important task for educators in the field of soft computing. The key focus is often on a proper understanding of the principles of AI methods in two essential points - why we use soft-computing methods at all, and how we apply these methods to generate reasonable results in sensible time. We present an interesting problem solved in non-educational research concerning the automated generation of specific algebras in a huge search space. We emphasize the above-mentioned points through an educational case study of this problem.
NASA Astrophysics Data System (ADS)
Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.
2018-06-01
The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques often contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between the model-calculated and measured arrivals. The results of numerical examples and on-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
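The VFOM objective itself is not reproduced in the abstract; the 2D sketch below (hypothetical names, exact arrival times, unit wave speed) only illustrates the underlying idea of searching for the common intersection of the sensor-pair hyperbolas rather than fitting absolute arrivals:

```python
import itertools, math

def tdoa_objective(src, sensors, arrivals, v):
    """Sum of squared hyperbola residuals over all sensor pairs: a source
    on the hyperbola of pair (i, j) satisfies d_i - d_j = v * (t_i - t_j)."""
    d = [math.dist(src, s) for s in sensors]
    return sum((d[i] - d[j] - v * (arrivals[i] - arrivals[j])) ** 2
               for i, j in itertools.combinations(range(len(sensors)), 2))

# Synthetic 2D example: 4 sensors, unit wave speed, true source at (2, 3)
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true = (2.0, 3.0)
arrivals = [math.dist(true, s) for s in sensors]

# Coarse grid search for the common intersection of the hyperbolas
grid = [(x / 2, y / 2) for x in range(21) for y in range(21)]
best = min(grid, key=lambda p: tdoa_objective(p, sensors, arrivals, 1.0))
print(best)  # → (2.0, 3.0)
```

Because only arrival-time differences enter the residuals, a picking error shifts individual hyperbolas rather than breaking the whole fit, which is the tolerance property the VFOM exploits with its smoother virtual objective.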
On the improvement of blood sample collection at clinical laboratories
2014-01-01
Background Blood samples are usually collected daily from different collection points, such as hospitals and health centers, and transported to a core laboratory for testing. This paper presents a project to improve the collection routes of two of the largest clinical laboratories in Spain. These routes must be designed in a cost-efficient manner while satisfying two important constraints: (i) two-hour time windows between collection and delivery, and (ii) vehicle capacity. Methods A heuristic method based on a genetic algorithm has been designed to solve the problem of blood sample collection. The user enters the following information for each collection point: postal address, average collecting time, and average demand (in thermal containers). The algorithm, implemented in C, runs in a few seconds and obtains optimal (or near-optimal) collection routes that specify the collection sequence for each vehicle. Different scenarios using various types of vehicles have been considered. Unless new collection points are added or problem parameters change substantially, routes need to be designed only once. Results The two laboratories in this study previously planned routes manually for 43 and 74 collection points, respectively. These routes were covered by an external carrier company. With the implementation of this algorithm, the number of routes could be reduced from ten to seven in one laboratory and from twelve to nine in the other, which represents significant annual savings in transportation costs. Conclusions The algorithm presented can be easily implemented in other laboratories that face this type of problem, and it is particularly interesting and useful as the number of collection points increases. The method designs blood collection routes with reduced costs that meet the time and capacity constraints of the problem. PMID:24406140
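The laboratories' C implementation is not shown; the toy genetic algorithm below (hypothetical, single vehicle, ignoring the time-window and capacity constraints for brevity) illustrates the permutation encoding and crossover/mutation loop such a routing heuristic builds on:

```python
import random

def route_cost(route, dist):
    """Total travel distance of a closed collection route starting and
    ending at the laboratory (node 0)."""
    tour = [0] + list(route) + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def ga_route(dist, pop_size=30, generations=200, seed=1):
    """Tiny genetic algorithm: elitist selection, order-preserving
    crossover, and swap mutation over permutations of collection points."""
    rng = random.Random(seed)
    points = list(range(1, len(dist)))
    pop = [rng.sample(points, len(points)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_cost(r, dist))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(points))
            child = a[:cut] + [p for p in b if p not in a[:cut]]
            if rng.random() < 0.3:          # swap mutation
                i, j = rng.sample(range(len(points)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda r: route_cost(r, dist))

pos = [0, 1, 3, 2, 4]                      # lab at 0, points strung along a line
dist = [[abs(a - b) for b in pos] for a in pos]
best = ga_route(dist)
print(best, route_cost(best, dist))
```

A production version would penalize time-window and capacity violations in the fitness function and encode multiple vehicles, as the paper's constraints require.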
Comparative study on novel test systems to determine disintegration time of orodispersible films.
Preis, Maren; Gronkowsky, Dorothee; Grytzan, Dominik; Breitkreutz, Jörg
2014-08-01
Orodispersible films (ODFs) are a promising innovative dosage form enabling drug administration without the need for water and minimizing the danger of aspiration due to their fast disintegration in small amounts of liquid. This study focuses on the development of a disintegration test system for ODFs. Two systems were developed and investigated: one provides an electronic end-point, and the other is a transferable setup of the existing disintegration tester for orodispersible tablets. Different ODF preparations were investigated to determine the suitability of the disintegration test systems. The use of different test media and the impact of different storage conditions of ODFs on their disintegration time were additionally investigated. The experiments showed acceptable reproducibility (low deviations within sample replicates due to a clear determination of the measurement end-point). High temperatures and high humidity affected some of the investigated ODFs, resulting in longer disintegration times or even no disintegration within the tested time period. The methods provided clear end-point detection and were applicable for different types of ODFs. By the modification of a conventional test system to enable application for films, a standard method could be presented to ensure uniformity in current quality control settings. © 2014 Royal Pharmaceutical Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biasotto, G.; Simoes, A.Z., E-mail: alezipo@yahoo.com; Foschini, C.R.
Highlights: • BiFeO₃ (BFO) nanoparticles were grown by the hydrothermal microwave method (HTMW). • The soaking time is effective in improving phase formation. • Rietveld refinement reveals an orthorhombic structure. • The observed magnetism of the BFO crystallites is a consequence of particle size. • The HTMW is a genuine technique for low temperatures and short synthesis times. -- Abstract: The hydrothermal microwave method (HTMW) was used to synthesize crystalline bismuth ferrite (BiFeO₃) nanoparticles (BFO) at a temperature of 180 °C with times ranging from 5 min to 1 h. BFO nanoparticles were characterized by means of X-ray analyses, FT-IR, Raman spectroscopy, TG-DTA and FE-SEM. X-ray diffraction results indicated that a longer soaking time was beneficial for suppressing the formation of impurity phases and growing BFO crystallites into almost single-phase perovskites. Typical FT-IR spectra for BFO nanoparticles presented well-defined bands, indicating substantial short-range order in the system. TG-DTA analyses confirmed the presence of lattice OH⁻ groups, commonly found in materials obtained by the HTMW process. Compared with the conventional solid-state reaction process, submicron BFO crystallites with better homogeneity could be produced at a temperature as low as 180 °C. These results show that the HTMW synthesis route is rapid and cost effective, and could be used as an alternative to obtain BFO nanoparticles at 180 °C for 1 h.
Brownian dynamics of confined rigid bodies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delong, Steven; Balboa Usabiaga, Florencio; Donev, Aleksandar, E-mail: donev@courant.nyu.edu
2015-10-14
We introduce numerical methods for simulating the diffusive motion of rigid bodies of arbitrary shape immersed in a viscous fluid. We parameterize the orientation of the bodies using normalized quaternions, which are numerically robust, space efficient, and easy to accumulate. We construct a system of overdamped Langevin equations in the quaternion representation that accounts for hydrodynamic effects, preserves the unit-norm constraint on the quaternion, and is time reversible with respect to the Gibbs-Boltzmann distribution at equilibrium. We introduce two schemes for temporal integration of the overdamped Langevin equations of motion, one based on the Fixman midpoint method and the other based on a random finite difference approach, both of which ensure that the correct stochastic drift term is captured in a computationally efficient way. We study several examples of rigid colloidal particles diffusing near a no-slip boundary and demonstrate the importance of the choice of tracking point on the measured translational mean square displacement (MSD). We examine the average short-time as well as the long-time quasi-two-dimensional diffusion coefficient of a rigid particle sedimented near a bottom wall due to gravity. For several particle shapes, we find a choice of tracking point that makes the MSD essentially linear with time, allowing us to estimate the long-time diffusion coefficient efficiently using a Monte Carlo method. However, in general, such a special choice of tracking point does not exist, and numerical techniques for simulating long trajectories, such as the ones we introduce here, are necessary to study diffusion on long time scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Jianbing, E-mail: yijianbing8@163.com; Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn
2015-10-15
Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performances of the authors’ method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions.
For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors’ method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors’ method ranks 24 of 39. According to the index of the maximum shear stretch, the authors’ method is also efficient to describe the discontinuous motion at the lung boundaries. Conclusions: By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors’ method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.
Field test comparison of two dermal tolerance assessment methods of hand hygiene products.
Girard, R; Carré, E; Pires-Cronenberger, S; Bertin-Mandy, M; Favier-Bulit, M C; Coyault, C; Coudrais, S; Billard, M; Regard, A; Kerhoas, A; Valdeyron, M L; Cracco, B; Misslin, P
2008-06-01
This study aimed to compare the sensitivity and workload requirement of two dermal tolerance assessment methods of hand hygiene products, in order to select a suitable pilot testing method for field tests. An observer-rating method and a self-assessment method were compared in 12 voluntary hospital departments (autumn/winter of 2005-2006). Three test periods of three weeks were separated by two-week intervals during which the routine products were reintroduced. The observer-rating method scored dryness and irritation on four-point scales. In the self-assessment method, the user rated appearance, intactness, moisture content, and sensation on a visual analogue scale which was converted into a 10-point numerical scale. Eleven products (soaps) were tested (223/250 complete reports for observer rating, 131/251 for self-assessment). Two products were significantly less well tolerated than the routine product according to the observers, four products according to the self-assessments. There was no significant difference between the two methods when products were classified according to tolerance (Fisher's test: P=0.491). For the symptom common to both assessment methods (dryness), there was a good correlation between the two methods (Spearman's Rho: P=0.032). The workload was higher for the observer-rating method (288 h of observer time plus 122 h of prevention team and pharmacist time, compared with 15 h of prevention team and pharmacist time for self-assessment). In conclusion, the self-assessment method was considered more suitable for pilot testing, although further time should be allocated for educational measures, as the return rate of complete self-assessment forms was poor.
Urban Growth Detection Using Filtered Landsat Dense Time Trajectory in an Arid City
NASA Astrophysics Data System (ADS)
Ye, Z.; Schneider, A.
2014-12-01
Among remote sensing environment monitoring techniques, time series analysis of biophysical indices is drawing increasing attention. Although many studies have addressed forest disturbance and land cover change detection, few have focused on urban growth mapping at medium spatial resolution. As the Landsat archive has become openly accessible, methods using Landsat time-series imagery to detect urban growth have become possible. A time trajectory from a newly developed urban area shows a dramatic drop in vegetation index. This enables the use of time trajectory analysis to distinguish impervious surface from crop land, which has a different temporal biophysical pattern. The time of change can also be estimated, yet many challenges remain. Landsat data have low temporal resolution, made worse when cloud-contaminated pixels and the SLC-off effect exist. It is difficult to tease apart intra-annual variation, inter-annual variation, and land cover differences in a time series. Here, several methods of time trajectory analysis are applied and compared to find a computationally efficient and accurate approach to urban growth detection. A case study city, Ankara, Turkey, was chosen for its arid climate and varied landscape. For preliminary research, Landsat TM and ETM+ scenes from 1998 to 2002 were chosen, with NDVI, EVI, and SAVI as the biophysical indices. The procedure starts with seasonality filtering: only areas with seasonality need to be filtered, so as to decompose the seasonal component and extract the overall trend. Harmonic transform, wavelet transform, and a pre-defined bell-shaped filter are used to estimate the overall trend in the time trajectory for each pixel. A point with a significant drop in the trajectory is tagged as a change point. After an urban change is detected, forward and backward checking is undertaken to make sure it is genuinely new urban expansion rather than short-term crop fallow or forest disturbance.
The method proposed here can capture most of the urban growth during the study period, although the accuracy of the estimated change dates is somewhat lower. Results from the several biophysical indices and filtering methods are similar. Some fallows and bare lands in arid areas are easily confused with urban impervious surfaces.
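The tagging-plus-forward-check idea described in this abstract can be sketched as a minimal threshold test on a per-pixel trend series. The drop threshold, persistence window, and function name below are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def detect_urban_change(trend, drop=0.3, persist=2):
    """Flag the first index where a smoothed vegetation-index trend
    drops by more than `drop` (absolute index units) relative to the
    mean of all preceding values, and stays low for `persist` steps
    (the forward check against short-lived crop fallow)."""
    trend = np.asarray(trend, dtype=float)
    for t in range(1, len(trend) - persist + 1):
        baseline = trend[:t].mean()
        if np.all(baseline - trend[t:t + persist] > drop):
            return t  # index of the tagged change point
    return None  # no persistent drop found

# Stable vegetated pixel, then conversion to impervious surface:
series = [0.62, 0.60, 0.61, 0.22, 0.20, 0.21]
print(detect_urban_change(series))  # -> 3
```

A real pipeline would run this after the seasonality filtering step, so that the intra-annual cycle does not trigger false change points.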
Welding deviation detection algorithm based on extremum of molten pool image contour
NASA Astrophysics Data System (ADS)
Zou, Yong; Jiang, Lipei; Li, Yunhua; Xue, Long; Huang, Junfen; Huang, Jiqiang
2016-01-01
Welding deviation detection is the basis of robotic tracking welding, but on-line real-time measurement of welding deviation is still not well solved by existing methods. Gas metal arc welding (GMAW) molten pool images contain abundant information that is important for the control of welding seam tracking. By studying molten pool images, the physical meaning of the curvature extrema of the molten pool contour is revealed: the deviation information points of the welding wire center and the molten tip center are the maximum and the local maximum of the contour curvature, and the horizontal welding deviation is the difference in position between these two extremum points. A new method of weld deviation detection is presented, comprising preprocessing of the molten pool images, extraction and segmentation of the contours, location of the contour extremum points, and calculation of the welding deviation. Extracting the contours is the premise, segmenting the contour lines is the foundation, and locating the contour extremum points is the key. The contour images are extracted with a discrete dyadic wavelet transform and divided into two sub-contours covering the welding wire and the molten tip separately. The curvature at each point of the two sub-contour lines is calculated with an approximate multi-point curvature formula for plane curves, and the two curvature extremum points are the features needed for the welding deviation calculation. Tests and analyses show that the maximum error of the on-line welding deviation obtained is 2 pixels (0.16 mm), and the algorithm is stable enough to meet the real-time control requirements of the pipeline at speeds below 500 mm/min. The method can be applied to on-line automatic welding deviation detection.
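The curvature-extremum idea can be illustrated with a discrete plane-curve curvature estimate. The central-difference formula below stands in for the paper's multi-point approximation, and the synthetic parabolic contours are assumptions for demonstration only:

```python
import numpy as np

def discrete_curvature(x, y):
    """Approximate curvature k = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
    at each contour point using central differences (a stand-in for
    the paper's multi-point plane-curve formula)."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

def welding_deviation(wire_x, wire_y, tip_x, tip_y):
    """Horizontal deviation between the curvature extremum of the
    wire sub-contour and that of the molten-tip sub-contour."""
    wire_peak = int(np.argmax(discrete_curvature(wire_x, wire_y)))
    tip_peak = int(np.argmax(discrete_curvature(tip_x, tip_y)))
    return wire_x[wire_peak] - tip_x[tip_peak]

# Two synthetic sub-contours whose vertices are offset by 0.16:
x = np.linspace(-1, 1, 201)
dev = welding_deviation(x, x ** 2, x, (x - 0.16) ** 2)
print(round(dev, 2))  # -> -0.16
```

On real molten pool images the two sub-contours would come from the wavelet-based contour extraction and segmentation steps described above.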
Feature-based registration of historical aerial images by Area Minimization
NASA Astrophysics Data System (ADS)
Nagarajan, Sudhagar; Schenk, Toni
2016-06-01
The registration of historical images plays a significant role in assessing changes in land topography over time. By comparing historical aerial images with recent data, geometric changes that have taken place over the years can be quantified. However, the lack of ground control information and precise camera parameters has limited scientists' ability to reliably incorporate historical images into change detection studies. Other limitations include the methods of determining identical points between recent and historical images, which has proven to be a cumbersome task due to continuous land cover changes. Our research demonstrates a method of registering historical images using Time Invariant Line (TIL) features. TIL features are different representations of the same line features in multi-temporal data without explicit point-to-point or straight line-to-straight line correspondence. We successfully determined the exterior orientation of historical images by minimizing the area formed between corresponding TIL features in recent and historical images. We then tested the feasibility of the approach with synthetic and real data and analyzed the results. Based on our analysis, this method shows promise for long-term 3D change detection studies.
2008-09-01
of magnetic UXO. The prototype STAR Sensor comprises: a) a cubic array of eight fluxgate magnetometers; b) a 24-channel data acquisition/signal...array (shaded boxes) of eight low-noise Triaxial Fluxgate Magnetometers (TFM) develops 24 channels of vector B-field data. Processor hardware
Comparing Behavior Modification Approaches to Habit Decrement--Smoking
ERIC Educational Resources Information Center
Wagner, M. K.; Bragg, R. A.
1970-01-01
Five methods for control of smoking were tested on 54 Ss using systematic desensitization (SD), covert sensitization (CS), a combination of SD and CS (SD-CS), relaxation, and counseling. The SD-CS treatment was superior at all points during the treatment, though not significantly superior to all other treatments at all time points. Reprints from…
USDA-ARS?s Scientific Manuscript database
Accurate and timely spatial predictions of vegetation cover from remote imagery are an important data source for natural resource management. High-quality in situ data are needed to develop and validate these products. Point-intercept sampling techniques are a common method for obtaining quantitativ...
Incorporating availability for detection in estimates of bird abundance
Diefenbach, D.R.; Marshall, M.R.; Mattice, J.A.; Brauning, D.W.
2007-01-01
Several bird-survey methods have been proposed that provide an estimated detection probability so that bird-count statistics can be used to estimate bird abundance. However, some of these estimators adjust counts of birds observed by the probability that a bird is detected and assume that all birds are available to be detected at the time of the survey. We marked male Henslow's Sparrows (Ammodramus henslowii) and Grasshopper Sparrows (A. savannarum) and monitored their behavior during May-July 2002 and 2003 to estimate the proportion of time they were available for detection. We found that the availability of Henslow's Sparrows declined in late June to <10% for 5- or 10-min point counts when a male had to sing and be visible to the observer; but during 20 May-19 June, males were available for detection 39.1% (SD = 27.3) of the time for 5-min point counts and 43.9% (SD = 28.9) of the time for 10-min point counts (n = 54). We detected no temporal changes in availability for Grasshopper Sparrows, but estimated availability to be much lower for 5-min point counts (10.3%, SD = 12.2) than for 10-min point counts (19.2%, SD = 22.3) when males had to be visible and sing during the sampling period (n = 80). For distance sampling, we estimated the availability of Henslow's Sparrows to be 44.2% (SD = 29.0) and the availability of Grasshopper Sparrows to be 20.6% (SD = 23.5). We show how our estimates of availability can be incorporated in the abundance and variance estimators for distance sampling and modify the abundance and variance estimators for the double-observer method. Methods that directly estimate availability from bird counts but also incorporate detection probabilities need further development and will be important for obtaining unbiased estimates of abundance for these species.
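The core adjustment described here divides the raw count by both probabilities. The sketch below uses the paper's 39.1% availability figure for Henslow's Sparrow, but the count and detection probability are invented for illustration, and the paper's full estimators also propagate variance:

```python
def adjusted_abundance(count, p_detect, p_available):
    """Abundance estimate when birds must first be available (singing
    and visible) and then be detected: N = C / (p_detect * p_available).
    Without the availability term, N would be underestimated."""
    return count / (p_detect * p_available)

# 40 males counted, assumed detection probability 0.8, availability
# 39.1% (the 5-min point-count estimate for Henslow's Sparrow):
print(round(adjusted_abundance(40, 0.8, 0.391)))  # -> 128
```

Dropping the availability factor here would give 40/0.8 = 50 birds, illustrating how large the bias from ignoring availability can be.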
Dai, James Y.; Hughes, James P.
2012-01-01
The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448
NASA Astrophysics Data System (ADS)
Chaidee, S.; Pakawanwong, P.; Suppakitpaisarn, V.; Teerasawat, P.
2017-09-01
In this work, we devise an efficient method for the land-use optimization problem based on the Laguerre Voronoi diagram. Previous Voronoi diagram-based methods are more efficient and more suitable for interactive design than discrete optimization-based methods, but, in many cases, their outputs do not satisfy area constraints. To cope with this problem, we propose a force-directed graph drawing algorithm, which automatically allocates the generating points of the Voronoi diagram to appropriate positions. Then, we construct a Laguerre Voronoi diagram based on these generating points, use linear programs to adjust each cell, and reconstruct the diagram based on the adjustment. We apply the proposed method to the practical case study of Chiang Mai University's allocated land for a mixed-use complex. For this case study, compared to another Voronoi diagram-based method, we decrease the land allocation error by 62.557%. Although our computation time is larger than that of the previous Voronoi diagram-based method, it is still suitable for interactive design.
NASA Astrophysics Data System (ADS)
Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka
Instability of the calculation process and growth in calculation time with increasing problem size remain the major issues to be solved before continuous optimization techniques can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables approaching their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe numerical results on the commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (Economic Load Dispatching in electric power supply scheduling) are also described as a practical industrial application.
NASA Astrophysics Data System (ADS)
Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.
2016-06-01
Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high density data of the whole object in a short time, but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is low density of data, while for non-contact methods it is low accuracy. In this paper, a method for fusion of data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), is presented to overcome the limitations of both measuring methods. In the proposed method, the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair, the coordinates of the point from contact measurement are treated as a reference for the corresponding point from non-contact measurement. A transformation enabling displacement of characteristic points from the optical measurement to their matches from contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade, and an engine cover. For the planar surface, the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces, the improvement was higher for raw data than for data after creation of a mesh of triangles.
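The alignment step, in which matched characteristic-point pairs drive a transformation of the whole optical point cloud, can be sketched with a least-squares rigid transform (Kabsch/Procrustes). This is a common stand-in under the assumption of a rigid alignment; the paper's virtual-marker construction is richer:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src points
    onto dst (Kabsch algorithm): center both sets, SVD the cross-
    covariance, correct for reflection, recover the translation."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid improper rotation
    R = Vt.T @ np.diag([1] * (src.shape[1] - 1) + [d]) @ U.T
    return R, cd - R @ cs

# Characteristic points from the optical scan and their CMM references
# (synthetic: CMM frame is a small rotation plus offset):
optical = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta), np.cos(theta), 0], [0, 0, 1.0]])
cmm = optical @ Rz.T + np.array([0.2, -0.1, 0.05])
R, t = rigid_transform(optical, cmm)
cloud_aligned = optical @ R.T + t  # apply to the whole point cloud
print(np.allclose(cloud_aligned, cmm))  # -> True
```

In practice the recovered (R, t) would be applied to the full HDLA cloud, not just the marker points, exactly as the abstract describes.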
A method for analyzing temporal patterns of variability of a time series from Poincare plots.
Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E
2012-07-01
The Poincaré plot is a popular two-dimensional, time series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variabilities. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding a richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis and can be applied more generally, including Poincaré plots with multiple clusters, and more consistently than the conventional measures and can address questions regarding potential structure underlying the variability of a data set.
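The multiple-time-delay idea can be illustrated with the standard SD1/SD2 descriptors of a lagged return map. This sketch covers only the conventional cumulative measures that TPV builds on; the temporal quantification in TPV itself is richer:

```python
import numpy as np

def poincare_sd(x, lag=1):
    """SD1/SD2 of the Poincare plot (x_t, x_{t+lag}). SD1 measures
    spread across the identity line (short-term variability), SD2
    spread along it; lag > 1 extends the usual circle-return plot
    to multiple time delays."""
    a, b = np.asarray(x[:-lag], float), np.asarray(x[lag:], float)
    sd1 = np.std((b - a) / np.sqrt(2))  # across the identity line
    sd2 = np.std((b + a) / np.sqrt(2))  # along the identity line
    return sd1, sd2

rr = 800 + 50 * np.sin(np.arange(200) * 0.3)  # synthetic R-R series (ms)
for lag in (1, 2, 5):
    print(lag, poincare_sd(rr, lag))
```

Sweeping the lag, as in the last loop, is the "multiple time delays" aspect highlighted in the abstract: structure that is invisible at lag 1 can appear at longer delays.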
NASA Astrophysics Data System (ADS)
Park, Sun-Youp; Choi, Jin; Roh, Dong-Goo; Park, Maru; Jo, Jung Hyun; Yim, Hong-Suh; Park, Young-Sik; Bae, Young-Ho; Park, Jang-Hyun; Moon, Hong-Kyu; Choi, Young-Jun; Cho, Sungki; Choi, Eun-Jung
2016-09-01
As described in the previous paper (Park et al. 2013), the detector subsystem of optical wide-field patrol (OWL) provides many observational data points of a single artificial satellite or space debris in the form of small streaks, using a chopper system and a time tagger. The position and the corresponding time data are matched assuming that the length of a streak on the CCD frame is proportional to the time duration of the exposure during which the chopper blades do not obscure the CCD window. In the previous study, however, the length was measured using the diagonal of the rectangle of the image area containing the streak; the results were quite ambiguous and inaccurate, allowing possible matching error of positions and time data. Furthermore, because only one (position, time) data point is created from one streak, the efficiency of the observation decreases. To define the length of a streak correctly, it is important to locate the endpoints of a streak. In this paper, a method using a differential convolution mask pattern is tested. This method can be used to obtain the positions where the pixel values are changed sharply. These endpoints can be regarded as directly detected positional data, and the number of data points is doubled by this result.
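The endpoint-location step can be sketched in one dimension: convolve the intensity profile along the streak's axis with a differential mask and take the strongest positive and negative responses as the two endpoints. The mask shape below is an illustrative assumption, not the OWL pipeline's actual pattern:

```python
import numpy as np

def streak_endpoints(profile, mask=(-1, -1, 0, 1, 1)):
    """Locate the two positions of sharpest intensity change along a
    streak's axis by correlating with a differential mask (implemented
    as convolution with the reversed mask)."""
    response = np.convolve(profile, mask[::-1], mode="same")
    rise = int(np.argmax(response))  # background -> streak transition
    fall = int(np.argmin(response))  # streak -> background transition
    return rise, fall

# Flat background with a bright streak spanning pixels 30..69:
profile = np.zeros(100)
profile[30:70] = 1.0
print(streak_endpoints(profile))
```

With both endpoints located, the streak length follows directly, and, as the abstract notes, each streak then yields two (position, time) data points instead of one.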
Churkin, Alexander; Barash, Danny
2008-01-01
Background RNAmute is an interactive Java application which, given an RNA sequence, calculates the secondary structure of all single point mutations and organizes them into categories according to their similarity to the predicted structure of the wild type. The secondary structure predictions are performed using the Vienna RNA package. A more efficient implementation of RNAmute is needed, however, to extend from the case of single point mutations to the general case of multiple point mutations, which may often be desired for computational predictions alongside mutagenesis experiments. But analyzing multiple point mutations, a process that requires traversing all possible mutations, becomes highly expensive since the running time is O(n^m) for a sequence of length n with m-point mutations. Using Vienna's RNAsubopt, we present a method that selects only those mutations, based on stability considerations, which are likely to be conformationally rearranging. The approach is best examined using the dot plot representation for RNA secondary structure. Results Using RNAsubopt, the suboptimal solutions for a given wild-type sequence are calculated once. Then, specific mutations are selected that are most likely to cause a conformational rearrangement. For an RNA sequence of about 100 nts and 3-point mutations (n = 100, m = 3), for example, the proposed method reduces the running time from several hours or even days to several minutes, thus enabling the practical application of RNAmute to the analysis of multiple-point mutations. Conclusion A highly efficient addition to RNAmute that is as user friendly as the original application but that facilitates the practical analysis of multiple-point mutations is presented. Such an extension can now be exploited prior to site-directed mutagenesis experiments by virologists, for example, who investigate the change of function in an RNA virus via mutations that disrupt important motifs in its secondary structure.
A complete explanation of the application, called MultiRNAmute, is available at [1]. PMID:18445289
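The combinatorial blow-up that motivates the selection strategy is easy to make concrete: an m-point mutant of a length-n sequence over a 4-letter alphabet picks m positions and one of 3 alternative bases at each, which is what makes exhaustive traversal O(n^m):

```python
from math import comb

def mutant_count(n, m):
    """Number of distinct m-point mutants of a length-n RNA sequence:
    choose m positions, with 3 alternative bases at each."""
    return comb(n, m) * 3 ** m

# The n = 100, m = 3 case from the abstract: over 4 million candidate
# structures if every mutant is folded exhaustively.
print(mutant_count(100, 3))  # -> 4365900
```

Folding each of these ~4.4 million mutants is what takes "hours or even days"; pre-filtering with RNAsubopt shrinks that candidate set to the conformationally interesting mutations.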
De Francesco, Vincenzo; Zullo, Angelo; Giorgio, Floriana; Saracino, Ilaria; Zaccaro, Cristina; Hassan, Cesare; Ierardi, Enzo; Di Leo, Alfredo; Fiorini, Giulia; Castelli, Valentina; Lo Re, Giovanna; Vaira, Dino
2014-03-01
Primary clarithromycin resistance is the main factor affecting the efficacy of Helicobacter pylori therapy. This study aimed: (i) to assess the concordance between phenotypic (culture) and genotypic (real-time PCR) tests in resistant strains; (ii) to search, in the case of disagreement between the methods, for point mutations other than those reported as the most frequent in Europe; and (iii) to compare the MICs associated with the single point mutations. In order to perform real-time PCR, we retrieved biopsies from patients in whom H. pylori infection was successfully diagnosed by bacterial culture and clarithromycin resistance was assessed using the Etest. Only patients who had never been previously treated, and with H. pylori strains that were either resistant exclusively to clarithromycin or without any resistance, were included. Biopsies from 82 infected patients were analysed, including 42 strains that were clarithromycin resistant and 40 that were clarithromycin susceptible on culture. On genotypic analysis, at least one of the three most frequently reported point mutations (A2142C, A2142G and A2143G) was detected in only 23 cases (54.8%), with a concordance between the two methods of 0.67. Novel point mutations (A2115G, G2141A and A2144T) were detected in a further 14 out of 19 discordant cases, increasing the resistance detection rate of PCR to 88% (P<0.001; odds ratio 6.1, 95% confidence interval 2-18.6) and the concordance to 0.81. No significant differences in MIC values among different point mutations were observed. This study suggests that: (i) the prevalence of the usually reported point mutations may be decreasing, with a concomitant emergence of new mutations; (ii) PCR-based methods should search for at least six point mutations to achieve good accuracy in detecting clarithromycin resistance; and (iii) none of the tested point mutations is associated with significantly higher MIC values than the others.
Overview of fast algorithm in 3D dynamic holographic display
NASA Astrophysics Data System (ADS)
Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian
2013-08-01
3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed for speeding up the calculation and reducing memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) approaches within the point-based method, and full analytical methods and the one-step method within the polygon-based method. In this presentation, we overview various fast algorithms based on the point-based and polygon-based methods, and focus on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method derived from the 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient at saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.
Techniques for improving transients in learning control systems
NASA Technical Reports Server (NTRS)
Chang, C.-K.; Longman, Richard W.; Phan, Minh
1992-01-01
A discrete modern control formulation is used to study the nature of the transient behavior of the learning process during repetitions. Several alternative learning control schemes are developed to improve the transient performance. These include a new method using an alternating sign on the learning gain, which is very effective in limiting peak transients and also very useful in multiple-input, multiple-output systems. Other methods include learning at an increasing number of points progressing with time, or an increasing number of points of increasing density.
Assessing criticality in seismicity by entropy
NASA Astrophysics Data System (ADS)
Goltz, C.
2003-04-01
There is an ongoing discussion whether the Earth's crust is in a critical state and whether this state is permanent or intermittent. Intermittent criticality would allow specification of time-dependent hazard in principle. Analysis of a spatio-temporally evolving synthetic critical point phenomenon and of real seismicity using configurational entropy shows that the method is a suitable approach for the characterisation of critical point dynamics. Results obtained rather support the notion of intermittent criticality in earthquakes. Statistical significance of the findings is assessed by the method of surrogate data.
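One simple reading of "configurational entropy" is the Shannon entropy of how events distribute over spatial cells, normalized so that 1 means maximally spread and 0 means fully clustered. The binning scheme and normalization below are illustrative assumptions, not necessarily the paper's exact definition:

```python
import numpy as np

def configurational_entropy(counts):
    """Normalized Shannon entropy of event counts over spatial cells:
    1.0 for a uniform spread, 0.0 when all events fall in one cell."""
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()           # drop empty cells, normalize
    return float(-(p * np.log(p)).sum() / np.log(len(counts)))

print(configurational_entropy([5, 5, 5, 5]))   # uniform   -> 1.0
print(configurational_entropy([20, 0, 0, 0]))  # clustered -> 0.0
```

Tracking such an entropy through time for evolving seismicity is the kind of characterisation of critical-point dynamics the abstract refers to: an approach to criticality shows up as a systematic trend in the entropy.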
Automatic short axis orientation of the left ventricle in 3D ultrasound recordings
NASA Astrophysics Data System (ADS)
Pedrosa, João.; Heyde, Brecht; Heeren, Laurens; Engvall, Jan; Zamorano, Jose; Papachristidis, Alexandros; Edvardsen, Thor; Claus, Piet; D'hooge, Jan
2016-04-01
The recent advent of three-dimensional echocardiography has led to an increased interest from the scientific community in left ventricle segmentation frameworks for cardiac volume and function assessment. An automatic orientation of the segmented left ventricular mesh is an important step to obtain a point-to-point correspondence between the mesh and the cardiac anatomy. Furthermore, this would allow for an automatic division of the left ventricle into the standard 17 segments and, thus, fully automatic per-segment analysis, e.g. regional strain assessment. In this work, a method for fully automatic short axis orientation of the segmented left ventricle is presented. The proposed framework aims at detecting the inferior right ventricular insertion point. 211 three-dimensional echocardiographic images were used to validate this framework by comparison to manual annotation of the inferior right ventricular insertion point. A mean unsigned error of 8.05° ± 18.50° was found, whereas the mean signed error was 1.09°. Large deviations between the manual and automatic annotations (>30°) only occurred in 3.79% of cases. The average computation time was 666 ms in a non-optimized MATLAB environment, which potentiates real-time application. In conclusion, a successful automatic real-time method for orientation of the segmented left ventricle is proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, S; Zhu, X; Zhang, M
Purpose Half-beam block is a field-matching technique frequently used in radiotherapy. With no setup error, a well-calibrated linac, and no internal organ motion, two photon fields can be matched seamlessly dosimetry-wise, with their central axes passing through the match line. In actual clinical situations, however, internal organ motion is often inevitable. This study was conducted to investigate its influence on the radiation dose to patient internal points directly under the match line. Methods A clinical setting is modeled as two half-space (x<0 and x>0) radiation fields that are turned on sequentially with a time gap equal to an integer number of patient internal organ motion periods (T0). The point of interest moves with the patient's internal organs, periodically and evenly in and out of the radiation fields, with an average location of x=0. When the fields are delivered without any motion management, the initial phase of the point's movement is unknown. Statistical methods are used to compute the expected value and variance (σ) of the point dose given this uncertainty. Results Analytical solutions are obtained for the expected value and σ of the dose received by a point directly under the match line. The expected value is proportional to the total beam-on time (T1), and σ demonstrates previously unknown periodic behavior.
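The phase-uncertainty effect can be explored numerically for one of the two matched fields: the dose a point receives from the x<0 field is the time it spends in x<0 during beam-on, which depends on the unknown initial phase unless the beam-on time is a whole number of motion periods. This is a numerical toy under assumed sinusoidal motion and unit dose rate, not the paper's analytical solution:

```python
import numpy as np

def half_field_dose(t1, t0=4.0, phase=0.0, dt=1e-3):
    """Time a point x(t) = sin(2*pi*t/t0 + phase) spends in the x<0
    half-field during beam-on time t1, at unit dose rate."""
    t = np.arange(0.0, t1, dt)
    return dt * np.count_nonzero(np.sin(2 * np.pi * t / t0 + phase) < 0)

phases = np.linspace(0, 2 * np.pi, 200, endpoint=False)
for t1 in (4.0, 6.0, 8.0):  # whole and half-integer multiples of t0
    doses = np.array([half_field_dose(t1, phase=p) for p in phases])
    print(t1, round(doses.mean(), 2), round(doses.std(), 3))
```

The mean dose tracks t1/2 in every case, but the spread over phases vanishes when t1 is a whole number of periods and reappears in between, a discrete glimpse of the periodic behavior of σ noted in the abstract.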
A QRS Detection and R Point Recognition Method for Wearable Single-Lead ECG Devices.
Chen, Chieh-Li; Chuang, Chun-Te
2017-08-26
In the new-generation wearable electrocardiogram (ECG) system, signal processing with low power consumption is required to transmit data when detecting dangerous rhythms and to record signals when detecting abnormal rhythms. The QRS complex is a combination of three of the graphical deflections seen on a typical ECG. This study proposes a real-time QRS detection and R point recognition method with low computational complexity while maintaining high accuracy. The enhancement of QRS segments and suppression of P and T waves are carried out by the proposed ECG signal transformation, which also eliminates baseline wandering. In this study, the QRS fiducial point is determined based on the detected crests and troughs of the transformed signal. Subsequently, the R point can be recognized based on four QRS waveform templates, and preliminary heart rhythm classification can be achieved at the same time. The performance of the proposed approach is demonstrated on the benchmark MIT-BIH Arrhythmia Database, where the QRS detection sensitivity (Se) and positive predictivity (+P) are 99.82% and 99.81%, respectively. The results reveal the approach's advantage of low computational complexity, as well as the feasibility of real-time application on a mobile phone and an embedded system.
Detection of longitudinal visual field progression in glaucoma using machine learning.
Yousefi, Siamak; Kiwaki, Taichi; Zheng, Yuhui; Suigara, Hiroki; Asaoka, Ryo; Murata, Hiroshi; Lemij, Hans; Yamanishi, Kenji
2018-06-16
Global indices of standard automated perimetry are insensitive to localized losses, while point-wise indices are sensitive but highly variable. Region-wise indices sit in between. This study introduces a machine-learning-based index for glaucoma progression detection that outperforms global, region-wise, and point-wise indices. Development and comparison of a prognostic index. Visual fields from 2085 eyes of 1214 subjects were used to identify glaucoma progression patterns using machine learning. Visual fields from 133 eyes of 71 glaucoma patients were collected 10 times over 10 weeks to provide a no-change, test-retest dataset. The parameters of all methods were identified using visual field sequences in the test-retest dataset to meet a fixed 95% specificity. An independent dataset of 270 eyes of 136 glaucoma patients and survival analysis were utilized to compare methods. The time to detect progression in 25% of the eyes in the longitudinal dataset using global mean deviation (MD) was 5.2 years (95% confidence interval, 4.1 - 6.5 years); 4.5 years (4.0 - 5.5) using region-wise, 3.9 years (3.5 - 4.6) using point-wise, and 3.5 years (3.1 - 4.0) using machine learning analysis. The time until 25% of eyes showed subsequently confirmed progression, after two additional visits were included, was 6.6 years (5.6 - 7.4 years), 5.7 years (4.8 - 6.7), 5.6 years (4.7 - 6.5), and 5.1 years (4.5 - 6.0) for global, region-wise, point-wise, and machine learning analyses, respectively. Machine learning analysis detects progressing eyes earlier than other methods consistently, with or without confirmation visits. In particular, machine learning detects more slowly progressing eyes than other methods. Copyright © 2018 Elsevier Inc. All rights reserved.
Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem
NASA Astrophysics Data System (ADS)
Omagari, Hiroki; Higashino, Shin-Ichiro
2018-04-01
In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDPs), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method requires the optimal value of each objective function to be calculated in advance, and it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to the DDP, which has many constraint conditions. To resolve these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions, and a new reference solution, named the "provisional-ideal point," to search for the solution preferred by a decision maker. In this way, we eliminate the preliminary calculations and the limited application scope. The results on benchmark test problems show that the proposed method generates the preferred solution efficiently. Its usefulness is also demonstrated by applying it to the DDP. As a result, the delivery path combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with using only one truck.
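The abstract does not give the exact form of the penalty value or the provisional-ideal-point update, so the sketch below only illustrates the general pattern of scalarizing a constrained multi-objective candidate: its distance to a reference (ideal) point in objective space plus a penalty proportional to constraint violation. The function name, penalty weight, and violation convention (g <= 0 means feasible) are assumptions.

```python
import math

def scalarized_cost(objs, ideal, violations, penalty=1e3):
    # distance to the (provisional) ideal point in objective space,
    # plus a penalty proportional to total constraint violation;
    # violations are constraint values g_i, with g_i <= 0 feasible
    dist = math.sqrt(sum((f - z) ** 2 for f, z in zip(objs, ideal)))
    return dist + penalty * sum(max(0.0, g) for g in violations)
```

A feasible candidate is ranked purely by its distance to the reference point, while any infeasible candidate is pushed far down the ranking, which is one common way an evolutionary search is steered toward the feasible region.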
Roads Data Conflation Using Update High Resolution Satellite Images
NASA Astrophysics Data System (ADS)
Abdollahi, A.; Riyahi Bakhtiari, H. R.
2017-11-01
Urbanization, industrialization and modernization are rapidly growing in developing countries. New industrial cities, with all the problems brought on by rapid population growth, need infrastructure to support the growth, which has led to the expansion and development of the road network. A great deal of road network data was produced using traditional methods in past years. Over time, a large amount of descriptive information has been assigned to these map data, but their geometric accuracy and precision no longer meet today's needs. In this regard, it is necessary to improve the geometric accuracy of road network data while preserving the descriptive data attributed to them, and to update the existing geodatabases. Due to the size and extent of the country, updating road network maps using traditional methods is time consuming and costly. Conversely, using remote sensing technology and geographic information systems can reduce costs, save time and increase accuracy and speed. With the increasing availability of high resolution satellite imagery and geospatial datasets, there is an urgent need to combine geographic information from overlapping sources to retain accurate data, minimize redundancy, and reconcile data conflicts. In this research, an innovative method for vector-to-imagery conflation, integrating several image-based and vector-based algorithms, is presented. The SVM method was used for image classification and the Level Set method to extract the roads; the different types of road intersections were then extracted from the imagery using morphological operators. To find the corresponding points, a matching function based on the nearest-neighbor method was applied to the extracted points. Finally, after identifying the matching points, the rubber-sheeting method was used to align the two datasets. Residual and RMSE criteria were used to evaluate accuracy. The results demonstrated excellent performance.
The average root-mean-square error decreased from 11.8 to 4.1 m.
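The matching and evaluation steps above can be sketched minimally: pair each extracted intersection with its nearest reference point (rejecting pairs beyond a gating distance) and report the RMSE of the residuals. The function names and the gating threshold are illustrative assumptions; the actual pipeline also includes the rubber-sheeting alignment, which is omitted here.

```python
import math

def match_points(extracted, reference, max_dist=10.0):
    # pair each extracted intersection with its nearest reference point,
    # discarding pairs farther apart than the gating distance
    pairs = []
    for p in extracted:
        q = min(reference, key=lambda r: math.dist(p, r))
        if math.dist(p, q) <= max_dist:
            pairs.append((p, q))
    return pairs

def rmse(pairs):
    # root-mean-square error of the matched residuals
    return math.sqrt(sum(math.dist(p, q) ** 2 for p, q in pairs) / len(pairs))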
An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base
NASA Astrophysics Data System (ADS)
Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi
The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method has a truncation error, approximated solutions have been guaranteed by error bounds. However, the numerical computation of these bounds is very time-consuming compared with solving the HB equation itself. This paper proposes an algebraic representation of the error bound using a Gröbner base. The algebraic representation makes it possible to decrease the computational cost of the error bound considerably. Moreover, using singular points of the algebraic representation, we can obtain accurate break points of the error bound by collisions.
Liu, Ying; ZENG, Donglin; WANG, Yuanjia
2014-01-01
Summary Dynamic treatment regimens (DTRs) are sequential decision rules, tailored at each point where a clinical decision is made, based on each patient's time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual's response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using the rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples from mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
Electronic method for autofluorography of macromolecules on two-D matrices
Davidson, Jackson B.; Case, Arthur L.
1983-01-01
A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position sensitive detection system. A molecule-containing matrix may be produced by conventional means to produce spots of light at the molecule locations which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image-intensifier and secondary electron conduction camera capable of light integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. Intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100-1000 times.
A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai
2017-05-01
To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), taking into account the advantages and disadvantages of existing maximum power point tracking methods, and according to the output characteristics of TEMs, a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation and constant voltage tracking was put forward in this paper. It first searches for the maximum power point with the P&O algorithm and a quadratic interpolation method; then it forces the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented on the electrical bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with using only the P&O algorithm or the quadratic interpolation method, respectively. The tracking time is only 1.4 s, which is half that of the P&O algorithm and the quadratic interpolation method. The experimental results demonstrate that the tracked maximum power is approximately equal to the real value using the proposed hybrid method, that it handles the voltage fluctuation of the AETEG better than the P&O algorithm alone, and that it resolves the issue that the working point can barely be adjusted with constant voltage tracking alone when the operating conditions change.
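The two search ingredients of the hybrid method can be sketched generically: a P&O step that perturbs the operating voltage in the direction that increased power, and a quadratic interpolation that fits a parabola through three (V, P) samples and jumps to its vertex. This is only the textbook form of each ingredient under assumed step sizes, not the paper's tuned controller, and the constant-voltage holding phase is omitted.

```python
def perturb_observe(v, p, v_prev, p_prev, step=0.1):
    # P&O: keep perturbing in the same direction if power rose,
    # reverse the perturbation direction if power fell
    if p >= p_prev:
        return v + step if v >= v_prev else v - step
    return v - step if v >= v_prev else v + step

def quad_vertex(v1, p1, v2, p2, v3, p3):
    # vertex of the parabola through three (V, P) samples,
    # used as an estimate of the maximum power point voltage
    denom = (v1 - v2) * (v1 - v3) * (v2 - v3)
    a = (v3 * (p2 - p1) + v2 * (p1 - p3) + v1 * (p3 - p2)) / denom
    b = (v3 * v3 * (p1 - p2) + v2 * v2 * (p3 - p1) + v1 * v1 * (p2 - p3)) / denom
    return -b / (2 * a)
```

For a power curve P = -(V - 2)^2 + 4 sampled at V = 1, 2, 3, the interpolation recovers the true maximum power point voltage V = 2 in a single step, which is why combining it with P&O shortens the tracking time.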
Hongyi Xu; Barbic, Jernej
2017-01-01
We present an algorithm for fast continuous collision detection between points and signed distance fields, and demonstrate how to robustly use it for 6-DoF haptic rendering of contact between objects with complex geometry. Continuous collision detection is often needed in computer animation, haptics, and virtual reality applications, but has so far only been investigated for polygon (triangular) geometry representations. We demonstrate how to robustly and continuously detect intersections between points and level sets of the signed distance field. We suggest using an octree subdivision of the distance field for fast traversal of distance field cells. We also give a method to resolve continuous collisions between point clouds organized into a tree hierarchy and a signed distance field, enabling rendering of contact between rigid objects with complex geometry. We investigate and compare two 6-DoF haptic rendering methods now applicable to point-versus-distance field contact for the first time: continuous integration of penalty forces, and a constraint-based method. An experimental comparison to discrete collision detection demonstrates that the continuous method is more robust and can correctly resolve collisions even under high velocities and during complex contact.
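A minimal sketch of continuous collision detection between a moving point and a signed distance field: march along the point's motion segment, find the first sign change of the signed distance, and refine the contact time by bisection. The octree traversal and constraint-based force rendering of the paper are omitted; the sampling density and tolerance here are illustrative assumptions.

```python
import math

def first_contact(phi, p0, p1, samples=64, iters=40):
    # phi: signed distance function (positive outside the object);
    # march along the segment p0 -> p1 and bisect the first sign change
    def lerp(t):
        return tuple(a + t * (b - a) for a, b in zip(p0, p1))
    prev_t, prev_d = 0.0, phi(lerp(0.0))
    for k in range(1, samples + 1):
        t = k / samples
        d = phi(lerp(t))
        if prev_d > 0 >= d:              # crossed from outside to inside
            lo, hi = prev_t, t
            for _ in range(iters):       # bisection refines the contact time
                mid = 0.5 * (lo + hi)
                if phi(lerp(mid)) > 0:
                    lo = mid
                else:
                    hi = mid
            return hi
        prev_t, prev_d = t, d
    return None                          # no collision along the segment
```

Because the crossing is located in parameter space rather than by comparing only the endpoint states, fast-moving points cannot tunnel through thin level sets, which is the robustness advantage the abstract claims over discrete collision detection.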
Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm
Yan, Li; Xie, Hong; Chen, Changjun
2017-01-01
Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%. PMID:28850100
Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.
Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun
2017-08-29
Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%.
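The abstract names the fitness function (Normalized Sum of Matching Scores) without defining it, so the sketch below is only a hypothetical stand-in showing the role such a fitness plays in the GA loop: a candidate transformation scores higher the closer each transformed source point lands to some target point, normalized to [0, 1]. The scoring kernel and names are assumptions, not the paper's formula.

```python
import math

def fitness(transform, source_pts, target_pts):
    # hypothetical stand-in for a normalized matching-score fitness:
    # each transformed source point contributes 1/(1 + d) for its
    # distance d to the nearest target point; the sum is normalized
    total = 0.0
    for p in source_pts:
        q = transform(p)
        d = min(math.dist(q, t) for t in target_pts)
        total += 1.0 / (1.0 + d)
    return total / len(source_pts)
```

A perfect alignment scores exactly 1.0 and any misalignment scores strictly less, so the GA's selection pressure moves the population's transformation parameters toward registration.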
Deconvolution of mixing time series on a graph
Blocker, Alexander W.; Airoldi, Edoardo M.
2013-01-01
In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, yt = Axt, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
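Each ill-posed inversion y_t = A x_t can be stabilized with a regularization term; the paper's multilevel state-space approach is more elaborate, but the basic ridge-regularized normal-equation step, (AᵀA + λI) x = Aᵀ y, can be sketched for a two-flow toy example. The closed-form 2x2 solve and the value of λ are illustrative assumptions.

```python
def ridge_solve_2x2(A, y, lam):
    # solve (A'A + lam*I) x = A'y for a two-column mixing matrix A;
    # the ridge term lam*I regularizes the ill-posed inversion
    m00 = sum(r[0] * r[0] for r in A) + lam
    m01 = sum(r[0] * r[1] for r in A)
    m11 = sum(r[1] * r[1] for r in A) + lam
    b0 = sum(r[0] * yi for r, yi in zip(A, y))
    b1 = sum(r[1] * yi for r, yi in zip(A, y))
    det = m00 * m11 - m01 * m01
    return ((m11 * b0 - m01 * b1) / det, (m00 * b1 - m01 * b0) / det)
```

With λ = 0 and a well-posed A this reduces to ordinary least squares; as the aggregation makes A closer to rank-deficient, a small positive λ trades a little bias for a stable estimate of the latent flows.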
Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2013-01-01
Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.
Turbulent Statistics from Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2012-01-01
Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.
Distributed Optimization System
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2004-11-30
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Y.; Hu, X.; Guan, H.; Liu, P.
2016-06-01
Road extraction in urban areas is a difficult task due to the complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some characteristics that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark testing data set provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time in road extraction using LiDAR data.
Indicator saturation: a novel approach to detect multiple breaks in geodetic time series.
NASA Astrophysics Data System (ADS)
Jackson, L. P.; Pretis, F.; Williams, S. D. P.
2016-12-01
Geodetic time series can record long term trends, quasi-periodic signals at a variety of time scales from days to decades, and sudden breaks due to natural or anthropogenic causes. The causes of breaks range from instrument replacement to earthquakes to unknown (i.e. no attributable cause). Furthermore, breaks can be permanent or short-lived and range at least two orders of magnitude in size (mm to 100's of mm). Accounting for this range of possible signal characteristics requires a flexible time series method that can distinguish between true and false breaks, outliers and time-varying trends. One such method, Indicator Saturation (IS), comes from the field of econometrics, where analysing stochastic signals in these terms is a common problem. The IS approach differs from alternative break detection methods by considering every point in the time series as a break until it is demonstrated statistically that it is not. A linear model is constructed with a break function at every point in time, and all but the statistically significant breaks are removed through a general-to-specific model selection algorithm for more variables than observations. The IS method is flexible because it allows multiple breaks of different forms (e.g. impulses, shifts in the mean, and changing trends) to be detected, while simultaneously modelling any underlying variation driven by additional covariates. We apply the IS method to identify breaks in a suite of synthetic GPS time series used for the Detection of Offsets in GPS Experiments (DOGEX). We optimise the method to maximise the ratio of true-positive to false-positive detections, which improves estimates of errors in the long term rates of land motion currently required by the GPS community.
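The full IS machinery (a saturating set of indicators pruned by general-to-specific selection) is beyond a short sketch, but its core idea, treating every time point as a candidate step break and testing it statistically, can be caricatured as follows: for each candidate break the statistic compares the means before and after it, scaled by their standard errors. The statistic and threshold-free form here are simplifying assumptions, not the IS selection algorithm.

```python
import math

def step_break_stats(y):
    # for each candidate break t, a two-sample-style statistic compares
    # the means before and after t (a caricature of fitting a step
    # indicator at every point and keeping only significant ones)
    stats, n = {}, len(y)
    for t in range(2, n - 1):
        left, right = y[:t], y[t:]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        vl = sum((v - ml) ** 2 for v in left) / max(len(left) - 1, 1)
        vr = sum((v - mr) ** 2 for v in right) / max(len(right) - 1, 1)
        se = math.sqrt(vl / len(left) + vr / len(right)) or 1e-12
        stats[t] = abs(ml - mr) / se
    return stats
```

On a series with a single mean shift, the statistic peaks sharply at the true break; IS generalizes this by handling multiple breaks, impulses, and trend changes simultaneously.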
Omisore, Olatunji Mumini; Han, Shipeng; Ren, Lingxue; Zhang, Nannan; Ivanov, Kamen; Elazab, Ahmed; Wang, Lei
2017-08-01
The snake-like robot is an emerging form of serial-link manipulator with the morphologic design of biological snakes. The redundant robot can be used to assist medical experts in accessing internal organs with minimal or no invasion. Several snake-like robotic designs have been proposed for minimally invasive surgery; however, the few that were developed are yet to be fully explored for clinical procedures. This is due to a lack of capability for full-fledged spatial navigation. In the rare cases where such snake-like designs are spatially flexible, there exists no inverse kinematics (IK) solution with both precise control and fast response. In this study, we propose a non-iterative geometric method for solving the IK of the lead module of a snake-like robot designed for therapy or ablation of abdominal tumors. The proposed method is aimed at providing an accurate and fast IK solution for given target points in the robot's workspace. n-1 virtual points (VPs) were geometrically computed and set as the coordinates of the intermediary joints in an n-link module. Suitable joint angles that can place the end-effector at given target points were then computed by vectorizing the coordinates of the VPs, in addition to the coordinates of the base point, the target point, and the tip of the first link in its default pose. The proposed method is applied to solve the IK of two-link and redundant four-link modules. Both modules were simulated with the Robotics Toolbox in Matlab 8.3 (R2014a). Implementation results show that the proposed method can solve the IK of the spatially flexible robot with minimal error values. Furthermore, analyses of results from both modules show that the geometric method can reach 99.21 and 88.61% of points in their workspaces, respectively, with an error threshold of 1 mm. The proposed method is non-iterative and has a maximum execution time of 0.009 s.
This paper focuses on solving the IK problem of a spatially flexible robot, which is part of a developmental project for abdominal surgery through minimal invasion or natural orifices. The study showed that the proposed geometric method can resolve the IK of the snake-like robot with negligible error offset. Evaluation against well-known methods shows that the proposed method can reach several points in the robot's workspace with high accuracy and shorter computational time simultaneously.
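The paper's VP construction for an n-link module cannot be reproduced from the abstract alone, but the same geometric, non-iterative idea can be sketched for the two-link case: the intermediate joint (a "virtual point") must lie simultaneously at distance l1 from the base and l2 from the target, i.e. at a circle-circle intersection with a closed-form solution. The function name and the choice of elbow side are assumptions.

```python
import math

def two_link_vp(base, target, l1, l2):
    # place the intermediate joint at a circle-circle intersection:
    # distance l1 from the base and l2 from the target (a "virtual point")
    bx, by = base
    tx, ty = target
    d = math.hypot(tx - bx, ty - by)
    if d > l1 + l2 or d < abs(l1 - l2):
        return None                        # target out of reach
    a = (l1 * l1 - l2 * l2 + d * d) / (2 * d)  # base -> chord foot distance
    h = math.sqrt(max(l1 * l1 - a * a, 0.0))   # half-chord height
    ux, uy = (tx - bx) / d, (ty - by) / d      # unit vector base -> target
    # one of the two mirror-image elbow solutions
    return (bx + a * ux - h * uy, by + a * uy + h * ux)
```

Because the joint position comes from a closed-form intersection rather than an iterative solver, the solution time is constant per target, which mirrors the non-iterative, millisecond-scale behavior the abstract reports.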
Multibody Parachute Flight Simulations for Planetary Entry Trajectories Using "Equilibrium Points"
NASA Technical Reports Server (NTRS)
Raiszadeh, Ben
2003-01-01
A method has been developed to reduce numerical stiffness and computer CPU requirements of high fidelity multibody flight simulations involving parachutes for planetary entry trajectories. Typical parachute entry configurations consist of entry bodies suspended from a parachute, connected by flexible lines. To accurately calculate line forces and moments, the simulations need to keep track of the point where the flexible lines meet (confluence point). In previous multibody parachute flight simulations, the confluence point has been modeled as a point mass. Using a point mass for the confluence point tends to make the simulation numerically stiff, because its mass is typically much less than the main rigid-body masses. One solution for stiff differential equations is to use a very small integration time step. However, this results in large computer CPU requirements. In the method described in this paper, the need for using a mass as the confluence point has been eliminated. Instead, the confluence point is modeled using an "equilibrium point". This point is calculated at every integration step as the point at which the sum of all line forces is zero (static equilibrium). The use of this "equilibrium point" has the advantage of both reducing the numerical stiffness of the simulations and eliminating the dynamical equations associated with vibration of a lumped mass on a high-tension string.
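Finding the equilibrium point means solving F(p) = 0 for the net line force at every integration step. As a minimal sketch, assume linear spring-like line forces F(p) = sum_i k_i (a_i - p) toward anchors a_i with stiffnesses k_i (not the simulation's actual line-force model), and move the candidate point down the net force until it balances; for this linear case the result can be checked against the closed-form weighted centroid.

```python
def equilibrium_point(anchors, stiffness, iters=200, eta=0.1):
    # iteratively move the confluence point along the net line force
    # F(p) = sum_i k_i * (a_i - p) until the forces balance
    p = list(anchors[0])
    for _ in range(iters):
        f = [0.0] * len(p)
        for a, k in zip(anchors, stiffness):
            for j in range(len(p)):
                f[j] += k * (a[j] - p[j])
        for j in range(len(p)):
            p[j] += eta * f[j]
    return tuple(p)
```

For linear springs the fixed point is sum(k_i a_i) / sum(k_i); replacing the point mass with this statically solved point removes the fast vibration mode that made the original system stiff.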
Finding Out Critical Points For Real-Time Path Planning
NASA Astrophysics Data System (ADS)
Chen, Wei
1989-03-01
Path planning for a mobile robot is a classic topic, but path planning in a real-time environment is a different issue. The system resources, including sampling time, processing time, inter-process communication time, and memory space, are very limited for this type of application. This paper presents a method which abstracts the world representation from the sensory data and decides which point is a potentially critical point to span the world map, using incomplete knowledge about the physical world and heuristic rules. Without any previous knowledge or map of the workspace, the robot determines the world map by roving through the workspace. The computational complexity for building and searching such a map is not more than O(n²). The find-path problem is well known in robotics. Given an object with an initial location and orientation, a goal location and orientation, and a set of obstacles located in space, the problem is to find a continuous path for the object from the initial position to the goal position which avoids collisions with obstacles along the way. There are many methods to find a collision-free path in a given environment. Techniques for solving this problem can be classified into three approaches: 1) the configuration space approach [1],[2],[3], which represents the polygonal obstacles by vertices in a graph. The idea is to determine those parts of the free space which a reference point of the moving object can occupy without colliding with any obstacles. A path is then found for the reference point through this truly free space. Dealing with rotations turns out to be a major difficulty with this approach, requiring complex geometric algorithms which are computationally expensive. 2) the direct representation of the free space using basic shape primitives such as convex polygons [4] and overlapping generalized cones [5]. 
3) the combination of techniques 1 and 2 [6], by which the space is divided into primary convex regions, overlap regions and obstacle regions; obstacle boundaries with attribute values are then represented by the vertices of a hypergraph. The primary convex regions and overlap regions are represented by hyperedges, and the centroids of the overlaps form the critical points. The difficulty lies in generating the segment graph and estimating the minimum path width. All the techniques mentioned above need previous knowledge about the world to plan a path, and their computational cost is not low. They are not applicable in an unknown and uncertain environment. Due to limited system resources such as CPU time, memory size and knowledge about the specific application in an intelligent system (such as a mobile robot), it is necessary to use algorithms that provide a good decision which is feasible with the available resources in real time, rather than the best answer that could be achieved in unlimited time with unlimited resources. A real-time path planner should meet the following requirements: - Quickly abstract the representation of the world from the sensory data without any previous knowledge about the robot environment. - Easily update the world model to spell out the global-path map and to reflect changes in the robot environment. - Decide where the robot must go and which direction the range sensor should point to in real time with limited resources. The method presented here assumes that the data from the range sensors has been processed by a signal processing unit. The path planner guides the scan of the range sensor, finds critical points, decides where the robot should go and which point is a potential critical point, generates the path map and monitors the robot as it moves to the given point. The program runs recursively until the goal is reached or the whole workspace has been roved through.
Cabrieto, Jedelyn; Tuerlinckx, Francis; Kuppens, Peter; Grassmann, Mariel; Ceulemans, Eva
2017-06-01
Change point detection in multivariate time series is a complex task since, in addition to the mean, the correlation structure of the monitored variables may also alter when change occurs. DeCon was recently developed to detect such changes in mean and/or correlation by combining a moving window approach and robust PCA. However, in the literature, several other methods have been proposed that employ other non-parametric tools: E-divisive, Multirank, and KCP. Since these methods use different statistical approaches, two issues need to be tackled. First, applied researchers may find it hard to appraise the differences between the methods. Second, a direct comparison of the relative performance of all these methods for capturing change points signaling correlation changes is still lacking. Therefore, we present the basic principles behind DeCon, E-divisive, Multirank, and KCP and the corresponding algorithms, to make them more accessible to readers. We further compared their performance through extensive simulations using the settings of Bulteel et al. (Biological Psychology, 98 (1), 29-42, 2014), implying changes in mean and in correlation structure, and those of Matteson and James (Journal of the American Statistical Association, 109 (505), 334-345, 2014), implying different numbers of (noise) variables. KCP emerged as the best method in almost all settings. However, in the case of more than two noise variables, only DeCon performed adequately in detecting correlation changes.
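None of the four compared methods is simple enough to reproduce here, but the kind of signal they target, a change in correlation rather than in mean, can be illustrated with a basic moving-window statistic: the absolute difference between the Pearson correlations in two adjacent windows at each candidate time point. The window length and the use of a raw (unthresholded) statistic are illustrative assumptions.

```python
import math

def corr(xs, ys):
    # Pearson correlation of two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return sxy / (sx * sy)

def corr_change(x, y, w):
    # |corr in window before t - corr in window after t| for each t
    return {t: abs(corr(x[t - w:t], y[t - w:t]) - corr(x[t:t + w], y[t:t + w]))
            for t in range(w, len(x) - w + 1)}
```

On a pair of series whose correlation flips from +1 to -1, the statistic peaks at the flip even though neither series changes its marginal behavior much, which is exactly the case where mean-only detectors fail.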
Testing deformation hypotheses by constraints on a time series of geodetic observations
NASA Astrophysics Data System (ADS)
Velsink, Hiddo
2018-01-01
In geodetic deformation analysis observations are used to identify form and size changes of a geodetic network, representing objects on the earth's surface. The network points are monitored, often continuously, because of suspected deformations. A deformation may affect many points during many epochs. The problem is that the best description of the deformation is, in general, unknown. To find it, different hypothesised deformation models have to be tested systematically for agreement with the observations. The tests have to be capable of stating with a certain probability the size of detectable deformations, and to be datum invariant. A statistical criterion is needed to find the best deformation model. Existing methods do not fulfil these requirements. Here we propose a method that formulates the different hypotheses as sets of constraints on the parameters of a least-squares adjustment model. The constraints can relate to subsets of epochs and to subsets of points, thus combining time series analysis and congruence model analysis. The constraints are formulated as nonstochastic observations in an adjustment model of observation equations. This gives an easy way to test the constraints and to get a quality description. The proposed method aims at providing a good discriminating method to find the best description of a deformation. The method is expected to improve the quality of geodetic deformation analysis. We demonstrate the method with an elaborate example.
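Formulating a deformation hypothesis as nonstochastic observations amounts, in the simplest implementation, to adding pseudo-observation rows with a very large weight to the least-squares adjustment. A minimal two-parameter sketch, testing the "no height difference" hypothesis h1 = h2 (the heavy-weight trick, names, and numbers are illustrative assumptions, not the paper's full adjustment model):

```python
def weighted_ls_2(rows, w):
    # solve the 2-parameter weighted least-squares problem via the
    # normal equations; rows are (a0, a1, y), w are per-row weights
    # (constraint rows get a very large weight)
    n00 = sum(wi * a0 * a0 for (a0, a1, y), wi in zip(rows, w))
    n01 = sum(wi * a0 * a1 for (a0, a1, y), wi in zip(rows, w))
    n11 = sum(wi * a1 * a1 for (a0, a1, y), wi in zip(rows, w))
    b0 = sum(wi * a0 * y for (a0, a1, y), wi in zip(rows, w))
    b1 = sum(wi * a1 * y for (a0, a1, y), wi in zip(rows, w))
    det = n00 * n11 - n01 * n01
    return ((n11 * b0 - n01 * b1) / det, (n00 * b1 - n01 * b0) / det)
```

With observations h1 ≈ 1.0 and h2 ≈ 1.2 and the heavily weighted constraint row h1 - h2 = 0, both estimates are pulled to about 1.1; comparing the residuals of such constrained and unconstrained adjustments is what drives the hypothesis tests the abstract describes.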
Classification of Mobile Laser Scanning Point Clouds from Height Features
NASA Astrophysics Data System (ADS)
Zheng, M.; Lemmens, M.; van Oosterom, P.
2017-09-01
The demand for 3D maps of cities and road networks is steadily growing, and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans, they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour intensive and thus time consuming and expensive. Hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing exponentially. Over the last five years, research on automated 3D mapping of MLS data has intensified tremendously. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploited three features - two height components and one reflectance value - and achieved an overall accuracy of 73%, which is really encouraging for further refining our approach.
Downdating a time-varying square root information filter
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.
1990-01-01
A new method to efficiently downdate an estimate and covariance generated by a discrete time Square Root Information Filter (SRIF) is presented. The method combines the QR factor downdating algorithm of Gill and the decentralized SRIF algorithm of Bierman. Efficient removal of either measurements or a priori information is possible without loss of numerical integrity. Moreover, the method includes features for detecting potential numerical degradation. Performance on a 300 parameter system with 5800 data points shows that the method can be used in real time and hence is a promising tool for interactive data analysis. Additionally, updating a time-varying SRIF filter with either additional measurements or a priori information proceeds analogously.
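The Gill QR-downdating and Bierman decentralized-SRIF machinery is beyond a short example, but the underlying idea, removing a measurement's contribution from an estimate without reprocessing all data, can be sketched in scalar information form. This is a hypothetical normal-equations analogue, not the numerically robust square-root formulation of the paper (accumulating and subtracting normal-equation terms is precisely what an SRIF avoids for conditioning reasons).

```python
class InfoFilter:
    """Scalar information-form estimator for z_i = x + noise, unit weights.
    Downdating subtracts a measurement's contribution exactly, the
    normal-equations analogue of QR-factor downdating in an SRIF."""

    def __init__(self):
        self.lam = 0.0   # information (here simply the measurement count)
        self.eta = 0.0   # information-weighted sum of measurements

    def update(self, z):
        self.lam += 1.0
        self.eta += z

    def downdate(self, z):
        self.lam -= 1.0
        self.eta -= z

    def estimate(self):
        return self.eta / self.lam
```

Note how the downdated estimate matches a from-scratch fit on the remaining data, which is the property the paper's method achieves without loss of numerical integrity.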
De Los Ríos, F. A.; Paluszny, M.
2015-01-01
We consider some methods to extract information about the rotator cuff based on magnetic resonance images; the study aims to define an alternative method of display that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we are going to use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are going to be textured and displayed with the information of the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees the uniform distribution of its points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower as compared with deterministic and other standard statistical techniques. PMID:25650281
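The paper's texture-point generator is not reproduced here, but the standard square-root trick for drawing uniformly distributed points on a triangle, one plausible building block for such a method, goes as follows; the patch corners are arbitrary example values.

```python
import random

def sample_triangle(a, b, c):
    """Draw a uniformly distributed point inside the triangle (a, b, c).
    The square root warps (r1, r2) so that the distribution is uniform in
    area, not in parameter space."""
    r1, r2 = random.random(), random.random()
    s = r1 ** 0.5
    u, v, w = 1.0 - s, s * (1.0 - r2), s * r2   # barycentric coordinates
    return tuple(u * pa + v * pb + w * pc for pa, pb, pc in zip(a, b, c))
```

The same call works for 2-D or 3-D corner tuples, so it applies directly to triangular patches covering a surface such as the humerus head.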
Garashchuk, Sophya; Rassolov, Vitaly A
2008-07-14
Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.
Tracking Subpixel Targets with Critically Sampled Optical Sensors
2012-09-01
The Viterbi algorithm is a dynamic programming method for calculating the MAP in O(tn²) time. The most common use of this algorithm is in the … method to detect subpixel point targets using the sensor's PSF as an identifying characteristic. Using matched filtering theory, a measure is defined to … the ocean surface beneath the cloud will have a different distribution. While the basic methods will adapt to changes in cloud cover over time, it is also …
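The report's tracker cannot be recovered from these fragments, but the Viterbi algorithm it cites is standard; a minimal hidden-Markov-model version, with an invented two-state example, looks like this:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """MAP state sequence of a hidden Markov model via dynamic programming.
    Runtime is O(T * n^2) for T observations and n states, as quoted above."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Invented example (not from the report): hidden health states emitting
# observable symptoms.
states = ("Healthy", "Fever")
start = {"Healthy": 0.6, "Fever": 0.4}
trans = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
         "Fever": {"Healthy": 0.4, "Fever": 0.6}}
emit = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
        "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}
best_path = viterbi(("normal", "cold", "dizzy"), states, start, trans, emit)
```

Each time step maximizes over all n previous states for each of the n current states, which is where the O(tn²) cost comes from.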
Agreement of Tracing and Direct Viewing Techniques for Cervical Vertebral Maturation Assessment.
Wiwatworakul, Opas; Manosudprasit, Montian; Pisek, Poonsak; Chatrchaiwiwatana, Supaporn; Wangsrimongkol, Tasanee
2015-08-01
This study aimed to evaluate agreement among three methods for cervical vertebral maturation (CVM) assessment: direct viewing, tracing only, and tracing with digitized points. Two examiners received training and tests of reliability with each CVM method before evaluation of agreement among methods. The subjects were 96 female-cleft lateral cephalometric radiographs (films of eight subjects at each age from 7 to 18 years). The examiners interpreted the CVM stages of the subjects with a four-week interval between uses of each method. The weighted kappa values for paired comparisons among the three methods were: 0.96-0.98 for direct viewing versus tracing only; 0.93-0.94 for direct viewing versus tracing with digitized points; and 0.96-0.97 for tracing only versus tracing with digitized points. The intraclass correlation coefficient (ICC) value among the three methods was 0.95. These results indicated very good agreement among the methods. Direct viewing is suitable for CVM assessment without spending additional time on tracing. However, the three methods might be used interchangeably.
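Weighted kappa, the agreement statistic reported above, can be computed directly from the two raters' stage assignments. The sketch below implements linear and quadratic weights in the standard disagreement form (the abstract does not state which weighting scheme was used, so that choice is an assumption here).

```python
def weighted_kappa(r1, r2, categories, weight="linear"):
    """Weighted Cohen's kappa for two raters over ordered categories."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Observed joint distribution and its marginals.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1.0 / n
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):
        # Disagreement weight: 0 on the diagonal, growing with distance.
        d = abs(i - j) / (k - 1)
        return d if weight == "linear" else d * d

    po = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - po / pe
```

Perfect agreement gives 1.0 and systematic maximal disagreement gives negative values, matching the usual interpretation of the 0.93-0.98 range above as very good agreement.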
Ground-based measurements of ionospheric dynamics
NASA Astrophysics Data System (ADS)
Kouba, Daniel; Chum, Jaroslav
2018-05-01
Different methods are used to research and monitor ionospheric dynamics from ground measurements: Digisonde Drift Measurements (DDM) and Continuous Doppler Sounding (CDS). For the first time, we present a comparison between the two methods on specific examples. Both methods provide information about the vertical drift velocity component. The DDM provides more information about the drift velocity vector and detected reflection points. However, the method is limited by its relatively low time resolution. In contrast, the strength of CDS is its high time resolution. The discussed methods can be used for real-time monitoring of medium-scale travelling ionospheric disturbances. We conclude that it is advantageous to use both methods simultaneously if possible. The CDS is then applied for disturbance detection and analysis, and the DDM is applied for reflection height control.
Mandel, Micha; Gauthier, Susan A; Guttmann, Charles R G; Weiner, Howard L; Betensky, Rebecca A
2007-12-01
The expanded disability status scale (EDSS) is an ordinal score that measures progression in multiple sclerosis (MS). Progression is defined as reaching an EDSS of a certain level (absolute progression) or an increase of one EDSS point (relative progression). Survival methods for time to progression are not adequate for such data since they do not exploit the EDSS level at the end of follow-up. Instead, we suggest a Markov transitional model applicable to repeated categorical or ordinal data. This approach enables derivation of covariate-specific survival curves, obtained after estimation of the regression coefficients and manipulation of the resulting transition matrix. Large-sample theory and resampling methods are employed to derive pointwise confidence intervals, which perform well in simulation. Methods for generating survival curves for time to EDSS of a certain level, time to an increase of EDSS of at least one point, and time to two consecutive visits with EDSS greater than three are described explicitly. The regression models described are easily implemented using standard software packages. Survival curves are obtained from the regression results using packages that support simple matrix calculation. We present and demonstrate our method on data collected at the Partners MS Center in Boston, MA. We apply our approach to progression defined by time to two consecutive visits with EDSS greater than three, and calculate crude (without covariates) and covariate-specific curves.
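The survival curves come from manipulating the estimated transition matrix: make the progression states absorbing and propagate the state distribution forward. A minimal sketch with a toy two-state matrix (invented numbers, not estimates from the Partners MS cohort):

```python
def survival_from_transitions(P, start, targets, horizon):
    """P(T > t): probability the chain has not yet hit `targets` by step t.
    Target states are made absorbing, then the state distribution is
    propagated one step at a time."""
    n = len(P)
    Q = [[1.0 if i == j else 0.0 for j in range(n)] if i in targets
         else list(P[i]) for i in range(n)]
    dist = [1.0 if i == start else 0.0 for i in range(n)]
    curve = []
    for _ in range(horizon):
        dist = [sum(dist[i] * Q[i][j] for i in range(n)) for j in range(n)]
        curve.append(1.0 - sum(dist[j] for j in targets))
    return curve
```

For compound endpoints such as "two consecutive visits with EDSS > 3", the same machinery applies after augmenting the state space to track how many consecutive high-EDSS visits have occurred.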
Chen, Xiaofeng; Song, Qiankun; Li, Zhongshan; Zhao, Zhenjiang; Liu, Yurong
2018-07-01
This paper addresses the problem of stability for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, the discrete-time analogs are obtained, which preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternion, homeomorphic mapping theorem, as well as Lyapunov theorem, some sufficient conditions on the existence, uniqueness, and global asymptotical stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively. Furthermore, a uniform sufficient condition on the existence, uniqueness, and global asymptotical stability of the equilibrium point is obtained for both continuous-time QVNNs and their discrete-time version. Finally, two numerical examples are provided to substantiate the effectiveness of the proposed results.
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.
2012-04-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected, using the maximum likelihood and Bayesian methods. We also applied an approximate method to reduce the computational load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than those of the conventional method, which assumes the post-damage feature distribution is known. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
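The sequential test this algorithm builds on is in the CUSUM family: accumulate the log-likelihood ratio between the post- and pre-damage feature distributions and declare damage when it crosses a threshold. Below is a minimal Gaussian mean-shift version with both distributions assumed known, which is exactly the assumption the paper relaxes; the means, variance, and threshold are invented for illustration.

```python
import random

def cusum_detect(data, mu0, mu1, sigma, threshold):
    """One-sided CUSUM: flag the first index where the cumulative
    log-likelihood ratio of post-change (mu1) vs pre-change (mu0)
    exceeds the threshold; return None if it never does."""
    s = 0.0
    for t, x in enumerate(data):
        # Gaussian log-likelihood ratio of the two hypotheses at x.
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        s = max(0.0, s + llr)   # reset at zero: forget pre-change evidence
        if s > threshold:
            return t
    return None

# Simulated feature stream: 100 pre-damage samples (mean 0), then a shift.
random.seed(3)
data = [random.gauss(0.0, 1.0) for _ in range(100)] + \
       [random.gauss(2.0, 1.0) for _ in range(100)]
alarm = cusum_detect(data, mu0=0.0, mu1=2.0, sigma=1.0, threshold=10.0)
```

Raising the threshold lowers the false alarm rate at the cost of a longer detection delay, the trade-off the paper's method optimizes while also estimating mu1 online.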
Method and apparatus for ultrasonic characterization through the thickness direction of a moving web
Jackson, Theodore; Hall, Maclin S.
2001-01-01
A method and apparatus for determining the caliper and/or the ultrasonic transit time through the thickness direction of a moving web of material using ultrasonic pulses generated by a rotatable wheel ultrasound apparatus. The apparatus includes a first liquid-filled tire and either a second liquid-filled tire forming a nip or a rotatable cylinder that supports a thin moving web of material such as a moving web of paper and forms a nip with the first liquid-filled tire. The components of ultrasonic transit time through the tires and fluid held within the tires may be resolved and separately employed to determine the separate contributions of the two tire thicknesses and the two fluid paths to the total path length that lies between two ultrasonic transducer surfaces contained within the tires in support of caliper measurements. The present invention provides the benefit of obtaining a transit time and caliper measurement at any point in time as a specimen passes through the nip of rotating tires and eliminates inaccuracies arising from nonuniform tire circumferential thickness by accurately retaining point-to-point specimen transit time and caliper variation information, rather than an average obtained through one or more tire rotations. Moreover, ultrasonic transit time through the thickness direction of a moving web may be determined independently of small variations in the wheel axle spacing, tire thickness, and liquid and tire temperatures.
NASA Astrophysics Data System (ADS)
Vu, Tinh Thi; Kiesel, Jens; Guse, Bjoern; Fohrer, Nicola
2017-04-01
The damming of rivers causes one of the most considerable impacts of our society on the riverine environment. More than 50% of the world's streams and rivers are currently impounded by dams before reaching the oceans. The construction of dams is of high importance in developing and emerging countries, i.e. for power generation and water storage. In the Vietnamese Vu Gia - Thu Bon Catchment (10,350 km2), about 23 dams were built during the last decades, storing approximately 2,156 billion m3 of water. The water impoundment in 10 dams in upstream regions amounts to 17% of the annual discharge volume. It is expected that impacts from these dams have altered the natural flow regime. However, up to now it has been unclear how the flow regime was altered. For this, it needs to be investigated at what point in time these changes became significant and detectable. Many approaches exist to detect changes in the stationarity or consistency of hydrological records using statistical analysis of time series for the pre- and post-dam periods. The objective of this study is to reliably detect and assess hydrologic shifts occurring in the discharge regime of an anthropogenically influenced river basin, mainly affected by the construction of dams. To achieve this, we applied nine available change-point tests to detect changes in mean, variance and median on the daily and annual discharge records at two main gauges of the basin. The tests yield conflicting results: the majority of tests found abrupt changes that coincide with the damming period, while others did not. To interpret how significant the changes in the discharge regime are, and to which different properties of the time series each test responded, we calculated Indicators of Hydrologic Alteration (IHAs) for the time periods before and after the detected change points.
From the results, we can deduce that the change point tests are influenced to different degrees by different indicator groups (magnitude, duration, frequency, etc.) and that, within the indicator groups, some indicators are more sensitive than others. For instance, extreme low flows, especially the 7- and 30-day minima and the mean minimum low flow, as well as the variability of monthly flow, are highly sensitive to most detected change points. Our study clearly shows that the detected change points depend on which test is chosen. For an objective assessment of change points, it is therefore necessary to explain the change points by calculating differences in IHAs. This analysis can be used to assess which change point method reacts to which type of hydrological change and, more importantly, it can be used to rank the change points according to their overall impact on the discharge regime. This leads to an improved evaluation of hydrologic change points caused by anthropogenic impacts.
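The nine tests used in the study are not reproduced here, but one widely used representative, Pettitt's nonparametric change-point test, illustrates how a single most probable change point in a discharge record is located (the test's asymptotic significance assessment is omitted, and the toy series is invented):

```python
def pettitt_change_point(x):
    """Pettitt's statistic: for each split t, U_t sums sign(x_j - x_i) over
    all pairs with i before and j at-or-after the split; the change point
    is the split maximizing |U_t|.  Direct O(n^3) evaluation for clarity."""
    n = len(x)

    def sgn(v):
        return (v > 0) - (v < 0)

    best_t, best_u = 1, -1
    for t in range(1, n):
        u = abs(sum(sgn(x[j] - x[i]) for i in range(t) for j in range(t, n)))
        if u > best_u:
            best_t, best_u = t, u
    return best_t, best_u

# Toy daily-flow record with a step drop, e.g. post-damming low flows.
flow = [10.0] * 50 + [4.0] * 50
t_hat, u_max = pettitt_change_point(flow)
```

Because it ranks rather than averages, the test responds to median shifts, one of the several change types (mean, variance, median) probed by the nine tests above.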
NASA Astrophysics Data System (ADS)
Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo
2017-06-01
Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Their monitoring is traditionally conducted by visual inspection, which is time consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced, embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant eigenvector based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of a sphere-approximating icosahedron. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions along a 4 km stretch of road. Six types of lamp pole and four types of road sign were selected as objects of interest. Ground truth validation showed that the overall accuracy of the ∼170 automatically recognized objects is approximately 95%.
The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. Remaining difficult cases are touching objects, like a lamp pole close to a tree.
Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data
NASA Astrophysics Data System (ADS)
Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.
2017-12-01
The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
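The least-cost-path solver at the heart of the method can be illustrated with Dijkstra's algorithm on a small cost grid; the specially tailored cost functions of the paper (e.g. favouring dark lineaments or high curvature between control points) are replaced here by an arbitrary example grid.

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra least-cost path on a 2-D cost grid (4-connected).
    Entering a cell adds that cell's cost, mimicking a tracer that follows
    low-cost pixels between two manually defined control points."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue   # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path by walking predecessors back to the start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```

In the real workflow the "grid" is an image or rasterized point-cloud attribute, and the interpreter only supplies sparse control points; the solver fills in the fracture or contact trace between them.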
Continuous Control Artificial Potential Function Methods and Optimal Control
2014-03-27
Abbreviations recovered from the front matter: CW, Clohessy-Wiltshire; CV, Chase Vehicle; TV, Target … Fragments of the text: … the Clohessy-Wiltshire equations (introduced in Section 3.5) until the time rate of change of potential became nonnegative. At that time, a thrust impulse was applied to make the … to eliminate oscillation around the goal point [8, 9]. Such a method is …
Yi, Jianbing; Yang, Xuan; Chen, Guoliang; Li, Yan-Ran
2015-10-01
Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address this issue, an innovative lung motion estimation model based on robust point matching is proposed in this paper. The robust point matching algorithm uses dynamic point shifting to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at the points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and the shifted target point set are used to estimate the transformation function between the source image and the target image. The performance of the authors' method is evaluated on two publicly available lung datasets, DIR-lab and POPI-model. For target registration errors computed on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation for the authors' method are 1.11 and 1.11 mm, compared with 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions.
For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset, with 300 landmark points in each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors' method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors' method ranks 24th of 39. According to the index of the maximum shear stretch, the authors' method is also efficient in describing the discontinuous motion at the lung boundaries. By establishing the correspondence of the landmark points in the source phase and the other target phases, combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors' method, with consideration of sliding conditions, can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that temporal constraints on the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.
GPU surface extraction using the closest point embedding
NASA Astrophysics Data System (ADS)
Kim, Mark; Hansen, Charles
2015-01-01
Isosurface extraction is a fundamental technique used for both surface reconstruction and mesh generation. One method to extract well-formed isosurfaces is a particle system; unfortunately, particle systems can be slow. In this paper, we introduce an enhanced parallel particle system that uses the closest point embedding as the surface representation to speed up the particle system for isosurface extraction. The closest point embedding is used in the Closest Point Method (CPM), a technique that uses a standard three-dimensional numerical PDE solver on two-dimensional embedded surfaces. To take full advantage of the closest point embedding, it is coupled with a Barnes-Hut tree code on the GPU. This new technique produces well-formed, conformal unstructured triangular and tetrahedral meshes from labeled multi-material volume datasets. Further, this new parallel implementation of the particle system is faster than any known method for conformal multi-material mesh extraction. The resulting speed-ups can reduce the time from labeled data to mesh from hours to minutes, benefiting users, such as bioengineers, who employ triangular and tetrahedral meshes.
Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows
NASA Technical Reports Server (NTRS)
Palmer, Grant; Venkatapathy, Ethiraj
1995-01-01
Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel, are used to compute nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point in its descent trajectory. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight order of magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time as compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40 the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.
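Of the three solution algorithms, the Gauss-Seidel sweep is the simplest to show in isolation. The sketch below is the plain linear-system version on a toy diagonally dominant system, not the lower-upper symmetric variant applied to the coupled flow equations:

```python
def gauss_seidel(A, b, iters=50):
    """Gauss-Seidel iteration for A x = b: sweep the unknowns in order,
    always using the freshest values (in-place updates), which is what
    distinguishes it from the Jacobi iteration."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

The symmetric variant used in the paper alternates forward and backward sweeps; for stiff chemistry the "point implicit" alternative instead treats only the local source-term Jacobian implicitly at each grid point.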
Improving TWSTFT short-term stability by network time transfer.
Tseng, Wen-Hung; Lin, Shinn-Yan; Feng, Kai-Ming; Fujieda, M; Maeno, H
2010-01-01
Two-way satellite time and frequency transfer (TWSTFT) is one of the major techniques for comparing the atomic time scales of timing laboratories. As more and more TWSTFT measurements have been performed, the large number of point-to-point two-way time transfer links has grown into a complex network. For future improvement of TWSTFT performance, it is important to reduce the measurement noise of the TWSTFT results. One method is TWSTFT network time transfer. The Asia-Pacific network is an exceptional case of simultaneous TWSTFT measurements. Some indirect links through relay stations show better short-term stabilities than the direct link because the measurement noise may be neutralized in a simultaneous measurement network. In this paper, the authors propose a feasible method to improve short-term stability by combining the direct and indirect links in the network. Through comparisons of time deviation (TDEV), the results of network time transfer exhibit clearly improved short-term stabilities. For the links used to compare two hydrogen masers, the average gain in TDEV at an averaging time of 1 h is 22%. As TWSTFT short-term stability can be improved by network time transfer, the network may allow a larger number of simultaneously transmitting stations.
Particle Swarm Optimization of Low-Thrust, Geocentric-to-Halo-Orbit Transfers
NASA Astrophysics Data System (ADS)
Abraham, Andrew J.
Missions to Lagrange points are becoming increasingly popular amongst spacecraft mission planners. Lagrange points are locations in space where the gravity force from two bodies, and the centrifugal force acting on a third body, cancel. To date, all spacecraft that have visited a Lagrange point have done so using high-thrust, chemical propulsion. Due to the increasing availability of low-thrust (high efficiency) propulsive devices, and their increasing capability in terms of fuel efficiency and instantaneous thrust, it has now become possible for a spacecraft to reach a Lagrange point orbit without the aid of chemical propellant. While at any given time there are many paths for a low-thrust trajectory to take, only one is optimal. The traditional approach to spacecraft trajectory optimization utilizes some form of gradient-based algorithm. While these algorithms offer numerous advantages, they also have a few significant shortcomings. The three most significant shortcomings are: (1) an initial guess solution is required to initialize the algorithm, (2) the radius of convergence can be quite small and can allow the algorithm to become trapped in local minima, and (3) gradient information is not always accessible, nor always trustworthy, for a given problem. To avoid these problems, this dissertation is focused on optimizing a low-thrust transfer trajectory from a geocentric orbit to an Earth-Moon, L1, Lagrange point orbit using the method of Particle Swarm Optimization (PSO). The PSO method is an evolutionary heuristic that was originally written to model birds swarming to locate hidden food sources. This PSO method will enable the exploration of the invariant stable manifold of the target Lagrange point orbit in an effort to optimize the spacecraft's low-thrust trajectory. Examples of these optimized trajectories are presented and contrasted with those found using traditional, gradient-based approaches.
In summary, the results of this dissertation find that the PSO method does, indeed, successfully optimize the low-thrust trajectory transfer problem without the need for initial guessing. Furthermore, a two-degree-of-freedom PSO problem formulation significantly outperformed a one-degree-of-freedom formulation by at least an order of magnitude, in terms of CPU time. Finally, the PSO method is also used to solve a traditional, two-burn, impulsive transfer to a Lagrange point orbit using a hybrid optimization algorithm that incorporates a gradient-based shooting algorithm as a pre-optimizer. Surprisingly, the results of this study show that "fast" transfers outperform "slow" transfers in terms of both Δv and time of flight.
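A bare-bones PSO makes clear why no initial guess is needed: the swarm is seeded randomly over the search box, and each particle is pulled toward its own and the swarm's best-known positions. The hyperparameters below are common textbook defaults, and a simple quadratic test function stands in for the (far more expensive) trajectory cost; none of this reproduces the dissertation's problem setup.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer minimizing f over a box."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best position
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

Because the update uses only function values, the method sidesteps all three gradient-based shortcomings listed above, at the price of many more cost-function evaluations.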
NASA Technical Reports Server (NTRS)
Banyukevich, A.; Ziolkovski, K.
1975-01-01
A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.
Empirical mode decomposition and long-range correlation analysis of sunspot time series
NASA Astrophysics Data System (ADS)
Zhou, Yu; Leung, Yee
2010-12-01
Sunspots, which are the best known and most variable features of the solar surface, affect our planet in many ways. The number of sunspots during a period of time is highly variable and arouses strong research interest. When multifractal detrended fluctuation analysis (MF-DFA) is employed to study the fractal properties and long-range correlation of the sunspot series, some spurious crossover points might appear because of the periodic and quasi-periodic trends in the series. However, many cycles of solar activity are reflected in the sunspot time series. The 11-year cycle is perhaps the most famous cycle of sunspot activity. These cycles pose problems for the investigation of the scaling behavior of sunspot time series. Using different methods to handle the 11-year cycle generally creates totally different results. Using MF-DFA, Movahed and co-workers employed Fourier truncation to deal with the 11-year cycle and found that the series is long-range anti-correlated with a Hurst exponent, H, of about 0.12. However, Hu and co-workers proposed an adaptive detrending method for the MF-DFA and discovered long-range correlation characterized by H≈0.74. In an attempt to get to the bottom of the problem, in the present paper empirical mode decomposition (EMD), a data-driven adaptive method, is applied first to extract the components with different dominant frequencies. MF-DFA is then employed to study the long-range correlation of the sunspot time series under the influence of these components. On removing the effects of these periods, the natural long-range correlation of the sunspot time series can be revealed. With the removal of the 11-year cycle, a crossover point located at around 60 months is discovered to be a reasonable point separating two different time-scale ranges, H≈0.72 and H≈1.49. On removing all cycles longer than 11 years, we have H≈0.69 and H≈0.28.
The three cycle-removing methods—Fourier truncation, adaptive detrending and the proposed EMD-based method—are further compared, and possible reasons for the different results are given. Two numerical experiments are designed for quantitatively evaluating the performances of these three methods in removing periodic trends with inexact/exact cycles and in detecting the possible crossover points.
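As a concrete illustration of the detrended fluctuation analysis underlying MF-DFA, here is a minimal monofractal DFA sketch (not the authors' multifractal code); the white-noise test series, scale list, and seed are invented for the example.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Monofractal DFA: slope of log F(s) versus log s."""
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    fluct = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        # detrend each segment with a local linear fit, collect residuals
        res = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        fluct.append(np.sqrt(np.mean(np.square(res))))
    return float(np.polyfit(np.log(scales), np.log(fluct), 1)[0])

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)                  # uncorrelated noise
h = dfa_exponent(white, [16, 32, 64, 128, 256])
print(round(h, 1))  # close to 0.5, the hallmark of no long-range correlation
```

A long-range correlated series would instead give a slope well above 0.5, which is exactly the quantity the competing detrending schemes disagree about.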
Simultaneous measurement of passage through the restriction point and MCM loading in single cells
Håland, T. W.; Boye, E.; Stokke, T.; Grallert, B.; Syljuåsen, R. G.
2015-01-01
Passage through the Retinoblastoma protein (RB1)-dependent restriction point and the loading of minichromosome maintenance proteins (MCMs) are two crucial events in G1-phase that help maintain genome integrity. Deregulation of these processes can cause uncontrolled proliferation and cancer development. Both events have been extensively characterized individually, but their relative timing and inter-dependence remain less clear. Here, we describe a novel method to simultaneously measure MCM loading and passage through the restriction point. We exploit that the RB1 protein is anchored in G1-phase but is released when hyper-phosphorylated at the restriction point. After extracting cells with salt and detergent before fixation we can simultaneously measure, by flow cytometry, the loading of MCMs onto chromatin and RB1 binding to determine the order of the two events in individual cells. We have used this method to examine the relative timing of the two events in human cells. Whereas in BJ fibroblasts released from G0-phase MCM loading started mainly after the restriction point, in a significant fraction of exponentially growing BJ and U2OS osteosarcoma cells MCMs were loaded in G1-phase with RB1 anchored, demonstrating that MCM loading can also start before the restriction point. These results were supported by measurements in synchronized U2OS cells. PMID:26250117
Dialdestoro, Kevin; Sibbesen, Jonas Andreas; Maretty, Lasse; Raghwani, Jayna; Gall, Astrid; Kellam, Paul; Pybus, Oliver G.; Hein, Jotun; Jenkins, Paul A.
2016-01-01
Human immunodeficiency virus (HIV) is a rapidly evolving pathogen that causes chronic infections, so genetic diversity within a single infection can be very high. High-throughput “deep” sequencing can now measure this diversity in unprecedented detail, particularly since it can be performed at different time points during an infection, and this offers a potentially powerful way to infer the evolutionary dynamics of the intrahost viral population. However, population genomic inference from HIV sequence data is challenging because of high rates of mutation and recombination, rapid demographic changes, and ongoing selective pressures. In this article we develop a new method for inference using HIV deep sequencing data, using an approach based on importance sampling of ancestral recombination graphs under a multilocus coalescent model. The approach further extends recent progress in the approximation of so-called conditional sampling distributions, a quantity of key interest when approximating coalescent likelihoods. The chief novelties of our method are that it is able to infer rates of recombination and mutation, as well as the effective population size, while handling sampling over different time points and missing data without extra computational difficulty. We apply our method to a data set of HIV-1, in which several hundred sequences were obtained from an infected individual at seven time points over 2 years. We find mutation rate and effective population size estimates to be comparable to those produced by the software BEAST. Additionally, our method is able to produce local recombination rate estimates. The software underlying our method, Coalescenator, is freely available. PMID:26857628
Using Modern Digital Photography Tools to Guide Management Decisions on Forested Land
ERIC Educational Resources Information Center
Craft, Brandon; Barlow, Rebecca; Kush, John; Hemard, Charles
2016-01-01
Forestland management depends on assessing changes that occur over time. Long-term photo point monitoring is a low-cost method for documenting these changes. Using forestry as an example, this article highlights the idea that long-term photo point monitoring can be used to improve many types of land management decision making. Guidance on…
Optimization of cutting parameters for machining time in turning process
NASA Astrophysics Data System (ADS)
Mavliutov, A. R.; Zlotnikov, E. G.
2018-03-01
This paper describes the most effective methods for nonlinear constrained optimization of cutting parameters in the turning process. Among them are the Linearization Programming Method with the Dual-Simplex algorithm, the Interior Point method, and the Augmented Lagrangian Genetic Algorithm (ALGA). Each of them is tested on an actual example: the minimization of machining time in the turning process. The computation was conducted in the MATLAB environment. The comparative results obtained from the application of these methods show that the optimal values of the linearized objective and the original function are the same, and that ALGA gives sufficiently accurate values; however, when the algorithm uses the hybrid function with the Interior Point algorithm, the resulting values have the maximal accuracy.
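The interior-point approach tested above can be illustrated with a toy log-barrier iteration; the objective (machining time falling with cutting speed, T(v) = 100/v) and the speed limit v ≤ 4 are invented for the example, not taken from the paper.

```python
def inner_solve(mu, v_max=4.0):
    """Minimize 100/v - mu*log(v_max - v) over 0 < v < v_max.
    Stationarity gives mu*v**2 - 100*(v_max - v) = 0; bisect on that."""
    lo, hi = 1e-6, v_max - 1e-9
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mu * mid ** 2 - 100.0 * (v_max - mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# interior-point outer loop: shrink the barrier weight toward zero;
# the iterates approach the constrained optimum from inside the feasible set
v = 0.0
for mu in [10.0 ** (-i) for i in range(8)]:
    v = inner_solve(mu)
print(round(v, 3))  # → 4.0: the speed limit becomes the active constraint
```

The barrier keeps every iterate strictly feasible, which is the defining feature of interior-point methods as opposed to active-set or penalty schemes.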
An analytic data analysis method for oscillatory slug tests.
Chen, Chia-Shyun
2006-01-01
An analytical data analysis method is developed for slug tests in partially penetrating wells in confined or unconfined aquifers of high hydraulic conductivity. As adapted from the van der Kamp method, the determination of the hydraulic conductivity is based on the occurrence times and displacements of the extreme points measured from the oscillatory data and their theoretical counterparts available in the literature. This method is applied to two sets of slug test response data presented by Butler et al.: one set shows slow damping with seven discernible extreme points, and the other shows rapid damping with three extreme points. The estimates of the hydraulic conductivity obtained by the analytic method are in good agreement with those determined by an available curve-matching technique.
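Extracting occurrence times and displacements of extreme points from an oscillatory record is straightforward to sketch; the damped cosine below is synthetic, with the damping chosen so the log decrement per half-cycle is 0.15 by construction (the kind of quantity a van der Kamp-type analysis converts into hydraulic conductivity).

```python
import numpy as np

# synthetic underdamped slug-test record: damping 0.3 s^-1, frequency 1 Hz
t = np.linspace(0.0, 10.0, 2001)
w = np.exp(-0.3 * t) * np.cos(2 * np.pi * t)

# extreme points: sign changes of the discrete first difference
d = np.diff(w)
idx = np.where(np.sign(d[1:]) != np.sign(d[:-1]))[0] + 1
times, disps = t[idx], w[idx]

# successive extreme-point amplitudes decay by exp(-0.15) per half-cycle,
# so the mean log ratio recovers the damping rate
decrement = float(np.mean(np.log(np.abs(disps[:-1] / disps[1:]))))
print(len(idx), round(decrement, 2))  # about 20 extrema, decrement near 0.15
```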
NASA Astrophysics Data System (ADS)
Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.
2017-12-01
In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic information (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is seismic event detection in a long-term sustained time series record. Because the time-varying signal-to-noise ratio (SNR) of a long-term record and the uneven energy distribution of seismic event waveforms increase the difficulty of automatic seismic detection, in this work an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied. This algorithm, called sequentially discounting AR learning (SDAR), can identify effective seismic events in the time series through Change Point Detection (CPD) on the seismic record. In this method, an anomaly signal (seismic event) is treated as a change point in the time series (seismic record): the statistical model of the signal changes in the neighborhood of an event point because of the seismic event occurrence. In other words, SDAR aims to find the statistical irregularities of the record through CPD. SDAR has three advantages. 1. Anti-noise ability: SDAR does not use waveform attributes (such as amplitude, energy, or polarization) for signal detection, so it is an appropriate technique for low-SNR data. 2. Real-time estimation: when new data appear in the record, the probability distribution models are automatically updated by SDAR for online processing. 3. Discounting property: SDAR introduces a discounting parameter to decrease the influence of past statistics on future data, which makes SDAR a robust algorithm for non-stationary signal processing.
With these three advantages, the SDAR method can handle non-stationary, time-varying long-term series and achieve real-time monitoring. Finally, we apply SDAR to a synthetic model and to Tomakomai Ocean Bottom Cable (OBC) baseline data to demonstrate the feasibility and advantages of our method.
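The SDAR idea (score each incoming sample against a discounted online model, so change points stand out as statistical irregularities) can be sketched as follows. This is a simplified Gaussian AR(1) stand-in rather than the full SDAR algorithm; the discounting factor r, the synthetic record, and the mean shift at index 300 are all invented for illustration.

```python
import numpy as np

def sdar_scores(x, r=0.05):
    """Anomaly score per sample from a discounted online AR(1) model:
    the negative log-likelihood of each new sample under parameters
    fitted to the past with exponential forgetting at rate r."""
    mu, c0, c1, var = float(x[0]), 1.0, 0.0, 1.0
    scores = np.zeros(len(x))
    for t in range(1, len(x)):
        a = c1 / c0 if c0 > 1e-12 else 0.0     # AR(1) coefficient
        pred = mu + a * (x[t - 1] - mu)        # one-step prediction
        err = x[t] - pred
        scores[t] = 0.5 * (np.log(2 * np.pi * var) + err ** 2 / var)
        # sequentially discounting updates (old data fade at rate r)
        mu = (1 - r) * mu + r * x[t]
        c0 = (1 - r) * c0 + r * (x[t] - mu) ** 2
        c1 = (1 - r) * c1 + r * (x[t] - mu) * (x[t - 1] - mu)
        var = max((1 - r) * var + r * err ** 2, 1e-12)
    return scores

rng = np.random.default_rng(1)
record = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])
change = int(np.argmax(sdar_scores(record)))
print(change)  # the score spikes at the mean shift near index 300
```

Because no waveform attribute enters the score, only the statistical fit, this style of detector keeps working when the SNR drifts over a long record.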
Validity of Willingness to Pay Measures under Preference Uncertainty.
Braun, Carola; Rehdanz, Katrin; Schmidt, Ulrich
2016-01-01
Recent studies in the marketing literature developed a new method for eliciting willingness to pay (WTP) with an open-ended elicitation format: the Range-WTP method. In contrast to the traditional approach of eliciting WTP as a single value (Point-WTP), Range-WTP explicitly allows for preference uncertainty in responses. The aim of this paper is to apply Range-WTP to the domain of contingent valuation and to test for its theoretical validity and robustness in comparison to the Point-WTP. Using data from two novel large-scale surveys on the perception of solar radiation management (SRM), a little-known technique for counteracting climate change, we compare the performance of both methods in the field. In addition to the theoretical validity (i.e. the degree to which WTP values are consistent with theoretical expectations), we analyse the test-retest reliability and stability of our results over time. Our evidence suggests that the Range-WTP method clearly outperforms the Point-WTP method. PMID:27096163
Thickness noise of a propeller and its relation to blade sweep
NASA Astrophysics Data System (ADS)
Amiet, R. K.
1988-07-01
Linear acoustic theory is used to determine the thickness noise produced by a supersonic propeller with sharp leading and trailing edges. The method reveals details of the calculated waveform. Abrupt changes of slope in the pressure-time waveform, produced by singular points entering or leaving the blade tip, are pointed out. It is found that the behavior of the pressure-time waveform is closely related to changes in the retarded rotor shape. The results indicate that logarithmic singularities in the waveform are produced by regions on the blade edges that move toward the observer at sonic speed, with the edge normal to the line joining the source point and the observer.
Effect of distance-related heterogeneity on population size estimates from point counts
Efford, Murray G.; Dawson, Deanna K.
2009-01-01
Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; values of sigma inferred from published studies were often 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
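The distance-related heterogeneity at issue can be reproduced in a few lines of simulation; the half-normal detection function with g(0) = 1 and four occasions follows the abstract, while the bird count, sigma/w = 0.5, and replicate number are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def point_count(n_birds=100, w=100.0, sigma=50.0, occasions=4):
    """One fixed-radius point count with half-normal detection by distance."""
    d = w * np.sqrt(rng.uniform(size=n_birds))   # uniform positions in a disc
    p = np.exp(-d ** 2 / (2 * sigma ** 2))       # per-occasion detection, g(0) = 1
    p_ever = 1.0 - (1.0 - p) ** occasions        # detected on at least one occasion
    return int(np.sum(rng.uniform(size=n_birds) < p_ever))

counts = np.array([point_count() for _ in range(500)])
frac = counts.mean() / 100
print(round(frac, 2))  # well below 1: distant birds are routinely missed
```

The systematic shortfall of the raw count relative to the true population is the bias that the methods compared in the abstract must correct for.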
Platelets to rings: Influence of sodium dodecyl sulfate on Zn-Al layered double hydroxide morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Ceren; Unal, Ugur; Koc University, Chemistry Department, Rumelifeneri yolu, Sariyer 34450, Istanbul
2012-03-15
In the current study, the influence of sodium dodecyl sulfate (SDS) on the crystallization of Zn-Al layered double hydroxide (LDH) was investigated. Depending on the SDS concentration, coral-like and, for the first time, ring-like morphologies were obtained in a urea-hydrolysis method. It was revealed that the surfactant level in the starting solution plays an important role in the morphology: a surfactant concentration equal to or above the anion exchange capacity of the LDH is influential in creating different morphologies. Another important parameter was the critical micelle concentration (CMC) of the surfactant; surfactant concentrations well above the CMC value resulted in ring-like structures. The crystallization mechanism was discussed. Graphical abstract: dependence of Zn-Al LDH morphology on SDS concentration. Highlights: In-situ intercalation of SDS in Zn-Al LDH was achieved via a urea hydrolysis method. The morphology of Zn-Al LDH intercalated with SDS depended on the SDS concentration. A ring-like morphology for SDS-intercalated Zn-Al LDH was obtained for the first time. The growth mechanism was discussed. Template-assisted growth of Zn-Al LDH was proposed.
A new method for automated discontinuity trace mapping on rock mass 3D surface model
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Chen, Jianqin; Zhu, Hehua
2016-04-01
This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
NASA Technical Reports Server (NTRS)
Palmer, Grant; Venkatapathy, Ethiraj
1993-01-01
Three solution algorithms, explicit underrelaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in 1/3 to 1/2 the Cray C-90 computer time as compared to the point implicit and explicit underrelaxation methods. The explicit underrelaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.
NASA Astrophysics Data System (ADS)
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step updating rule for the design point. This part finishes after a small number of samples have been generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a proposed effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are shown.
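The importance-sampling half of such an algorithm can be illustrated on a one-dimensional toy limit state. Recentering the proposal at the design point is standard importance sampling, not the paper's two-step updating rule, and the limit state g(x) = 5 - x is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

# toy limit state g(x) = 5 - x with x ~ N(0, 1): failure when x > 5.
# crude MCS would need on the order of 10^8 samples for this rare event;
# importance sampling recenters the proposal at the design point x* = 5.
N = 20_000
z = rng.normal(5.0, 1.0, N)                                      # proposal draws
weights = np.exp(-0.5 * z ** 2) / np.exp(-0.5 * (z - 5.0) ** 2)  # f(z) / h(z)
pf = float(np.mean((z > 5.0) * weights))
print(f"{pf:.2e}")  # close to the exact tail probability 2.87e-07
```

Half the proposal draws land in the failure region, so the likelihood-ratio weights do the work that sheer sample volume would do in crude MCS.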
Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.
Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun
2016-06-17
Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, the isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between the trajectory data and the road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method.
A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data. PMID:27322279
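The intensity-smoothing step can be pictured with a sliding median; this fixed-width filter is a simple stand-in for the paper's dynamic window median filter, and the scan-line intensity values are made up.

```python
import numpy as np

def median_smooth(intensity, half=2):
    """Sliding-window median filter over scan-line intensity values."""
    out = np.empty(len(intensity))
    for i in range(len(intensity)):
        lo, hi = max(0, i - half), min(len(intensity), i + half + 1)
        out[i] = np.median(intensity[lo:hi])
    return out

# a one-point intensity spike (noise) vanishes, while a wider high-intensity
# plateau (a real road marking) survives for the later edge-detection step
line = np.array([10, 10, 10, 200, 10, 10, 80, 80, 80, 80, 10, 10], dtype=float)
sm = median_smooth(line)
print(sm[3], sm[7])  # → 10.0 80.0
```

This selectivity, killing isolated outliers while preserving edges of genuine plateaus, is exactly why a median rather than a mean filter precedes the EDEC edge detection.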
Time-series analysis of foreign exchange rates using time-dependent pattern entropy
NASA Astrophysics Data System (ADS)
Ishizaki, Ryuji; Inoue, Masayoshi
2013-08-01
Time-dependent pattern entropy is a method that reduces variations to binary symbolic dynamics and considers the pattern of symbols in a sliding temporal window. We use this method to analyze the instability of daily variations in foreign exchange rates, in particular, the dollar-yen rate. The time-dependent pattern entropy of the dollar-yen rate was found to be high in the following periods: before and after the turning points of the yen from strong to weak or from weak to strong, and the period after the Lehman shock.
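The symbolization-plus-sliding-window recipe is easy to make concrete. The sketch below is one plausible reading of the method, with the pattern length k, the window length, and the two test series invented for illustration.

```python
import numpy as np
from collections import Counter

def pattern_entropy(series, k=3, window=50):
    """Time-dependent pattern entropy: map changes to binary symbols
    (1 = up, 0 = down), then compute the Shannon entropy of length-k
    symbol patterns inside a sliding window."""
    sym = (np.diff(series) > 0).astype(int)
    pats = [tuple(sym[i:i + k]) for i in range(len(sym) - k + 1)]
    out = []
    for i in range(len(pats) - window + 1):
        freq = np.array(list(Counter(pats[i:i + window]).values())) / window
        out.append(float(-np.sum(freq * np.log2(freq))))
    return np.array(out)

calm = np.array([(-1.0) ** i for i in range(200)])   # strictly alternating
rng = np.random.default_rng(2)
wild = np.cumsum(rng.standard_normal(200))           # random-walk "rate"
# the calm series locks into two patterns (1 bit); noise nears the 3-bit maximum
print(pattern_entropy(calm).max(), round(pattern_entropy(wild).mean(), 1))
```

High entropy marks windows where the up/down sequence is unpredictable, which is the instability signature the abstract associates with turning points of the yen.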
NASA Astrophysics Data System (ADS)
Brandt, C.; Thakur, S. C.; Tynan, G. R.
2016-04-01
Complexities of flow patterns in the azimuthal cross-section of a cylindrical magnetized helicon plasma and the corresponding plasma dynamics are investigated by means of a novel scheme for time delay estimation velocimetry. The advantage of this introduced method is the capability of calculating the time-averaged 2D velocity fields of propagating wave-like structures and patterns in complex spatiotemporal data. It is able to distinguish and visualize the details of simultaneously present superimposed entangled dynamics and it can be applied to fluid-like systems exhibiting frequently repeating patterns (e.g., waves in plasmas, waves in fluids, dynamics in planetary atmospheres, etc.). The velocity calculations are based on time delay estimation obtained from cross-phase analysis of time series. Each velocity vector is unambiguously calculated from three time series measured at three different non-collinear spatial points. This method, when applied to fast imaging, has been crucial to understand the rich plasma dynamics in the azimuthal cross-section of a cylindrical linear magnetized helicon plasma. The capabilities and the limitations of this velocimetry method are discussed and demonstrated for two completely different plasma regimes, i.e., for quasi-coherent wave dynamics and for complex broadband wave dynamics involving simultaneously present multiple instabilities.
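The core cross-phase step (one time delay from a pair of time series) can be sketched as follows; the full method combines three non-collinear probes into one velocity vector, which is omitted here, and the signal frequency, sampling rate, and delay are invented.

```python
import numpy as np

# two probe signals: the second lags the first by a known delay
fs, f0, delay = 1000.0, 25.0, 0.004
t = np.arange(0.0, 1.0, 1.0 / fs)
a = np.sin(2 * np.pi * f0 * t)
b = np.sin(2 * np.pi * f0 * (t - delay))

# cross-phase at the dominant frequency bin yields the time delay
A, B = np.fft.rfft(a), np.fft.rfft(b)
k = int(np.argmax(np.abs(A)))              # dominant bin (25 Hz)
phase = np.angle(A[k] * np.conj(B[k]))     # cross-spectrum phase
est = phase / (2 * np.pi * f0)
print(round(est * 1000, 2))  # → 4.0: the imposed 4 ms delay is recovered
```

Dividing the propagation distance between probes by such a delay gives one component of the wave velocity, repeated per frequency band to separate superimposed dynamics.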
Klingbeil, Brian T; Willig, Michael R
2015-01-01
Effective monitoring programs for biodiversity are needed to assess trends in biodiversity and evaluate the consequences of management. This is particularly true for birds and faunas that occupy interior forest and other areas of low human population density, as these are frequently under-sampled compared to other habitats. For birds, Autonomous Recording Units (ARUs) have been proposed as a supplement or alternative to point counts made by human observers to enhance monitoring efforts. We employed two strategies (i.e., simultaneous-collection and same-season) to compare point count and ARU methods for quantifying species richness and composition of birds in temperate interior forests. The simultaneous-collection strategy compares surveys by ARUs and point counts, with methods matched in time, location, and survey duration such that the person and machine simultaneously collect data. The same-season strategy compares surveys from ARUs and point counts conducted at the same locations throughout the breeding season, but methods differ in the number, duration, and frequency of surveys. This second strategy more closely follows the ways in which monitoring programs are likely to be implemented. Site-specific estimates of richness (but not species composition) differed between methods; however, the nature of the relationship was dependent on the assessment strategy. Estimates of richness from point counts were greater than estimates from ARUs in the simultaneous-collection strategy. Woodpeckers in particular, were less frequently identified from ARUs than point counts with this strategy. Conversely, estimates of richness were lower from point counts than ARUs in the same-season strategy. Moreover, in the same-season strategy, ARUs detected the occurrence of passerines at a higher frequency than did point counts. Differences between ARU and point count methods were only detected in site-level comparisons. 
Importantly, both methods provide similar estimates of species richness and composition for the region. Consequently, if single visits to sites or short-term monitoring are the goal, point counts will likely perform better than ARUs, especially if species are rare or vocalize infrequently. However, if seasonal or annual monitoring of sites is the goal, ARUs offer a viable alternative to standard point-count methods, especially in the context of large-scale or long-term monitoring of temperate forest birds.
Applied Time Domain Stability Margin Assessment for Nonlinear Time-Varying Systems
NASA Technical Reports Server (NTRS)
Kiefer, J. M.; Johnson, M. D.; Wall, J. H.; Dominguez, A.
2016-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation. This technique was implemented by using the Stability Aerospace Vehicle Analysis Tool (SAVANT) computer simulation to evaluate the stability of the SLS system with the Adaptive Augmenting Control (AAC) active and inactive along its ascent trajectory. The gains for which the vehicle maintains apparent time-domain stability define the gain margins, and the time delay similarly defines the phase margin. This method of extracting the control stability margins from the time-domain simulation is relatively straightforward and the resultant margins can be compared to the linearized system results. The sections herein describe the techniques employed to extract the time-domain margins, compare the results between these nonlinear and the linear methods, and provide explanations for observed discrepancies. The SLS ascent trajectory was simulated with SAVANT and the classical linear stability margins were evaluated at one second intervals.
The linear analysis was performed with the AAC algorithm disabled to attain baseline stability margins. At each time point, the system was linearized about the current operating point using Simulink's built-in solver. Each linearized system in time was evaluated for its rigid-body gain margin (high frequency gain margin), rigid-body phase margin, and aero gain margin (low frequency gain margin) for each control axis. Using the stability margins derived from the baseline linearization approach, the time domain derived stability margins were determined by executing time domain simulations in which axis-specific incremental gain and phase adjustments were made to the nominal system about the expected neutral stability point at specific flight times. The baseline stability margin time histories were used to shift the system gain to various values around the zero margin point such that a precise amount of expected gain margin was maintained throughout flight. When assessing the gain margins, the gain was applied starting at the time point under consideration, thereafter following the variation in the margin found in the linear analysis. When assessing the rigid-body phase margin, a constant time delay was applied to the system starting at the time point under consideration. If the baseline stability margins were correctly determined via the linear analysis, the time domain simulation results should contain unstable behavior at certain gain and phase values. Examples will be shown from repeated simulations with variable added gain and phase lag. Faithfulness of margins calculated from the linear analysis to the nonlinear system will be demonstrated.
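The "increase the gain until the time response diverges" procedure can be shown on a toy delayed feedback loop (not the SLS model): for x'(t) = -k x(t - tau), linear theory places the stability boundary at k = pi/(2 tau), about 15.7 when tau = 0.1, so a time-domain gain scan should go unstable near that value. The integrator, thresholds, and scan grid are invented for the sketch.

```python
import numpy as np

def diverges(k, tau=0.1, dt=5e-4, t_end=40.0):
    """Euler-simulate the delayed loop x'(t) = -k * x(t - tau)."""
    n = int(tau / dt)
    buf = [1.0] * (n + 1)            # history: x = 1 for t <= 0
    x = 1.0
    for _ in range(int(t_end / dt)):
        x += dt * (-k * buf.pop(0))  # feedback acts on the delayed state
        buf.append(x)
        if abs(x) > 1e6:             # response has blown up
            return True
    return False

# scan the loop gain upward until the time-domain response goes unstable;
# the first divergent gain approximates the stability boundary (~15.7 here)
margin = float(next(k for k in np.arange(1.0, 30.0, 0.5) if diverges(k)))
print(margin)
```

As in the SLS study, the scan's agreement with the analytically known boundary is what validates the time-domain simulation against the linear analysis.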
Height Accuracy Based on Different RTK GPS Methods for Ultralight Aircraft Images
NASA Astrophysics Data System (ADS)
Tahar, K. N.
2015-08-01
Height accuracy is one of the important elements in surveying work, especially for control point establishment, which requires accurate measurement. There are many methods that can be used to acquire height values, such as tacheometry, leveling, and the Global Positioning System (GPS). This study investigated the effect on height accuracy of two different observation modes: single-based and network-based GPS methods. The GPS network used is the local network, namely the Iskandar network. This network was set up to provide real-time correction data to the rover GPS station, while the single-based method relies on a known GPS station. Nine ground control points were established evenly across the study area. Each ground control point was observed for about two and ten minutes. It was found that the height accuracy differs for each observation.
Time-Domain Filtering for Spatial Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. David
1997-01-01
An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
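A causal exponential time filter of the kind this temporal-LES approach relies on can be sketched as a first-order recursion; the time step and filter width below are arbitrary illustrative values, not those of the jet simulation.

```python
import math

def exp_time_filter(signal, dt, delta):
    """Causal exponential time filter,
       ubar(t) = (1/delta) * integral_{-inf}^{t} u(s) exp(-(t - s)/delta) ds,
    discretized as a first-order recursion with filter width delta."""
    alpha = dt / (delta + dt)          # update coefficient of the recursion
    ubar = [signal[0]]
    for u in signal[1:]:
        ubar.append(ubar[-1] + alpha * (u - ubar[-1]))
    return ubar

# A constant signal passes unchanged; a rapid oscillation is attenuated.
dt, delta = 0.01, 0.5
t = [i * dt for i in range(2000)]
slow = [1.0 for _ in t]
fast = [math.sin(200.0 * x) for x in t]
slow_f = exp_time_filter(slow, dt, delta)
fast_f = exp_time_filter(fast, dt, delta)
```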
Heuristic Modeling for TRMM Lifetime Predictions
NASA Technical Reports Server (NTRS)
Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.
1996-01-01
Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use by a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful to such missions as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
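The look-up-table-plus-engine-model idea can be sketched as follows. The table values, spacecraft mass, delta-v per maneuver, and specific impulse are hypothetical placeholders, not TRMM data; the fuel model is the simple impulsive-burn relation m·Δv/(Isp·g0).

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical maneuver-frequency table (maneuvers/month), indexed by
# ballistic coefficient (rows) and solar flux index (columns).
bc_axis   = np.array([ 50.0, 100.0, 150.0])          # kg/m^2
flux_axis = np.array([ 70.0, 150.0, 230.0])          # solar flux units
freq_table = np.array([[ 8.0, 14.0, 22.0],
                       [ 4.0,  7.0, 11.0],
                       [ 2.5,  4.5,  7.0]])

# Bilinear interpolation plays the role of the spreadsheet look-up.
interp = RegularGridInterpolator((bc_axis, flux_axis), freq_table)

G0 = 9.80665  # standard gravity, m/s^2

def monthly_fuel(bc, flux, mass=3500.0, dv=0.5, isp=220.0):
    """Fuel burned per month: interpolated maneuver count times a simple
    impulsive-burn engine model, m * dv / (Isp * g0) kg per maneuver."""
    n = interp([(bc, flux)])[0]
    return n * mass * dv / (isp * G0)
```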
de Senneville, Baudouin Denis; Mougenot, Charles; Moonen, Chrit T W
2007-02-01
Focused ultrasound (US) is a unique and noninvasive technique for local deposition of thermal energy deep inside the body. MRI guidance offers the additional benefits of excellent target visualization and continuous temperature mapping. However, treating a moving target poses severe problems because 1) motion-related thermometry artifacts must be corrected, and 2) the US focal point must be relocated according to the target displacement. In this paper a complete MRI-compatible, high-intensity focused US (HIFU) system is described together with adaptive methods that allow continuous MR thermometry and therapeutic US with real-time tracking of a moving target, online motion correction of the thermometry maps, and regional temperature control based on the proportional-integral-derivative (PID) method. The hardware is based on a 256-element phased-array transducer with rapid electronic displacement of the focal point. The exact location of the target during US firing is anticipated using automatic analysis of periodic motions. The methods were tested with moving phantoms undergoing either rigid-body or elastic periodic motions. The results show accurate tracking of the focal point. Focal and regional temperature control is demonstrated with a performance similar to that obtained with stationary phantoms. Copyright (c) 2007 Wiley-Liss, Inc.
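The regional temperature control loop can be illustrated with a generic PID controller driving a first-order thermal plant; the gains, plant time constant, and setpoint below are assumed for illustration and are not the paper's values.

```python
def pid_heating(setpoint=57.0, t_ambient=37.0, kp=2.0, ki=0.8, kd=0.1,
                dt=0.1, steps=600, tau=10.0):
    """PID control of a first-order thermal plant,
       dT/dt = (t_ambient - T)/tau + u,
    where u is the heating power (clipped to be non-negative).
    All gains and plant constants are illustrative assumptions."""
    T, integral, prev_err = t_ambient, 0.0, setpoint - t_ambient
    history = []
    for _ in range(steps):
        err = setpoint - T
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = max(0.0, kp * err + ki * integral + kd * deriv)  # no cooling
        T += dt * ((t_ambient - T) / tau + u)   # forward-Euler plant update
        prev_err = err
        history.append(T)
    return history

temps = pid_heating()   # temperature should settle at the setpoint
```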
Methods to compute reliabilities for genomic predictions of feed intake
USDA-ARS?s Scientific Manuscript database
For new traits without historical reference data, cross-validation is often the preferred method to validate reliability (REL). Time truncation is less useful because few animals gain substantial REL after the truncation point. Accurate cross-validation requires separating genomic gain from pedigree...
Trägårdh, Johanna; Gersen, Henkjan
2013-07-15
We show how a combination of near-field scanning optical microscopy with crossed-beam spectral interferometry allows a local measurement of the spectral phase and amplitude of light propagating in photonic structures. The method only requires measurement at the single point of interest and at a reference point, to correct for the relative phase of the interferometer branches, to retrieve the dispersion properties of the sample. Furthermore, since the measurement is performed in the spectral domain, the spectral phase and amplitude can be retrieved from a single camera frame, here in 70 ms for a signal power of less than 100 pW, limited by the dynamic range of the 8-bit camera. The method is substantially faster than most previous time-resolved NSOM methods, which are based on time-domain interferometry; this speed also reduces problems with drift. We demonstrate how the method can be used to measure the refractive index and group velocity in a waveguide structure.
NASA Astrophysics Data System (ADS)
Cheng, Jun; Zhang, Jun; Tian, Jinwen
2015-12-01
Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained by the transform. Secondly, the LiveWire shortest path is calculated with a direction-guided search over the control point set, utilizing the spatial relationship between the two control points the user provides in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform, which offers fast image decomposition and reconstruction and is more consistent with the texture features of the image, and of the direction-guided optimal path search, which reduces the time complexity of the original algorithm. The algorithm therefore improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. Together, these methods substantially improve the execution efficiency and robustness of the algorithm.
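The priority-queue-to-ordinary-queue substitution is reminiscent of queue-based label-correcting shortest-path methods. As a generic illustration of that idea (not the paper's LiveWire implementation), a FIFO-queue Bellman-Ford variant on a tiny graph:

```python
from collections import deque

def queue_shortest_path(n, edges, source):
    """Label-correcting shortest paths with an ordinary FIFO queue
    (the SPFA variant of Bellman-Ford), analogous in spirit to replacing
    Dijkstra's priority queue with a plain queue.
    edges: dict mapping node -> list of (neighbor, cost)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    in_queue = [False] * n
    q = deque([source])
    in_queue[source] = True
    while q:
        u = q.popleft()
        in_queue[u] = False
        for v, w in edges.get(u, []):
            if dist[u] + w < dist[v]:        # relax the edge (u, v)
                dist[v] = dist[u] + w
                if not in_queue[v]:          # re-enqueue improved nodes
                    q.append(v)
                    in_queue[v] = True
    return dist

# Tiny 4-node example graph.
edges = {0: [(1, 2.0), (2, 5.0)], 1: [(2, 1.0), (3, 7.0)], 2: [(3, 2.0)]}
d = queue_shortest_path(4, edges, 0)
```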
Applications of the gambling score in evaluating earthquake predictions and forecasts
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe
2010-05-01
This study presents a new method, the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains if he succeeds, according to a fair rule, and takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare the probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
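One natural fair betting rule of the kind described above can be sketched as follows: if the reference model assigns probability p0 to the predicted event and the forecaster stakes r points, a payout of r(1 − p0)/p0 on success makes the expected gain zero under the reference model. This is a hedged reading of the scoring rule, with illustrative numbers.

```python
def gambling_score(bets):
    """Reputation-point gambling score (a sketch of a fair-odds rule).
    Each bet is (r, p0, occurred): r points staked, p0 the reference
    model's probability for the predicted event, occurred a bool.
    On success the forecaster gains r * (1 - p0) / p0; on failure the
    staked r points are lost.  Under the reference model the expected
    gain is p0 * r * (1 - p0)/p0 - (1 - p0) * r = 0, i.e. the rule is fair."""
    total = 0.0
    for r, p0, occurred in bets:
        total += r * (1.0 - p0) / p0 if occurred else -r
    return total

# The forecaster stakes 1 point on each of three rare events (p0 = 0.1);
# one event occurs, so the payout on it is 9 points.
score = gambling_score([(1.0, 0.1, True), (1.0, 0.1, False), (1.0, 0.1, False)])
```

Predicting rare events thus pays off strongly when correct, which is how the score compensates for the risk taken.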
Determinant Computation on the GPU using the Condensation Method
NASA Astrophysics Data System (ADS)
Anisul Haque, Sardar; Moreno Maza, Marc
2012-02-01
We report on a GPU implementation of the condensation method designed by Abdelmalek Salem and Kouachi Said for computing the determinant of a matrix. We consider two types of coefficients: modular integers and floating point numbers. We evaluate the performance of our code by measuring its effective bandwidth and argue that it is numerically stable in the floating point case. In addition, we compare our code with serial implementations of determinant computation from well-known mathematical packages. Our results suggest that a GPU implementation of the condensation method has large potential for improving those packages in terms of running time and numerical stability.
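The condensation idea itself (independent of any GPU implementation) can be sketched recursively in the style of Chio's pivotal condensation; exact rationals are used here to sidestep the floating-point stability question the paper analyzes, and pivot-row swapping is omitted as a simplifying assumption.

```python
from fractions import Fraction

def condensation_det(A):
    """Determinant by pivotal condensation: repeatedly shrink an n x n
    matrix to (n-1) x (n-1) via b[i][j] = a00*A[i+1][j+1] - A[i+1][0]*A[0][j+1],
    dividing by a00**(n-2) at each level.  Assumes a nonzero pivot a00;
    a full implementation would swap rows when a00 == 0."""
    n = len(A)
    if n == 1:
        return A[0][0]
    a00 = A[0][0]
    B = [[a00 * A[i][j] - A[i][0] * A[0][j] for j in range(1, n)]
         for i in range(1, n)]
    return condensation_det(B) / a00 ** (n - 2)

M = [[Fraction(x) for x in row]
     for row in [[2, 1, 3], [4, 5, 6], [7, 8, 9]]]
det = condensation_det(M)   # -9
```

Each condensation level is embarrassingly parallel over the (n-1)² entries of B, which is what makes the method attractive on a GPU.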
NASA Astrophysics Data System (ADS)
Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro
2017-07-01
Numerical analysis of the rotation of an ultrasonically levitated droplet in centrifugal coordinates is discussed. A droplet levitated in an acoustic chamber is simulated using the distributed point source method and the moving particle semi-implicit method. Centrifugal coordinates are adopted to avoid the Laplacian differential error, which causes numerical divergence or inaccuracy in the global-coordinate calculation. Consequently, the duration of stable calculation has increased to 30 times that of the previous paper. Moreover, the droplet radius versus rotational acceleration characteristics show a trend similar to the theoretical and experimental values in the literature.
Ren, Qiang; Nagar, Jogender; Kang, Lei; Bian, Yusheng; Werner, Ping; Werner, Douglas H
2017-05-18
A highly efficient numerical approach for simulating the wideband optical response of nano-architectures comprised of Drude-Critical Points (DCP) media (e.g., gold and silver) is proposed and validated through comparison with commercial computational software. The kernel of this algorithm is the subdomain-level discontinuous Galerkin time domain (DGTD) method, which can be viewed as a hybrid of the spectral-element time-domain (SETD) and finite-element time-domain (FETD) methods. An hp-refinement technique is applied to decrease the degrees of freedom (DoFs) and computational requirements. The collocated E-J scheme facilitates solving the auxiliary equations by converting the inversions of matrices to simpler vector manipulations. A new hybrid time-stepping approach, which couples the Runge-Kutta and Newmark methods, is proposed to solve the temporal auxiliary differential equations (ADEs) with a high degree of efficiency. The advantages of this new approach, in terms of computational resource overhead and accuracy, are validated through comparison with well-known commercial software for three diverse cases, which cover both near-field and far-field properties with plane wave and lumped port sources. The presented work provides the missing link between DCP dispersive models and FETD- and/or SETD-based algorithms. It is a competitive candidate for numerically studying the wideband plasmonic properties of DCP media.
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies with worst case errors being many orders of magnitude times the correct values.
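The source of the per-frequency speedup can be illustrated in a few lines: in normal mode coordinates the system matrix is (block-)diagonal, so the frequency response reduces to a modal sum costing O(n) per frequency point, versus a dense solve. The toy spring-mass chain and proportional damping below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 4-DOF spring-mass chain with proportional damping C = a*M + b*K,
# chosen so the normal modes decouple (values are illustrative).
n = 4
M = np.eye(n)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
a, b = 0.01, 0.002
C = a * M + b * K

lam, Phi = eigh(K, M)        # Phi is mass-normalized: Phi.T @ M @ Phi = I
w2 = lam                     # squared natural frequencies

def frf_modal(w, i, j):
    """Modal-sum FRF from force at DOF j to displacement at DOF i:
    O(n) work per frequency point."""
    denom = w2 - w**2 + 1j * w * (a + b * w2)
    return np.sum(Phi[i, :] * Phi[j, :] / denom)

def frf_full(w, i, j):
    """Direct full-matrix solve: O(n^3) work per frequency point."""
    f = np.zeros(n)
    f[j] = 1.0
    x = np.linalg.solve(K - w**2 * M + 1j * w * C, f)
    return x[i]

h_modal = frf_modal(1.3, 0, 2)
h_full = frf_full(1.3, 0, 2)   # should agree with h_modal
```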
ARIMA model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three time series are used in the comparison: the price of crude palm oil (RM/tonne), the exchange rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors, quantified by the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model produces better long-term forecasts with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to the next, as in the exchange rate series. On the contrary, the Exponential Smoothing Method produces better forecasts for the exchange rate series, which has a narrow range from one point to the next, but cannot produce a better prediction for a longer forecasting period.
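The three accuracy measures used in the comparison can be computed as follows (a minimal sketch with made-up numbers, not the study's data):

```python
def forecast_errors(actual, predicted):
    """Mean Squared Error, Mean Absolute Percentage Error (in percent)
    and Mean Absolute Deviation for a forecast series."""
    n = len(actual)
    errs = [a - p for a, p in zip(actual, predicted)]
    mse  = sum(e * e for e in errs) / n
    mape = 100.0 * sum(abs(e) / abs(a) for e, a in zip(errs, actual)) / n
    mad  = sum(abs(e) for e in errs) / n
    return mse, mape, mad

# Illustrative three-step forecast against observed values.
actual    = [100.0, 110.0, 120.0]
predicted = [ 98.0, 113.0, 118.0]
mse, mape, mad = forecast_errors(actual, predicted)
```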
Approximation methods for combined thermal/structural design
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Shore, C. P.
1979-01-01
Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
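The direct-versus-reciprocal distinction can be made concrete: expanding in y = 1/x instead of x changes the first-order approximation, and for a response that varies as 1/x (a common situation for stress versus member size) the reciprocal expansion is exact. The functions below are illustrative, not taken from the paper.

```python
def direct_approx(f, df, x0):
    """First-order Taylor expansion in the design variable x:
       f(x) ~ f(x0) + f'(x0) * (x - x0)."""
    return lambda x: f(x0) + df(x0) * (x - x0)

def reciprocal_approx(f, df, x0):
    """First-order expansion in the reciprocal variable y = 1/x:
       f ~ f(x0) + (df/dy)(y - y0),  with  df/dy = -x0**2 * f'(x0)."""
    return lambda x: f(x0) - x0**2 * df(x0) * (1.0 / x - 1.0 / x0)

# Illustrative response varying as 1/x (e.g. stress vs. member area).
f  = lambda x: 10.0 / x
df = lambda x: -10.0 / x**2
lin = direct_approx(f, df, 2.0)
rec = reciprocal_approx(f, df, 2.0)
# rec reproduces f exactly; lin degrades away from x0 = 2.
```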
NASA Astrophysics Data System (ADS)
Sudarsono, Anugrah S.; Merthayasa, I. G. N.; Suprijanto
2015-09-01
This research compared psycho-acoustic and physio-acoustic measurements to find the optimum reverberation time of a sound field for angklung music. The psycho-acoustic measurement was conducted using a paired comparison method, and the physio-acoustic measurement was conducted with EEG recordings at the T3, T4, FP1, and FP2 measurement points. The EEG measurement was conducted with 5 persons. Pentatonic angklung music with varied reverberation time was used as the stimulus; the reverberation time ranged from 0.8 s to 1.6 s in 0.2 s steps. The EEG signal was analysed using a power spectral density method on the alpha wave, high alpha wave, and theta wave bands. The psycho-acoustic measurement with 50 persons showed that the reverberation time preference for pentatonic angklung music was 1.2 s. The result was similar to the theta wave measurement at the FP2 measurement point. The high alpha wave at the T4 measurement point gave different results, but showed patterns similar to the psycho-acoustic measurement.
Real-time depth camera tracking with geometrically stable weight algorithm
NASA Astrophysics Data System (ADS)
Fu, Xingyin; Zhu, Feng; Qi, Feng; Wang, Mingming
2017-03-01
We present an approach for real-time camera tracking with a depth stream. Existing methods are prone to drift in scenes without sufficient geometric information. First, we propose a new weighting method for the iterative closest point algorithm commonly used in real-time dense mapping and tracking systems. By detecting uncertainty in pose and increasing the weight of points that constrain unstable transformations, our system achieves accurate and robust trajectory estimation results. Our pipeline can be fully parallelized on the GPU and incorporated into current real-time depth camera tracking systems seamlessly. Second, we compare the state-of-the-art weighting algorithms and propose a weight degradation algorithm according to the measurement characteristics of a consumer depth camera. Third, we use Nvidia Kepler shuffle instructions during warp and block reduction to improve the efficiency of our system. Results on the public TUM RGB-D benchmark demonstrate that our camera tracking system achieves state-of-the-art results in both accuracy and efficiency.
[Point of Care 2.0: Coagulation Monitoring Using ROTEM® Sigma and TEG® 6s].
Weber, Christian Friedrich; Zacharowski, Kai
2018-06-01
New-generation methods for point-of-care coagulation monitoring enable fully automated viscoelastic analyses for the assessment of particular parts of hemostasis. In contrast to the measuring techniques of former models, the viscoelastic ROTEM® Sigma and TEG® 6s analyses are performed in single-use test cartridges without time- and personnel-intensive pre-analytical procedures. This review highlights methodical strengths and limitations of the devices and addresses concerns associated with their integration into routine clinical practice. Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Wähmer, M.; Anhalt, K.; Hollandt, J.; Klein, R.; Taubert, R. D.; Thornagel, R.; Ulm, G.; Gavrilov, V.; Grigoryeva, I.; Khlevnoy, B.; Sapritsky, V.
2017-10-01
Absolute spectral radiometry is currently the only established primary thermometric method for the temperature range above 1300 K. Up to now, the ongoing improvements of high-temperature fixed points and their formal implementation into an improved temperature scale, with the mise en pratique for the definition of the kelvin, have relied solely on single-wavelength absolute radiometry traceable to the cryogenic radiometer. Two alternative primary thermometric methods, yielding comparable or possibly even smaller uncertainties, have been proposed in the literature. They use ratios of irradiances to determine the thermodynamic temperature traceable to blackbody radiation and synchrotron radiation. At PTB, a project has been established in cooperation with VNIIOFI to use, for the first time, all three methods simultaneously for the determination of the phase transition temperatures of high-temperature fixed points. For this, a dedicated four-wavelength ratio filter radiometer was developed. With all three thermometric methods performed independently and in parallel, we aim to compare the potential and practical limitations of all three methods, disclose possibly undetected systematic effects of each method, and thereby confirm or improve the previous measurements traceable to the cryogenic radiometer. This will give further and independent confidence in the thermodynamic temperature determination of the phase transitions of high-temperature fixed points.
Sampayan, Stephen E.
2016-11-22
Apparatus, systems, and methods that provide an X-ray interrogation system having a plurality of stationary X-ray point sources arranged to substantially encircle an area or space to be interrogated. A plurality of stationary detectors are arranged to substantially encircle the area or space to be interrogated. A controller is adapted to control the stationary X-ray point sources to emit X-rays one at a time, and to control the stationary detectors to detect the X-rays emitted by the stationary X-ray point sources.
Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor
NASA Astrophysics Data System (ADS)
Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul
2017-05-01
Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation reduces the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes must be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth-layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth-layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
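The plane-to-plane propagation underlying the depth-layer approach can be sketched with the angular spectrum method, one FFT pair per layer; the grid size, pixel pitch, and wavelength below are arbitrary illustrative values.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a complex field u0 between parallel planes a distance z
    apart via the angular spectrum method: FFT, multiply by the free-space
    transfer function, inverse FFT.  Evanescent components are suppressed."""
    n = u0.shape[0]
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx, indexing="ij")
    kz2 = k**2 - kx**2 - ky**2
    prop = kz2 > 0                         # propagating components only
    H = np.where(prop, np.exp(1j * np.sqrt(np.abs(kz2)) * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Round trip (+z then -z) recovers the propagating part of the field.
n, dx, lam = 64, 1e-6, 0.5e-6
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = np.exp(-(X**2 + Y**2) / (8e-6)**2).astype(complex)   # Gaussian field
u1 = angular_spectrum_propagate(u0, lam, dx, 50e-6)
u2 = angular_spectrum_propagate(u1, lam, dx, -50e-6)
```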
NASA Astrophysics Data System (ADS)
Gupta, S.; Lohani, B.
2014-05-01
Mobile augmented reality is a next-generation technology for intelligently visualising the 3D real world. The technology is expanding at a fast pace to upgrade a smart phone to an intelligent device. The research problem identified and presented in the current work is to view the actual dimensions of various objects captured by a smart phone in real time. The proposed methodology first establishes correspondence between a LiDAR point cloud, stored on a server, and the image captured by the mobile device. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR data points which lie in the viewshed of the mobile camera. A pseudo-intensity image is generated using the LiDAR points and their intensity. The mobile image and the pseudo-intensity image are then registered using the SIFT image registration method, thereby generating a pipeline to locate the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to pairs of points selected on the mobile image and overlays the dimensions on top of the image. This paper describes all steps of the proposed method. The paper uses an experimental setup to mimic the mobile phone and server system and presents some initial but encouraging results.
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that point clouds can be seen as a mixture of Gaussian models, so the separation of ground points and non-ground points can be recast as the separation of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation: EM calculates maximum likelihood estimates of the mixture parameters, and using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
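The EM separation step can be sketched in one dimension: fit a two-component Gaussian mixture to point heights and label each point with the component of larger responsibility. The synthetic heights and crude initialization below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def em_two_gaussians(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture, mirroring the idea of
    separating ground / non-ground elevations without hand-set thresholds."""
    mu = np.array([x.min(), x.max()])             # crude initialization
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = (pi / (sigma * np.sqrt(2 * np.pi)) *
                np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return pi, mu, sigma, r.argmax(axis=1)        # hard labels by likelihood

rng = np.random.default_rng(0)
ground = rng.normal(102.0, 0.5, 500)              # synthetic terrain heights
objects = rng.normal(110.0, 2.0, 200)             # synthetic object heights
heights = np.concatenate([ground, objects])
pi, mu, sigma, labels = em_two_gaussians(heights)
```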
Lee, Clara; Bolck, Jan; Naguib, Nagy N.N.; Schulz, Boris; Eichler, Katrin; Aschenbach, Rene; Wichmann, Julian L.; Vogl, Thomas. J.; Zangos, Stephan
2015-01-01
Objective: To investigate the accuracy, efficiency and radiation dose of a novel laser navigation system (LNS) compared to those of free-handed punctures under computed tomography (CT) guidance. Materials and Methods: Sixty punctures were performed using a phantom body to compare the accuracy, time required, and radiation dose of the conventional free-handed procedure to those of the LNS-guided method. An additional 20 LNS-guided interventions were performed on another phantom to confirm accuracy. Ten patients subsequently underwent LNS-guided punctures. Results: The phantom 1 LNS group showed a target point accuracy of 4.0 ± 2.7 mm (freehand, 6.3 ± 3.6 mm; p = 0.008), entrance point accuracy of 0.8 ± 0.6 mm (freehand, 6.1 ± 4.7 mm), needle angulation accuracy of 1.3 ± 0.9° (freehand, 3.4 ± 3.1°; p < 0.001), intervention time of 7.03 ± 5.18 minutes (freehand, 8.38 ± 4.09 minutes; p = 0.006), and 4.2 ± 3.6 CT images (freehand, 7.9 ± 5.1; p < 0.001). These results show significant improvement over freehand punctures across the 60 procedures. The phantom 2 LNS group showed a target point accuracy of 3.6 ± 2.5 mm, entrance point accuracy of 1.4 ± 2.0 mm, needle angulation accuracy of 1.0 ± 1.2°, intervention time of 1.44 ± 0.22 minutes, and 3.4 ± 1.7 CT images. In the first experience with patients, the LNS group achieved a target point accuracy of 5.0 ± 1.2 mm, entrance point accuracy of 2.0 ± 1.5 mm, needle angulation accuracy of 1.5 ± 0.3°, intervention time of 12.08 ± 3.07 minutes, and 5.7 ± 1.6 CT images. Conclusion: The laser navigation system improved the accuracy, duration, and radiation dose of CT-guided interventions. PMID:26175571
Development of a short version of the modified Yale Preoperative Anxiety Scale.
Jenkins, Brooke N; Fortier, Michelle A; Kaplan, Sherrie H; Mayes, Linda C; Kain, Zeev N
2014-09-01
The modified Yale Preoperative Anxiety Scale (mYPAS) is the current "criterion standard" for assessing child anxiety during induction of anesthesia and has been used in >100 studies. This observational instrument covers 5 items and is typically administered at 4 perioperative time points. Application of this complex instrument in busy operating room (OR) settings, however, presents a challenge. In this investigation, we examined whether the instrument could be modified and made easier to use in OR settings. This study used qualitative methods, principal component analyses, Cronbach αs, and effect sizes to create the mYPAS-Short Form (mYPAS-SF) and to reduce the number of assessment time points. Data were obtained from multiple patients (N = 3798; mean age = 5.63 years) who were recruited in previous investigations using the mYPAS over the past 15 years. After qualitative analysis, the "use of parent" item was eliminated due to content overlap with other items. The reduced item set accounted for 82% or more of the variance in child anxiety and produced a Cronbach α of at least 0.92. To reduce the number of assessment time points, a minimum Cohen d effect size criterion of 0.48 change in mYPAS score across time points was used. This led to eliminating the "walk to the OR" and "entrance to the OR" time points. Reducing the mYPAS to 4 items, creating the mYPAS-SF that can be administered at 2 time points, retained the accuracy of the measure while allowing the instrument to be more easily used in clinical research settings.
NASA Astrophysics Data System (ADS)
Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing
2009-07-01
Infrared images of sea background are notorious for their low signal-to-noise ratio; target recognition in such images with traditional methods is therefore very difficult. In this paper, we present a novel target recognition method based on the integration of a visual attention computational model and a conventional approach (selective filtering and segmentation). The two distinct image processing techniques are combined in a manner that utilizes the strengths of both. The visual attention algorithm searches the salient regions automatically, represents them by a set of winner points, and demonstrates the salient regions as circles centered at these winner points. This provides a priori knowledge for the filtering and segmentation process. Based on each winner point, we construct a rectangular region to facilitate the filtering and segmentation; a labeling operation is then added selectively as required. Making use of the labeled information, we obtain the positional information of the region of interest from the final segmentation result, label the centroid on the corresponding original image, and complete the localization of the target. The processing time depends not on the size of the image but on the salient regions, so the time consumed is greatly reduced. The method is applied to the recognition of several kinds of real infrared images, and the experimental results reveal the effectiveness of the algorithm presented in this paper.
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. 
We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should include at least 500 recovered propagules and sampling time-intervals no longer than the time to peak propagule retrieval, except in the tail of the distribution, where broader sampling time-intervals may also produce accurate fits.
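The maximum-likelihood fitting of interval-censored retention times recommended above can be sketched as follows. This is a minimal illustration, not the study's code: the lognormal parameters, interval width, and sample size are invented, and the likelihood simply sums log(F(upper) - F(lower)) over the sampling intervals.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

# Simulate retention times from a known lognormal (hypothetical parameters).
rng = np.random.default_rng(0)
true_s, true_scale = 0.5, 10.0          # shape and scale of the source distribution
times = lognorm.rvs(true_s, scale=true_scale, size=500, random_state=rng)

# Discretize into sampling intervals of width 2 (interval-censored observations).
edges = np.arange(0.0, 40.0, 2.0)
lower = edges[np.searchsorted(edges, times, side="right") - 1]
upper = lower + 2.0

# Interval-censored negative log-likelihood: -sum log(F(upper) - F(lower)).
def neg_loglik(params):
    s, scale = params
    if s <= 0 or scale <= 0:
        return np.inf
    p = lognorm.cdf(upper, s, scale=scale) - lognorm.cdf(lower, s, scale=scale)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

fit = minimize(neg_loglik, x0=[1.0, 5.0], method="Nelder-Mead")
s_hat, scale_hat = fit.x
```

With 500 propagules and 2-unit intervals, the recovered shape and scale land close to the simulated values, consistent with the accuracy the authors report for cumulative-distribution fitting.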
Intraoperative measurements on the mitral apparatus using optical tracking: a feasibility study
NASA Astrophysics Data System (ADS)
Engelhardt, Sandy; De Simone, Raffaele; Wald, Diana; Zimmermann, Norbert; Al Maisary, Sameer; Beller, Carsten J.; Karck, Matthias; Meinzer, Hans-Peter; Wolf, Ivo
2014-03-01
Mitral valve reconstruction is a widespread surgical method to repair incompetent mitral valves. During reconstructive surgery, judgement of the mitral valve geometry and subvalvular apparatus is mandatory in order to choose the appropriate repair strategy. To date, intraoperative analysis of the mitral valve is merely based on visual assessment and inaccurate sizer devices, which do not allow for any accurate and standardized measurement of the complex three-dimensional anatomy. We propose a new intraoperative computer-assisted method for mitral valve measurements using a pointing instrument together with an optical tracking system. Sixteen anatomical points were defined on the mitral apparatus. The feasibility and the reproducibility of the measurements have been tested on a rapid prototyping (RP) heart model and a freshly excised porcine heart. Four heart surgeons repeated the measurements three times on each heart. Morphologically important distances between the measured points are calculated. We achieved an interexpert variability mean of 2.28 ± 1.13 mm for the 3D-printed heart and 2.45 ± 0.75 mm for the porcine heart. The overall time to perform a complete measurement is 1-2 minutes, which makes the method viable for virtual annuloplasty during an intervention.
Curve Set Feature-Based Robust and Fast Pose Estimation Algorithm
Hashimoto, Koichi
2017-01-01
Bin picking refers to picking randomly piled objects from a bin for industrial production purposes, and robotic bin picking is widely used in automated assembly lines. In order to achieve higher productivity, a fast and robust pose estimation algorithm is necessary to recognize and localize the randomly piled parts. This paper proposes a pose estimation algorithm for bin picking tasks using point cloud data. A novel descriptor, the Curve Set Feature (CSF), is proposed to describe a point by the surface fluctuation around it, and is also capable of evaluating poses. The Rotation Match Feature (RMF) is proposed to match CSFs efficiently. The matching process combines the 2D-space matching idea of the original Point Pair Feature (PPF) algorithm with nearest neighbor search. A voxel-based pose verification method is introduced to evaluate the poses and is shown to be more than 30 times faster than the kd-tree-based verification method. Our algorithm is evaluated against a large number of synthetic and real scenes and proven to be robust to noise, able to detect metal parts, and more accurate and more than 10 times faster than PPF and Oriented, Unique and Repeatable (OUR)-Clustered Viewpoint Feature Histogram (CVFH). PMID:28771216
A star recognition method based on the Adaptive Ant Colony algorithm for star sensors.
Quan, Wei; Fang, Jiancheng
2010-01-01
A new star recognition method based on the Adaptive Ant Colony (AAC) algorithm has been developed to increase the star recognition speed and success rate for star sensors. This method draws circles, with the center of each one being a bright star point and the radius being a special angular distance, and uses the parallel processing ability of the AAC algorithm to calculate the angular distance of any pair of star points in the circle. The angular distance of two star points in the circle is treated as a path of the AAC algorithm, and the path optimization feature of the AAC is employed to search for the optimal (shortest) path in the circle. This optimal path is used to recognize the stellar map and enhance the recognition success rate and speed. The experimental results show that when the position error is about 50″, the identification success rate of this method is 98%, while that of the Delaunay identification method is only 94%. The identification time of this method is at most 50 ms.
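The basic quantity the method above is built on, the angular distance between two star directions, can be computed directly from unit vectors; a minimal sketch (the 30° example is invented, and the AAC search itself is not shown):

```python
import numpy as np

def angular_distance(v1, v2):
    """Angle in radians between two star direction vectors."""
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    # Clip guards against round-off pushing the dot product outside [-1, 1].
    return np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))

# Two hypothetical star directions separated by 30 degrees in the x-y plane.
a = np.array([1.0, 0.0, 0.0])
b = np.array([np.cos(np.radians(30.0)), np.sin(np.radians(30.0)), 0.0])
theta = angular_distance(a, b)
```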
Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.
2006-01-01
We tested the precision and accuracy of the Trimble GeoXT™ global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement.
© 2006 Springer Science+Business Media, Inc.
Calculation of Temperature Rise in Calorimetry.
ERIC Educational Resources Information Center
Canagaratna, Sebastian G.; Witt, Jerry
1988-01-01
Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crow, A J
2009-07-07
Andrew Crow arrived at Lawrence Livermore National Laboratory with the intention of continuing work on the Complex Particle Kinetic (CPK) method developed by D. Larson and D. Hewett, having previously worked on duplicating the results of D. Hewett's earlier work. Since arriving, A. Crow has been working with D. Larson on a slightly different project. The current method, still under development, is a Particle-in-Cell (PIC) code with the following features: (1) All particles begin each timestep at a grid point. (2) Particles are then advanced in time using a standard advancement method; the exact method has not been decided upon, but there are many reliable methods from which to choose. (3) All particles within each cell undergo a simultaneous implicit collision step. This is the current area of focus; A. Crow is not aware of any existing method for performing implicit collisions over a large number of charged particles. Implicit methods for charged-particle movement and electron-electron collisions have been developed, and the work of L. Pareschi and G. Russo on the Time Relaxed Direct Simulation Monte Carlo method also appears to be a good basis for implicit particle collisions. (4) Each individual particle will be divided into a set of particles with a Gaussian velocity distribution, which will capture some of the thermal effects created by the collisions; this algorithm has not yet been created. (5) Particles will be projected onto the grid points; a linear weighting technique is intended but has not been settled upon. (6) Once on the grid points, the particle number will be reduced using a set of quadrature points based on the third-order velocity moments of the particles. The method proposed by R. Fox has been programmed and shown to conserve energy, momentum, and mass to machine precision.
In addition to reducing the number of particles, this step will quiet the simulation: it behaves as a higher-order version of the Quiet DSMC method proposed by B. Albright et al. (7) These quadrature points then become the new particles for the next timestep. The advantages of this method are many: the self-force on ions can be easily removed, since all particles begin on grid points; the size of the timesteps should not be limited by the collision rate, only by particle travel time through the cell; and the particle reduction technique should retain many of the higher-order features of the particle distribution while reducing the number of particles in the system, also quieting the variance. The two largest unknowns at this time are how large a part numerical diffusion will play in the scheme and how computationally expensive each timestep will be.
Dynamic connectivity regression: Determining state-related changes in brain connectivity
Cribben, Ivor; Haraldsdottir, Ragnheidur; Atlas, Lauren Y.; Wager, Tor D.; Lindquist, Martin A.
2014-01-01
Most statistical analyses of fMRI data assume that the nature, timing and duration of the psychological processes being studied are known. However, often it is hard to specify this information a priori. In this work we introduce a data-driven technique for partitioning the experimental time course into distinct temporal intervals with different multivariate functional connectivity patterns between a set of regions of interest (ROIs). The technique, called Dynamic Connectivity Regression (DCR), detects temporal change points in functional connectivity and estimates a graph, or set of relationships between ROIs, for data in the temporal partition that falls between pairs of change points. Hence, DCR allows for estimation of both the time of change in connectivity and the connectivity graph for each partition, without requiring prior knowledge of the nature of the experimental design. Permutation and bootstrapping methods are used to perform inference on the change points. The method is applied to various simulated data sets as well as to an fMRI data set from a study (N=26) of a state anxiety induction using a socially evaluative threat challenge. The results illustrate the method’s ability to observe how the networks between different brain regions changed with subjects’ emotional state. PMID:22484408
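DCR itself fits sparse graphical models over the candidate partitions and uses permutation and bootstrap inference; the toy below keeps only the core idea — compare the correlation structure on either side of each candidate split and pick the split with maximal contrast. The two-ROI setup, segment lengths, and distance measure are all invented for illustration.

```python
import numpy as np

def correlation_change_point(X, min_seg=20):
    """Crude single change point estimate: the split maximizing the Frobenius
    distance between the correlation matrices of the two segments.
    (A toy stand-in for DCR, which estimates sparse graphs and tests them.)"""
    T = X.shape[0]
    best_t, best_d = None, -np.inf
    for t in range(min_seg, T - min_seg):
        d = np.linalg.norm(np.corrcoef(X[:t].T) - np.corrcoef(X[t:].T))
        if d > best_d:
            best_t, best_d = t, d
    return best_t

# Synthetic 2-ROI time course: the ROIs are independent for the first 100
# scans, then become strongly correlated (the "state change").
rng = np.random.default_rng(1)
a = rng.standard_normal((100, 2))
z = rng.standard_normal(100)
b = np.column_stack([z, z + 0.1 * rng.standard_normal(100)])
X = np.vstack([a, b])
t_hat = correlation_change_point(X)
```

On this synthetic series, the estimated split lands near scan 100, where the connectivity pattern actually changes.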
Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun
2018-05-17
This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with the three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically according to the virtual model of the object. In order to improve the efficiency of dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression formula in the fitting function. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that the VRBAM is more flexible and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves as well as a large number of discretization points are required to avoid non-physical contact and intersections between particles, leading to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
High-order time-marching reinitialization for regional level-set functions
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-02-01
In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function, together with a high-order two-step reinitialization method that combines the closest point finding procedure and the HJ-WENO scheme. The convergence failure of the closest point finding procedure in three dimensions is addressed by employing a proposed multiple junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. The reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.
Mijakoski, Dragan; Karadzhinska-Bislimovska, Jovanka; Stoleski, Sasho; Minov, Jordan; Atanasovska, Aneta; Bihorac, Elida
2018-01-01
AIM: The purpose of the paper was to assess job demands, burnout, and teamwork in healthcare professionals (HPs) working in a general hospital that was analysed at two points in time with a time lag of three years. METHODS: Time 1 respondents (N = 325) were HPs who participated during the first wave of data collection (2011). Time 2 respondents (N = 197) were HPs from the same hospital who responded at Time 2 (2014). Job demands, burnout, and teamwork were measured with Hospital Experience Scale, Maslach Burnout Inventory, and Hospital Survey on Patient Safety Culture, respectively. RESULTS: Significantly higher scores of emotional exhaustion (21.03 vs. 15.37, t = 5.1, p < 0.001), depersonalization (4.48 vs. 2.75, t = 3.8, p < 0.001), as well as organizational (2.51 vs. 2.34, t = 2.38, p = 0.017), emotional (2.46 vs. 2.25, t = 3.68, p < 0.001), and cognitive (2.82 vs. 2.64, t = 2.68, p = 0.008) job demands were found at Time 2. Teamwork levels were similar at both points in time (Time 1 = 3.84 vs. Time 2 = 3.84, t = 0.043, p = 0.97). CONCLUSION: Actual longitudinal study revealed significantly higher mean values of emotional exhaustion and depersonalization in 2014 that could be explained by significantly increased job demands between analysed points in time. PMID:29731948
Ostrowski, M; Paulevé, L; Schaub, T; Siegel, A; Guziolowski, C
2016-11-01
Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time-points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data and provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7 min of computation. We quantified the gain in prediction precision of our method compared to learning approaches based on static data. Finally, as an application, our method identifies erroneous time-points in the time series data with respect to the optimal learned logic models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Liu, Derek; Sloboda, Ron S
2014-05-01
Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
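The two-stage shift described above can be sketched in 1D: align the kernel to the nearest grid point by Fourier-domain convolution with a unit impulse, then apply the residual sub-grid shift with a 4-tap third-order Lagrange filter. The Gaussian stand-in kernel and the 3.4-sample shift below are illustrative, not the TG-43 seed dose kernel.

```python
import numpy as np

def fft_integer_shift(x, k):
    """Circularly shift x by an integer k samples via convolution with a unit
    impulse, carried out in the Fourier domain (the grid-alignment step)."""
    delta = np.zeros_like(x)
    delta[k % len(x)] = 1.0
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(delta)))

def lagrange_frac_shift(x, d):
    """Evaluate x at positions n + d (0 <= d < 1) with a piecewise third-order
    (4-tap) Lagrange filter -- the residual fractional shift."""
    h = [-d * (d - 1) * (d - 2) / 6,
         (d + 1) * (d - 1) * (d - 2) / 2,
         -(d + 1) * d * (d - 2) / 2,
         (d + 1) * d * (d - 1) / 6]
    xp = np.pad(x, (1, 2), mode="edge")   # taps at offsets -1, 0, +1, +2
    return h[0] * xp[:-3] + h[1] * xp[1:-2] + h[2] * xp[2:-1] + h[3] * xp[3:]

# Shift a smooth stand-in "kernel" right by 3.4 samples: integer shift of 4,
# then evaluation at n + 0.6, so that shifted[n] ~= kernel(n - 3.4).
n = np.arange(64, dtype=float)
kernel = np.exp(-0.5 * ((n - 20.0) / 3.0) ** 2)
shifted = lagrange_frac_shift(fft_integer_shift(kernel, 4), 0.6)
exact = np.exp(-0.5 * ((n - 23.4) / 3.0) ** 2)
```

On this smooth kernel, the interpolated shift matches the analytically shifted kernel to well under 1%, in line with the sub-2% accuracy the abstract reports beyond 3 mm from each seed.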
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu
2016-02-15
A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
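The BDF approximation of a time derivative that SDP relies on can be illustrated in isolation. Below, a standard BDF2 backward-difference stencil (a textbook formula, not code from DeCART) recovers a time derivative with second-order accuracy on a smooth test function:

```python
import numpy as np

def bdf2_derivative(y, h):
    """Second-order backward differentiation estimate of y'(t_n):
    y'(t_n) ~= (3*y[n] - 4*y[n-1] + y[n-2]) / (2*h)."""
    return (3.0 * y[2:] - 4.0 * y[1:-1] + y[:-2]) / (2.0 * h)

# Verify second-order convergence on y = exp(t) over [0, 1]:
# halving the step size should cut the maximum error by about 4x.
h1 = 0.01
t1 = np.arange(0.0, 1.0 + h1 / 2, h1)
err1 = np.max(np.abs(bdf2_derivative(np.exp(t1), h1) - np.exp(t1[2:])))

h2 = h1 / 2
t2 = np.arange(0.0, 1.0 + h2 / 2, h2)
err2 = np.max(np.abs(bdf2_derivative(np.exp(t2), h2) - np.exp(t2[2:])))
```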
Lab-on-a-chip nucleic-acid analysis towards point-of-care applications
NASA Astrophysics Data System (ADS)
Kopparthy, Varun Lingaiah
Recent infectious disease outbreaks, such as Ebola in 2013, highlight the need for fast and accurate diagnostic tools to combat the global spread of disease. Detection and identification of the disease-causing viruses and bacteria at the genetic level is required for accurate diagnosis. Nucleic acid analysis systems have shown promise in identifying diseases such as HIV, anthrax, and Ebola in the past. Conventional nucleic acid analysis systems are still time consuming, and are not suitable for point-of-care applications. Miniaturized nucleic acid systems have shown great promise for rapid analysis, but they have not been commercialized due to several factors such as footprint, complexity, portability, and power consumption. This dissertation presents the development of technologies and methods for lab-on-a-chip nucleic acid analysis towards point-of-care applications. An oscillatory-flow PCR methodology in a thermal gradient is developed which provides real-time analysis of nucleic acid samples. Oscillatory-flow PCR was performed in the microfluidic device under a thermal gradient in 40 minutes. Reverse transcription PCR (RT-PCR) was achieved in the system without an additional heating element for the reverse transcription incubation step. A novel method is developed for the simultaneous patterning and bonding of all-glass microfluidic devices in a microwave oven; glass microfluidic devices were fabricated in less than 4 minutes. Towards an integrated system for the detection of amplified products, a thermal sensing method is studied for the optimization of the sensor output. The calorimetric sensing method is characterized to identify design considerations and optimal parameters such as placement of the sensor, steady state response, and flow velocity for improved performance. An understanding of these developed technologies and methods will facilitate the development of lab-on-a-chip systems for point-of-care analysis.
Development of Control System for Hydrolysis Crystallization Process
NASA Astrophysics Data System (ADS)
Wan, Feng; Shi, Xiao-Ming; Feng, Fang-Fang
2016-05-01
The sulfate method for producing titanium dioxide is commonly used in China, but the crystallization time is determined manually, which leads to large errors and is harmful to the operators. In this paper, a new method for determining crystallization time is proposed. The method adopts a red laser as the light source, a silicon photocell as the reflected-light receiving component, and optical fiber as the light transmission element; a differential algorithm in the software then determines the crystallization time. The experimental results show that the method can determine the crystallization point automatically and accurately, can replace manual labor and protect the health of workers, and is fully applicable in practice.
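One plausible reading of the "differential algorithm" above is sketched below: smooth the photocell trace, then take the time of steepest change as the crystallization point. The sigmoid trace, noise level, and smoothing window are all invented; the paper's actual thresholds are not given in the abstract.

```python
import numpy as np

def crystallization_time(signal, t, smooth=5):
    """Estimate the crystallization point as the time of steepest change in
    the reflected-light intensity: moving-average smoothing followed by a
    first difference (one hypothetical form of a 'differential algorithm')."""
    kernel = np.ones(smooth) / smooth
    s = np.convolve(signal, kernel, mode="valid")   # avoids zero-padded edges
    tv = t[smooth // 2 : smooth // 2 + s.size]
    return tv[np.argmax(np.abs(np.diff(s)))]

# Synthetic photocell trace: reflectance rises sharply when crystals begin
# to scatter the laser light, here around t = 42 minutes.
t = np.linspace(0.0, 60.0, 601)
signal = 0.2 + 0.8 / (1.0 + np.exp(-(t - 42.0) / 0.5))
signal = signal + 0.005 * np.random.default_rng(2).standard_normal(t.size)
t_hat = crystallization_time(signal, t)
```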
Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation
NASA Astrophysics Data System (ADS)
Jamelot, Erell; Ciarlet, Patrick
2013-05-01
Studying numerically the steady state of a nuclear core reactor is expensive, in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method, implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous to the discrete point of view, and we give some numerical results in a realistic highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.
Determining Performance Acceptability of Electrochemical Oxygen Sensors
NASA Technical Reports Server (NTRS)
Gonzales, Daniel
2012-01-01
A method has been developed to screen commercial electrochemical oxygen sensors to reduce the failure rate. There are three aspects to the method: First, the sensitivity over time (several days) can be measured and the rate of change of the sensitivity can be used to predict sensor failure. Second, an improvement to this method would be to store the sensors in an oxygen-free (e.g., nitrogen) environment and intermittently measure the sensitivity over time (several days) to accomplish the same result while preserving the sensor lifetime by limiting consumption of the electrode. Third, the second time derivative of the sensor response over time can be used to determine the point in time at which the sensors are sufficiently stable for use.
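The first and third screening criteria above (drift rate of the sensitivity over days, and the second time derivative as a stability indicator) can be sketched as follows. The tolerance values and the decaying sensitivity curve are illustrative assumptions, not figures from the source.

```python
import numpy as np

def sensor_checks(days, sensitivity, drift_tol=0.01, curvature_tol=1e-3):
    """Sketch of two screening criteria (tolerances are hypothetical):
    1) drift rate: slope of a least-squares line through sensitivity vs. time,
       used to predict failure;
    2) stability: magnitude of the second time derivative at the latest point."""
    slope = np.polyfit(days, sensitivity, 1)[0]
    d2 = np.gradient(np.gradient(sensitivity, days), days)
    return {"drift_per_day": slope,
            "predict_failure": abs(slope) > drift_tol,
            "stable": abs(d2[-1]) < curvature_tol}

# A hypothetical sensor whose sensitivity settles onto a plateau over 10 days.
days = np.linspace(0.0, 10.0, 11)
sens = 1.0 - 0.05 * np.exp(-days)
report = sensor_checks(days, sens)
```

For this settling sensor, the overall drift stays below the (assumed) tolerance and the flattening second derivative indicates it has become stable enough for use.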
High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis
Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher
2015-01-01
Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m2 exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (rmse) with as few as six control points. Rmse decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.
Fitting ordinary differential equations to short time course data.
Brewer, Daniel; Barenco, Martino; Callard, Robin; Hubank, Michael; Stark, Jaroslav
2008-02-28
Ordinary differential equations (ODEs) are widely used to model many systems in physics, chemistry, engineering and biology. Often one wants to compare such equations with observed time course data, and use this to estimate parameters. Surprisingly, practical algorithms for doing this are relatively poorly developed, particularly in comparison with the sophistication of numerical methods for solving both initial and boundary value problems for differential equations, and for locating and analysing bifurcations. A lack of good numerical fitting methods is particularly problematic in the context of systems biology where only a handful of time points may be available. In this paper, we present a survey of existing algorithms and describe the main approaches. We also introduce and evaluate a new efficient technique for estimating ODEs linear in parameters particularly suited to situations where noise levels are high and the number of data points is low. It employs a spline-based collocation scheme and alternates linear least squares minimization steps with repeated estimates of the noise-free values of the variables. This is reminiscent of expectation-maximization methods widely used for problems with nuisance parameters or missing data.
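The spline-based idea above can be sketched compactly for an ODE linear in its parameters. This toy version (dx/dt = a·x + b, with invented parameters, noise level, and 15 time points) smooths the data with a spline, reads off x and dx/dt at the observation times, and solves a single linear least-squares problem; the full technique's alternating re-estimation of the noise-free values is omitted.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Toy model linear in its parameters: dx/dt = a*x + b, true (a, b) = (-0.5, 1.0),
# so x(t) relaxes toward the equilibrium -b/a = 2. Short, noisy time course.
rng = np.random.default_rng(3)
a_true, b_true = -0.5, 1.0
t = np.linspace(0.0, 8.0, 15)
x = 2.0 + (0.5 - 2.0) * np.exp(a_true * t)      # exact solution with x(0) = 0.5
x_noisy = x + 0.02 * rng.standard_normal(t.size)

# Spline collocation: smooth the data, then evaluate x and dx/dt at the knots.
spl = UnivariateSpline(t, x_noisy, k=3, s=len(t) * 0.02 ** 2)
xs, dxs = spl(t), spl.derivative()(t)

# Because the ODE is linear in (a, b), one linear least-squares solve suffices.
A = np.column_stack([xs, np.ones_like(xs)])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, dxs, rcond=None)
```

Even with only 15 noisy points, the recovered (a, b) land near the true values, which is the regime (high noise, few data points) the authors target.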
Kinect based real-time position calibration for nasal endoscopic surgical navigation system
NASA Astrophysics Data System (ADS)
Fan, Jingfan; Yang, Jian; Chu, Yakui; Ma, Shaodong; Wang, Yongtian
2016-03-01
Unanticipated, reactive motion of the patient during skull base tumor resection forces recalibration of the nasal endoscopic tracking system. To keep the calibration current with the patient's movement, this paper develops a Kinect-based real-time positional calibration method for a nasal endoscopic surgical navigation system. In this method, a Kinect scanner acquires a point-cloud volumetric reconstruction of the patient's head during surgery. A convex-hull-based registration algorithm then aligns the real-time image of the patient's head with a model built from CT scans acquired during preoperative preparation, dynamically recalibrating the tracking system whenever movement is detected. Experimental results confirm the robustness of the proposed method, with a total tracking error within 1 mm even under relatively violent motion. These results indicate that tracking accuracy can be maintained stably and that calibration of the tracking system can be expedited under strong interference, demonstrating suitability for a wide range of surgical applications.
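The abstract does not spell out the convex-hull registration algorithm; as background, here is a minimal sketch of closed-form rigid point-set alignment in 2-D (the plane analogue of the Kabsch/Horn step such pipelines rest on), assuming known point correspondences. Names are illustrative; 3-D versions use an SVD.

```python
import math

def rigid_align_2d(src, dst):
    """Closed-form 2-D rigid registration (rotation + translation) between
    matched point lists src and dst; a generic sketch, not the paper's method."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy      # centered source point
        bx, by = dx - cdx, dy - cdy      # centered destination point
        num += ax*by - ay*bx             # cross terms -> sin component
        den += ax*bx + ay*by             # dot terms   -> cos component
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c*csx - s*csy)           # translation after rotating the centroid
    ty = cdy - (s*csx + c*csy)
    return theta, tx, ty
```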
Electronic method for autofluorography of macromolecules on two-D matrices. [Patent application
Davidson, J.B.; Case, A.L.
1981-12-30
A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position-sensitive detection system. A molecule-containing matrix may be treated by conventional means to produce spots of light at the molecule locations, which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image intensifier and a secondary electron conduction camera capable of light-integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. The intensity of any point in the image may be determined from the number at the memory address of that point. The entire image may be displayed on a television monitor for inspection and photographing, or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100 to 1000 times.
Human body motion capture from multi-image video sequences
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola
2003-01-01
This paper presents a method for capturing the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self-calibration methods are applied to gain the exterior orientation of the cameras, the parameters of internal orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; thus the 3-D trajectory is determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed.
The advantage of this tracking process is twofold: it can track natural points, without using markers; and it can track local surfaces on the human body. In the last case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacement of all the trajectories inside its region. The tracked key points lead to a final result comparable to the conventional motion capture systems: 3-D trajectories of key points which can be afterwards analyzed and used for animation or medical purposes.
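The key-point update described above (next position = current position plus the mean displacement of all trajectories inside the region) can be sketched directly. The tuple-based trajectory representation and names are illustrative assumptions, not the paper's data structures.

```python
def track_key_point(center, radius, trajectories):
    """Advance a key point by the mean displacement of all trajectories
    currently inside its spherical region.
    trajectories: list of (pos, next_pos) pairs of 3-D tuples."""
    r2 = radius * radius
    dx = dy = dz = 0.0
    n = 0
    for p, q in trajectories:
        if sum((pc - cc) ** 2 for pc, cc in zip(p, center)) <= r2:
            dx += q[0] - p[0]; dy += q[1] - p[1]; dz += q[2] - p[2]
            n += 1
    if n == 0:
        return center  # no supporting trajectories: leave the point in place
    return (center[0] + dx/n, center[1] + dy/n, center[2] + dz/n)
```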
Channel Model Optimization with Reflection Residual Component for Indoor MIMO-VLC System
NASA Astrophysics Data System (ADS)
Chen, Yong; Li, Tengfei; Liu, Huanlin; Li, Yichao
2017-12-01
A fast channel modeling method is studied to solve the problem of computing the reflection channel gain for multiple-input multiple-output visible light communication (MIMO-VLC) systems. To limit the computational complexity, which grows with the number of reflections, no more than three reflections are considered in the VLC model. We treat a higher-order reflection link as a composition of multiple line-of-sight links and introduce a reflection residual component to characterize higher-order reflections (more than two). Computer simulation results are presented for the point-to-point channel impulse response, the received optical power, and the received signal-to-noise ratio. Theoretical analysis and simulation results show that the proposed method can effectively reduce the computational complexity of modeling higher-order reflections.
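The abstract does not give the residual-component formulas; for context, here is the standard Lambertian line-of-sight channel gain on which indoor VLC link models (including higher-order bounces) are commonly built. Parameter names follow the textbook formula and are assumptions, not taken from the paper.

```python
import math

def los_gain(area, d, phi, psi, half_angle, fov, Ts=1.0, g=1.0):
    """Lambertian line-of-sight VLC channel gain (textbook formula):
    area  - receiver photodetector area
    d     - transmitter-receiver distance
    phi   - angle of irradiance at the LED, psi - angle of incidence
    half_angle - LED semi-angle at half power, fov - receiver field of view
    Ts, g - optical filter and concentrator gains."""
    if psi > fov:
        return 0.0  # outside the receiver field of view
    m = -math.log(2) / math.log(math.cos(half_angle))  # Lambertian order
    return (m + 1) * area / (2 * math.pi * d * d) \
           * math.cos(phi) ** m * Ts * g * math.cos(psi)
```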
NASA Technical Reports Server (NTRS)
Jefferies, S. M.; Duvall, T. L., Jr.
1991-01-01
A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.
Handheld laser scanner automatic registration based on random coding
NASA Astrophysics Data System (ADS)
He, Lei; Yu, Chun-ping; Wang, Li
2011-06-01
Current research on laser scanners focuses mainly on static measurement; little use has been made of dynamic measurement, which is appropriate for a wider range of problems and situations. In particular, a traditional laser scanner must be kept stable while scanning, and coordinate transformation parameters must be measured between stations. To make scanning measurement intelligent and rapid, this paper develops a new registration algorithm for a handheld laser scanner based on the positions of targets, which realizes dynamic measurement with a handheld scanner without further complex work. The dual cameras on the laser scanner photograph artificial target points, designed with random coding, to obtain their three-dimensional coordinates. A set of matched points is then found among the control points to orient the scanner by least-squares common-point transformation. After that, the dual cameras can directly measure the laser point cloud on the object surface and obtain point cloud data in a unified coordinate system. The paper makes three major contributions. First, a laser scanner based on binocular vision is designed with dual cameras and one laser head, realizing real-time orientation of the scanner and improving efficiency. Second, coded markers are introduced to solve the data-matching problem, and a random coding method is proposed; compared with other coding methods, these markers are simple to match and avoid occluding the object. Finally, a recognition method for the coded markers based on distance recognition is proposed, which is more efficient. The method presented here can be used widely in measurements of objects from small to huge, such as vehicles and aircraft, strengthening intelligence and efficiency.
Experimental results and theoretical analysis demonstrate that the proposed method realizes dynamic measurement with a handheld laser scanner and is reasonable and efficient.
Symmetry, Hopf bifurcation, and the emergence of cluster solutions in time delayed neural networks.
Wang, Zhen; Campbell, Sue Ann
2017-11-01
We consider networks of N identical oscillators with time-delayed, global circulant coupling, modeled by a system of delay differential equations with Z_N symmetry. We first study the existence of Hopf bifurcations induced by the coupling time delay and then use symmetric Hopf bifurcation theory to determine how these bifurcations lead to different patterns of symmetric cluster oscillations. We apply our results to a case study: a network of FitzHugh-Nagumo neurons with diffusive coupling. For this model, we derive the asymptotic stability, global asymptotic stability, absolute instability, and stability switches of the equilibrium point in the plane of coupling time delay (τ) and excitability parameter (a). We investigate the patterns of cluster oscillations induced by the time delay and determine the direction and stability of the bifurcating periodic orbits by employing the multiple timescales method and normal form theory. We find that in the region where stability switching occurs, the dynamics of the system can be switched from the equilibrium point to any symmetric cluster oscillation, and back to the equilibrium point, as the time delay is increased.
Real-time EEG-based detection of fatigue driving danger for accident prediction.
Wang, Hong; Zhang, Chi; Shi, Tianwei; Wang, Fuwang; Ma, Shujun
2015-03-01
This paper proposes a real-time electroencephalogram (EEG)-based detection method for potential danger during fatigue driving. To determine driver fatigue in real time, wavelet entropy with a sliding window and a pulse-coupled neural network (PCNN) were used to process the EEG signals in the visual area (the main information input route). To detect fatigue danger, the neural mechanism of driver fatigue was analyzed. Functional brain networks were employed to track the impact of fatigue on the processing capacity of the brain. The results show that the overall functional connectivity of the subjects is weakened after prolonged driving tasks; we summarize this regularity as the fatigue convergence phenomenon. Based on the fatigue convergence phenomenon, we combined the input and global synchronization of the brain to calculate the residual information-processing capacity of the brain and obtain the dangerous points in real time. Finally, the danger detection system for driver fatigue based on this neural mechanism was validated using accident EEG. The time distributions of the danger points output by the system agree well with those of the real accident points.
Fast, adaptive summation of point forces in the two-dimensional Poisson equation
NASA Technical Reports Server (NTRS)
Van Dommelen, Leon; Rundensteiner, Elke A.
1989-01-01
A comparatively simple procedure is presented for the direct summation of the velocity field introduced by point vortices which significantly reduces the required number of operations by replacing selected partial sums by asymptotic series. Tables are presented which demonstrate the speed of this algorithm in terms of the mere doubling of computational time in dealing with a doubling of the number of vortices; current methods involve a computational time extension by a factor of 4. This procedure need not be restricted to the solution of the Poisson equation, and may be applied to other problems involving groups of points in which the interaction between elements of different groups can be simplified when the distance between groups is sufficiently great.
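The core idea, replacing a partial sum over a distant group of points by a truncated (asymptotic) series about the group's centroid, can be sketched for the complex-velocity sum of 2-D point vortices. This is a generic multipole illustration under simplified conventions, not the authors' algorithm.

```python
def direct_sum(z, vortices):
    # vortices: list of (position z_k, strength gamma_k); positions are complex.
    # Velocity potential contribution ~ sum_k gamma_k / (z - z_k), constants omitted.
    return sum(g / (z - zk) for zk, g in vortices)

def multipole_sum(z, vortices, terms=4):
    """Far-field approximation: truncated multipole expansion about the
    cluster centroid, valid when |z - zc| >> cluster radius.
    Uses 1/(z - zk) = sum_m (zk - zc)^m / (z - zc)^(m+1)."""
    zc = sum(zk for zk, _ in vortices) / len(vortices)
    total = 0.0 + 0.0j
    for m in range(terms):
        a_m = sum(g * (zk - zc) ** m for zk, g in vortices)  # m-th moment
        total += a_m / (z - zc) ** (m + 1)
    return total
```

Evaluating the truncated series costs a fixed number of moments per group instead of one term per vortex, which is the source of the reported speed-up.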
ON THE CALIBRATION OF DK-02 AND KID DOSIMETERS (in Estonian)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehvaert, H.
1963-01-01
For the periodic calibration of the DK-02 and WD dosimeters, the rotating stand method which is more advantageous than the usual method is recommended. The calibration can be accomplished in a strong gamma field, reducing considerably the time necessary for calibration. Using a point source, the dose becomes a simple function of time and geometrical parameters. The experimental values are in good agreement with theoretical values. (tr-auth)
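The "simple function of time and geometrical parameters" for a point source is the inverse-square law; a minimal sketch, with units and the dose-rate constant left abstract (any consistent unit system works).

```python
def point_source_dose(gamma_const, activity, distance, exposure_time):
    """Accumulated dose from a point gamma source at fixed distance:
    D = Gamma * A * t / r^2 (inverse-square law).  gamma_const is the
    dose-rate constant of the nuclide; units are assumed consistent."""
    return gamma_const * activity * exposure_time / distance ** 2
```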
Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha
2015-09-01
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across the faces with respect to race, pose, lighting, facial biases, and so on, in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, thereby bypassing those frames from emotion classification, saves computational power. In this paper, we propose a light-weight neutral versus emotion classification engine, which acts as a pre-processor to the traditional supervised emotion classification approaches. It dynamically learns neutral appearance at key emotion (KE) points using a statistical texture model, constructed by a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motions by accounting for affine distortions based on a statistical texture model. Robustness to dynamic shift of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point using the prior information regarding the directionality of specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
Hegazy, Galal; Safwat, Hesham; Seddik, Mahmoud; Al-shal, Ehab A.; Al-Sebai, Ibrahim; Negm, Mohame
2016-01-01
Background: The optimal operative method for acromioclavicular joint reconstruction remains controversial. The modified Weaver-Dunn method is one of the most popular methods. Anatomic reconstruction of coracoclavicular ligaments with autogenous tendon grafts, widely used in treating chronic acromioclavicular joint instability, reportedly diminishes pain, eliminates sequelae, and improves function as well as strength. Objective: To compare clinical and radiologic outcomes between a modified Weaver-Dunn procedure and an anatomic coracoclavicular ligament reconstruction technique using autogenous semitendinosus tendon graft. Methods: Twenty patients (mean age, 39 years) with painful, chronic Rockwood type III acromioclavicular joint dislocations underwent surgical reconstruction. In ten patients, a modified Weaver-Dunn procedure was performed; in the other ten patients, an autogenous semitendinosus tendon graft was used. The mean time between injury and the index procedure was 18 months (range, 9–28). Clinical evaluation was performed using the Oxford Shoulder Score and Nottingham Clavicle Score after a mean follow-up time of 27.8 months. Preoperative and postoperative radiographs were compared. Results: In the Weaver-Dunn group, the Oxford Shoulder Score improved from 25±4 to 40±2 points, while the Nottingham Clavicle Score increased from 48±7 to 84±11. In the semitendinosus tendon graft group, the Oxford Shoulder Score improved from 25±3 to 50±2 points and the Nottingham Clavicle Score from 48±8 to 95±8, respectively. Conclusion: Acromioclavicular joint reconstruction using the semitendinosus tendon graft achieved better Oxford Shoulder Score and Nottingham Clavicle Score results compared to the modified Weaver-Dunn procedure. PMID:27347245
Influences of rolling method on deformation force in cold roll-beating forming process
NASA Astrophysics Data System (ADS)
Su, Yongxiang; Cui, Fengkui; Liang, Xiaoming; Li, Yan
2018-03-01
In this work, a gear rack was selected as the research object to study the influence of the rolling method on the deformation force. By means of finite element simulation of cold roll-beating forming, the variation of the radial and tangential deformation forces was analyzed under different rolling methods. The variation of the deformation force over the complete forming of the rack, and on a single roll during the steady state, was analyzed for the different rolling modes. The results show that with up-beating and down-beating, the average radial single-point forces are similar, while the gap between the average tangential single-point forces is relatively large. Additionally, the tangential force during up-beating is large and opposite in direction to that during down-beating. With up-beating, the deformation force loads quickly and unloads slowly; correspondingly, with down-beating, the deformation force loads slowly and unloads quickly.
Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research
Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi
2016-01-01
The effect of traffic flow prediction plays an important role in routing selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods. However, all of them have some shortcomings. This paper analyzes the existing algorithms for traffic flow prediction and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. This method first analyzes the transfer probability of the roads upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton interior-point method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model shows good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, making it usable for real-time traffic flow prediction. PMID:27872637
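Once the transfer probabilities are known, the flow equation described reduces to a weighted sum of upstream flows. A minimal sketch; in the paper the probabilities are fitted with a Newton interior-point optimizer, which is not reproduced here, and the names below are illustrative.

```python
def predict_flow(upstream_flows, transfer_probs):
    """Next-interval flow on the target road: sum of upstream flows
    weighted by the probability that each upstream vehicle transfers
    onto the target road (conservation-style flow equation)."""
    return sum(f * p for f, p in zip(upstream_flows, transfer_probs))
```

For example, with two upstream roads carrying 100 and 50 vehicles and transfer probabilities 0.3 and 0.4, the predicted target flow is 50 vehicles.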
Reconstruction method for fringe projection profilometry based on light beams.
Li, Xuexing; Zhang, Zhijiang; Yang, Chen
2016-12-01
A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require the parameters of projector calibration or the reference planes placed in many known positions. Obviously, introducing the projector calibration can reduce the accuracy of the reconstruction result, and setting the reference planes to many known positions is a time-consuming process. Therefore, in this paper, a reconstruction method without projector's parameters is proposed and only two reference planes are introduced. A series of light beams determined by the subpixel point-to-point map on the two reference planes combined with their reflected light beams determined by the camera model are used to calculate the 3D coordinates of reconstruction points. Furthermore, the bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure the accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of our proposed approach, and the measurement accuracy can reach about 0.0454 mm.
NASA Astrophysics Data System (ADS)
Zamani, Pooria; Kayvanrad, Mohammad; Soltanian-Zadeh, Hamid
2012-12-01
This article presents a compressive sensing approach for reducing data acquisition time in cardiac cine magnetic resonance imaging (MRI). In cardiac cine MRI, several images are acquired throughout the cardiac cycle, each of which is reconstructed from the raw data acquired in the Fourier transform domain, traditionally called k-space. In the proposed approach, a majority, e.g., 62.5%, of the k-space lines (trajectories) are acquired at the odd time points and a minority, e.g., 37.5%, of the k-space lines are acquired at the even time points of the cardiac cycle. Optimal data acquisition at the even time points is learned from the data acquired at the odd time points. To this end, statistical features of the k-space data at the odd time points are clustered by fuzzy c-means and the results are considered as the states of Markov chains. The resulting data is used to train hidden Markov models and find their transition matrices. Then, the trajectories corresponding to transition matrices far from an identity matrix are selected for data acquisition. At the end, an iterative thresholding algorithm is used to reconstruct the images from the under-sampled k-space datasets. The proposed approaches for selecting the k-space trajectories and reconstructing the images generate more accurate images compared to alternative methods. The proposed under-sampling approach achieves an acceleration factor of 2 for cardiac cine MRI.
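The trajectory-selection criterion, keeping transition matrices "far from an identity matrix", can be sketched with a Frobenius-norm distance. The abstract does not name the distance actually used, so that metric choice (and the function names) are assumptions.

```python
def identity_distance(matrix):
    """Frobenius distance of a Markov transition matrix from the identity;
    a large distance means the state changes often, i.e. the trajectory
    is dynamically informative."""
    n = len(matrix)
    return sum((matrix[i][j] - (1.0 if i == j else 0.0)) ** 2
               for i in range(n) for j in range(n)) ** 0.5

def select_trajectories(transition_mats, k):
    # Indices of the k trajectories whose transition matrices lie
    # farthest from the identity, returned in ascending index order.
    ranked = sorted(range(len(transition_mats)),
                    key=lambda i: identity_distance(transition_mats[i]),
                    reverse=True)
    return sorted(ranked[:k])
```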
An improved parallel fuzzy connected image segmentation method based on CUDA.
Wang, Liansheng; Li, Dong; Huang, Shaohui
2016-05-12
The fuzzy connectedness method (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes prohibitively long. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed that adds a correction step on the edge points, greatly enhancing the calculation accuracy. The improved method is iterative: in the first iteration, the affinity computation strategy is changed and a look-up table is employed to reduce memory usage; in the second iteration, the voxels computed erroneously because of asynchrony are updated again. Three CT sequences of hepatic vasculature of different sizes were used in the experiments, each with three different seeds. An NVIDIA Tesla C2075 was used to evaluate the improved method on these three data sets. Experimental results show that the improved algorithm achieves faster segmentation than the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that the method corrects the edge-point calculation error of the original CUDA-kFOE at a comparable time cost. In the future, we will focus on automatic acquisition and automatic processing methods.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-28
... systems. E. Quantitative Methods for Comparing Capital Frameworks The NPR sought comment on how the... industry while assessing levels of capital. This commenter points out maintaining reliable comparative data over time could make quantitative methods for this purpose difficult. For example, evaluating asset...
Kayano, Mitsunori; Matsui, Hidetoshi; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru
2016-04-01
High-throughput time course expression profiles have become available in the last decade due to developments in measurement techniques and devices. Functional data analysis, which treats smoothed curves instead of the originally observed discrete data, is effective for time course expression profiles in terms of dimension reduction, robustness, and applicability to data measured at few and irregularly spaced time points. However, statistical methods for differential analysis of time course expression profiles have not been well established. We propose a functional logistic model based on elastic net regularization (F-Logistic) in order to identify genes with dynamic alterations in case/control studies. We employ a mixed model as a smoothing method to obtain functional data; F-Logistic is then applied to time course profiles measured at few and irregularly spaced time points. We evaluate the performance of F-Logistic in comparison with another functional data approach, the functional ANOVA test (F-ANOVA), by applying both methods to real and synthetic time course data sets. The real data sets consist of time course gene expression profiles for the long-term effects of recombinant interferon β on disease progression in multiple sclerosis. F-Logistic distinguishes dynamic alterations, which cannot be found by competitive approaches such as F-ANOVA, in case/control studies based on time course expression profiles. F-Logistic is effective for time-dependent biomarker detection, diagnosis, and therapy.
Multiview 3D sensing and analysis for high quality point cloud reconstruction
NASA Astrophysics Data System (ADS)
Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard
2018-04-01
Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
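Of the filters named, Radius Outlier Removal is the simplest to sketch: a point survives only if enough other points lie within a given radius. A brute-force pure-Python version for illustration; production pipelines use a k-d tree (e.g. in PCL or Open3D) for the neighbor search.

```python
def radius_outlier_removal(points, radius, min_neighbors):
    """Keep a 3-D point only if at least `min_neighbors` other points
    lie within `radius` of it.  O(n^2) brute force, for clarity only."""
    r2 = radius * radius
    kept = []
    for i, p in enumerate(points):
        count = 0
        for j, q in enumerate(points):
            if i != j and sum((a - b) ** 2 for a, b in zip(p, q)) <= r2:
                count += 1
        if count >= min_neighbors:
            kept.append(p)
    return kept
```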
Amato, Ernesto; Campennì, Alfredo; Leotta, Salvatore; Ruggeri, Rosaria M; Baldari, Sergio
2016-06-01
Radioiodine therapy is an effective and safe treatment of hyperthyroidism due to Graves' disease, toxic adenoma, and toxic multinodular goiter. We compared the outcomes of a traditional calculation method, based on an analytical fit of the uptake curve and subsequent dose calculation with the MIRD approach, with an alternative computation approach based on a formulation implemented in a public-access website, searching for the best timing of radioiodine uptake measurements in pre-therapeutic dosimetry. We report on sixty-nine hyperthyroid patients who were treated after pre-therapeutic dosimetry calculated by fitting a six-point uptake curve (3-168h). In order to evaluate the results of the radioiodine treatment, patients were followed up to sixty-four months after treatment (mean 47.4±16.9). Patient dosimetry was then retrospectively recalculated with the two above-mentioned methods. Several time schedules for uptake measurements were considered, with different timings and total numbers of points. Early time schedules, sampling uptake only up to 48h, do not allow an accurate treatment plan to be set up, while schedules including the measurement at one week give significantly better results. The analytical fit procedure applied to the three-point time schedule 3(6)-24-168h gave results significantly more accurate than the website approach exploiting either the same schedule or the single measurement at 168h. Consequently, the best strategy among those considered is to sample the uptake at 3(6)-24-168h and carry out an analytical fit of the curve, while extra measurements at 48 and 72h yield only marginal improvements in the accuracy of the therapeutic activity determination.
Time Series Expression Analyses Using RNA-seq: A Statistical Approach
Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P.
2013-01-01
RNA-seq is becoming the de facto standard approach for transcriptome analysis with ever-reducing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulations of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model timecourse RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis. PMID:23586021
NASA Astrophysics Data System (ADS)
Du, Ruiling; Wu, Keng; Zhang, Jiazhi; Zhao, Yong
Reaction kinetics, a branch of metallurgical physical chemistry successfully applied in metallurgy (both ferrous and non-ferrous), became an important theoretical foundation for the subject system of traditional metallurgy: its research methods are well developed, and it has formed independent structures and systems. One of the important tasks of metallurgical reaction engineering is the simulation of the metallurgical process, which requires that the mechanism of the reaction process and the conversion time points of the different control links be obtained accurately. The research methods and results of reaction kinetics in metallurgical physical chemistry are therefore not well suited to metallurgical reaction engineering. In order to provide the definite conditions of transmission, the reaction kinetics parameters, and the conversion time points of the different control links needed to solve the transmission and reaction equations in metallurgical reaction engineering, a new method for studying kinetics mechanisms in metallurgical reaction engineering, named the stepwise attempt method, was proposed. The comparison of results between the two methods and the further development of the stepwise attempt method are then discussed in this paper. As a new research method for reaction kinetics in metallurgical reaction engineering, the stepwise attempt method can not only support the development of metallurgical reaction engineering, but also provide the necessary guarantees for establishing its independent subject system.
Estimation of network path segment delays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, Kathleen Marie
A method for estimation of a network path segment delay includes determining a scaled time stamp for each packet of a plurality of packets by scaling a time stamp for each respective packet to minimize a difference of at least one of a frequency and a frequency drift between a transport protocol clock of a host and a monitoring point. The time stamp for each packet is provided by the transport protocol clock of the host. A corrected time stamp for each packet is determined by removing from the scaled time stamp for each respective packet a temporal offset between the transport protocol clock and the monitoring clock by minimizing a temporal delay variation of the plurality of packets traversing a segment between the host and the monitoring point.
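The two-step procedure described above (scale away the clock-frequency difference, then remove the constant offset) can be sketched as follows; this is an illustrative least-squares and minimum-delay version, not the claimed method itself:

```python
import numpy as np

def delay_variation(host_ts, monitor_ts):
    """Illustrative sketch: scale host timestamps to the monitor clock
    rate with a least-squares slope, then remove the constant offset by
    attributing the minimum observed delay to the fixed path offset.
    Returns the per-packet delay variation (e.g. queuing-type delay)."""
    host_ts = np.asarray(host_ts, dtype=float)
    monitor_ts = np.asarray(monitor_ts, dtype=float)
    # slope of monitor time vs. host time ~ relative clock frequency
    slope, intercept = np.polyfit(host_ts, monitor_ts, 1)
    scaled = slope * host_ts
    # the minimum residual is treated as the constant clock offset
    offset = np.min(monitor_ts - scaled)
    corrected = scaled + offset
    return monitor_ts - corrected
```

On packets with no queuing, the returned variation is zero; any positive residual represents variable delay on the host-to-monitor segment.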
Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.
Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun
2014-01-01
A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321
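The voxel-based flag map amounts to keeping at most one point per cell of a regular 3D grid, so redundant returns in the same voxel are discarded. A minimal sketch (the function name and voxel size are illustrative, not from the paper):

```python
import numpy as np

def voxel_filter(points, voxel_size=0.1):
    """Keep the first point falling in each voxel of a regular 3D grid,
    mimicking the flag-map idea of discarding redundant points."""
    points = np.asarray(points, dtype=float)
    # integer voxel key for each point
    keys = np.floor(points / voxel_size).astype(np.int64)
    # first occurrence per unique voxel
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first_idx)]
```

An incremental real-time system would instead test each incoming point against a persistent flag table, but the deduplication rule is the same.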
Estimating Aircraft Heading Based on Laserscanner Derived Point Clouds
NASA Astrophysics Data System (ADS)
Koppanyi, Z.; Toth, C., K.
2015-03-01
Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After addressing the problem and specifying the equation of motion to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated here. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is used between the consecutive point clouds to determine the horizontal translation of the captured aircraft body. Regarding the ICP, three different versions were compared, namely, the ordinary 3D, 3-DoF 3D, and 2-DoF 3D ICP. It was found that the 2-DoF 3D ICP provides the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed plane. The three methods were compared using three test datasets, which are distinguished by object-sensor distance, heading, and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, but the first and third methods give robust and accurate results at a 40 m object distance and at ~12 knots for a small Cessna airplane.
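A crude stand-in for the translation-based heading estimate: if the aircraft moves rigidly between two scans, the displacement of the point-cloud centroid approximates the motion direction. The paper's 2-DoF ICP estimates this translation far more robustly; this sketch only illustrates the geometric idea:

```python
import numpy as np

def heading_from_scans(cloud_t0, cloud_t1):
    """Rough heading (degrees) from the horizontal centroid displacement
    between two scans of the same rigidly translating object."""
    d = np.mean(np.asarray(cloud_t1, float), axis=0) \
        - np.mean(np.asarray(cloud_t0, float), axis=0)
    # heading in the horizontal (x, y) plane
    return float(np.degrees(np.arctan2(d[1], d[0])))
```

Centroid differencing breaks down when the two scans see different parts of the aircraft body, which is exactly the occlusion problem that motivates ICP-style registration.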
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)
1987-01-01
The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space- and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.
Afonso, J; Lopes, S; Gonçalves, R; Caldeira, P; Lago, P; Tavares de Sousa, H; Ramos, J; Gonçalves, A R; Ministro, P; Rosa, I; Vieira, A I; Dias, C C; Magro, F
2016-10-01
Therapeutic drug monitoring is a powerful strategy known to improve clinical outcomes and to optimise healthcare resources in the treatment of autoimmune diseases. Currently, most of the methods commercially available for the quantification of infliximab (IFX) are ELISA-based, with a turnaround time of approximately 8 h, delaying the target dosage adjustment to the following infusion. Our aim was to validate the first point-of-care IFX quantification device available on the market - the Quantum Blue Infliximab assay (Buhlmann, Schonenbuch, Switzerland) - by comparing it with two well-established methods. The three methods were used to assay the IFX concentration of spiked samples and of the serum of 299 inflammatory bowel disease (IBD) patients undergoing IFX therapy. The point-of-care assay had an average IFX recovery of 92% and was the most precise among the tested methods. The intraclass correlation coefficients of the point-of-care IFX assay vs. the two ELISA-based established methods were 0.889 and 0.939. Moreover, the accuracy of the point-of-care IFX assay compared with each of the two reference methods was 77% and 83%, and the kappa statistics revealed substantial agreement (0.648 and 0.738). The Quantum Blue IFX assay can successfully replace the commonly used ELISA-based IFX quantification kits. Its ability to deliver results within 15 min makes it ideal for immediate target-concentration-adjusted dosing. Moreover, it is a user-friendly desktop device that does not require specific laboratory facilities or highly specialised personnel. © 2016 John Wiley & Sons Ltd.
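The kappa statistics quoted above measure chance-corrected agreement between two assays over categorical calls (e.g. sub- vs. supra-therapeutic classifications). A minimal sketch of Cohen's kappa for two raters:

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two raters over the same items.
    a, b: equal-length lists of categorical labels."""
    labels = sorted(set(a) | set(b))
    n = len(a)
    # observed agreement
    po = sum(x == y for x, y in zip(a, b)) / n
    # agreement expected by chance from each rater's marginal frequencies
    pe = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    return (po - pe) / (1 - pe)
```

Values of 0.648 and 0.738, as reported, fall in the conventional "substantial agreement" band (0.61-0.80).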
Cubature versus Fekete-Gauss nodes for spectral element methods on simplicial meshes
NASA Astrophysics Data System (ADS)
Pasquetti, Richard; Rapetti, Francesca
2017-10-01
In a recent JCP paper [9], a higher order triangular spectral element method (TSEM) is proposed to address seismic wave field modeling. The main interest of this TSEM is that the mass matrix is diagonal, so that explicit time marching becomes very cheap. This property results from the fact that, similarly to the usual SEM (say QSEM), the basis functions are Lagrange polynomials based on a set of points that shows both nice interpolation and quadrature properties. In the quadrangle, i.e. for the QSEM, the set of points is simply obtained by tensorial product of Gauss-Lobatto-Legendre (GLL) points. In the triangle, finding such an appropriate set of points is however not trivial. Thus, the work of [9] follows earlier works that started in the 2000s [2,6,11] and now provides cubature nodes and weights up to N = 9, where N is the total degree of the polynomial approximation. Here we wish to evaluate the accuracy of this cubature-nodes TSEM with respect to the Fekete-Gauss one, see e.g. [12], which makes use of two sets of points, namely the Fekete points and the Gauss points of the triangle for interpolation and quadrature, respectively. Because the Fekete-Gauss TSEM is in the spirit of any nodal hp-finite element method, one may expect that the conclusions of this Note will remain relevant if using other sets of carefully defined interpolation points.
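For reference, the GLL points used in the quadrangle are the endpoints of [-1, 1] together with the roots of the derivative of a Legendre polynomial. A sketch using NumPy's Legendre class (a numerical illustration, not how a production SEM code would compute them):

```python
import numpy as np

def gll_points(n):
    """Gauss-Lobatto-Legendre points on [-1, 1]: the two endpoints plus
    the roots of P'_{n-1}, the derivative of the Legendre polynomial of
    degree n-1. Returns n points in ascending order."""
    if n < 2:
        raise ValueError("need at least 2 points")
    coeffs = np.zeros(n)
    coeffs[-1] = 1.0  # Legendre series for P_{n-1}
    interior = np.polynomial.legendre.Legendre(coeffs).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))
```

The 2D QSEM nodes are then the tensor product `gll_points(n)` x `gll_points(n)`; the triangle admits no such product construction, which is the difficulty this abstract discusses.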
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
The positioning error of the robot is a main factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a vision sensor. Existing compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and place two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with a single camera and 0.031 mm with dual cameras. We conclude that the algorithm of the single-camera method needs to be improved for higher accuracy, while the accuracy of the dual-camera method is suitable for application.
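Computing the transformation from the sensor's current position to the global frame from measured control points is a rigid-registration problem. A standard Kabsch/Procrustes sketch (a generic solution, not necessarily the authors' exact algorithm):

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t with R @ src_i + t ~ dst_i
    (Kabsch algorithm), as used to re-reference measured control points
    from the sensor frame to a global frame."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

With at least three non-collinear control points, R and t are determined uniquely, which is why the dual-camera setup tracking several control points can recover the full sensor pose.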
NASA Astrophysics Data System (ADS)
Lenoir, Guillaume; Crucifix, Michel
2018-03-01
We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion article (Lenoir and Crucifix, 2018). All the methods presented in this paper are available to the reader in the Python package WAVEPAL.
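The Lomb-Scargle periodogram at the core of this framework can be sketched in a few lines for a single angular frequency, without the trend terms, WOSA averaging, or CARMA significance testing developed in the paper:

```python
import numpy as np

def lomb_scargle(t, y, omega):
    """Classic Lomb-Scargle periodogram power at angular frequency omega
    for an irregularly sampled series y(t). The time shift tau makes the
    sine and cosine terms orthogonal over the sample times."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    y = y - np.mean(y)
    tau = np.arctan2(np.sum(np.sin(2 * omega * t)),
                     np.sum(np.cos(2 * omega * t))) / (2 * omega)
    c = np.cos(omega * (t - tau))
    s = np.sin(omega * (t - tau))
    return 0.5 * ((np.sum(y * c) ** 2) / np.sum(c ** 2)
                  + (np.sum(y * s) ** 2) / np.sum(s ** 2))
```

Evaluating this over a grid of frequencies gives the raw periodogram that the paper's WOSA segmentation then averages; the WAVEPAL package implements the full machinery.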
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.; Iijima, Byron; Meyer, Robert; Bar-Sever, Yoaz; Accad, Elie
2004-01-01
This paper evaluates the performance of a single-frequency receiver using the 1-Hz differential corrections provided by NASA's global differential GPS system. While the dual-frequency user has the ability to eliminate the ionosphere error by taking a linear combination of observables, the single-frequency user must remove or calibrate this error by other means. To remove the ionosphere error, we take advantage of the fact that the group delay in the range observable and the carrier phase advance have the same magnitude but opposite sign. A way to calibrate this error is to use a real-time database of grid points computed by JPL's RTI (Real-Time Ionosphere) software. In both cases we evaluate the positional accuracy of a kinematic carrier-phase-based point positioning method on a global scale.
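The opposite-sign property mentioned above underlies the classic GRAPHIC combination for single-frequency users: averaging the code and carrier observables cancels the first-order ionospheric term, up to the carrier ambiguity. A sketch of that cancellation:

```python
def graphic_combination(pseudorange, carrier_phase):
    """GRAPHIC-style ionosphere-free observable for a single-frequency
    user: the ionospheric group delay enters the pseudorange with +I and
    the carrier phase with -I, so their average removes I to first order
    (the carrier ambiguity, ignored here, survives in the average)."""
    return 0.5 * (pseudorange + carrier_phase)
```

The trade-off is that the combined observable inherits half the code noise, which is why phase-smoothed or grid-calibrated approaches (like the RTI database above) are also of interest.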
Variance change point detection for fractional Brownian motion based on the likelihood ratio test
NASA Astrophysics Data System (ADS)
Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz
2018-01-01
Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon in self-similar processes. It appears that for many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection for cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. Here, we also propose a statistical test for the significance of the estimated critical point. In addition, an extensive simulation study is provided to test the performance of the proposed method.
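For ordinary Brownian motion, whose increments are independent Gaussians, the likelihood-ratio scan for a single variance change takes a simple closed form. A sketch of that special case (the paper's contribution is extending this to fractional Brownian motion, where increments are correlated):

```python
import numpy as np

def variance_change_point(x):
    """Maximum-likelihood-ratio scan for a single variance change in a
    zero-mean Gaussian increment sequence x. Returns the candidate
    change index and the (unnormalised) log-likelihood-ratio statistic."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    full = n * np.log(np.mean(x ** 2))  # single-variance null model
    best_k, best_stat = None, -np.inf
    for k in range(2, n - 2):
        left = k * np.log(np.mean(x[:k] ** 2))
        right = (n - k) * np.log(np.mean(x[k:] ** 2))
        stat = full - left - right  # 2x log LR up to a constant
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat
```

Significance of the maximal statistic must still be calibrated (e.g. by simulation), since the maximum over k does not follow a simple chi-squared law.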
Jivănescu, Anca; Bratu, Dana Cristina; Tomescu, Lucian; Măroiu, Alexandra Cristina; Popa, George; Bratu, Emanuel Adrian
2015-01-01
Using two measurement methods (a three-dimensional laser scanning system and a digital caliper), this study compares the lower face morphology of complete edentulous patients, before and after prosthodontic rehabilitation with bimaxillary complete dentures. Fourteen edentulous patients were randomly selected from the Department of Prosthodontics, at the Faculty of Dental Medicine, "Victor Babes" University of Medicine and Pharmacy, Timisoara, Romania. The changes that occurred in the lower third of the face after prosthodontic treatment were assessed quantitatively by measuring the vertical projection of the distances between two sets of anthropometric landmarks: Subnasale - cutaneous Pogonion (D1) and Labiale superius - Labiale inferius (D2). A two-way repeated measures ANOVA model was carried out to test for significant interactions, main effects, and differences between the two types of measuring devices and between the initial and final rehabilitation time points. The main effect of the type of measuring device showed no statistically significant differences in the measured distances (p=0.24 for D1 and p=0.39 for D2) at either the initial or the final rehabilitation time point. Regarding the main effect of time, there were statistically significant differences in both measured distances D1 and D2 (p=0.001) between the initial and the final rehabilitation time points. The two methods of measurement were equally reliable in the assessment of lower face morphology changes in edentulous patients after prosthodontic rehabilitation with bimaxillary complete dentures. The differences between the measurements taken before and after prosthodontic rehabilitation proved to be statistically significant.
Detection method of flexion relaxation phenomenon based on wavelets for patients with low back pain
NASA Astrophysics Data System (ADS)
Nougarou, François; Massicotte, Daniel; Descarreaux, Martin
2012-12-01
The flexion relaxation phenomenon (FRP) can be defined as a reduction or silence of myoelectric activity of the lumbar erector spinae muscle during full trunk flexion. It is typically absent in patients with chronic low back pain (LBP). Before any broad clinical utilization of this neuromuscular response can be made, effective, standardized, and accurate methods of identifying FRP limits are needed. However, this phenomenon is clearly more difficult to detect in LBP patients than in healthy subjects. The main goal of this study is to develop an automated method based on wavelet transformation that improves the detection of the time point limits of the FRP in surface electromyography signals of LBP patients. Conventional visual identification and the proposed automated methods for detecting the time point limits of the relaxation phase were compared on experimental data using criteria of accuracy and repeatability based on physiological properties. The evaluation demonstrates that the use of the wavelet transform (WT) yields better results than methods without wavelet decomposition. Furthermore, methods based on the wavelet packet transform are more effective than algorithms employing the discrete WT. Compared to visual detection, in addition to an obvious saving of time, the wavelet packet transform improves the accuracy and repeatability of the detection of the FRP limits. These results clearly highlight the value of the proposed technique in identifying the onset and offset of the flexion relaxation response in LBP subjects.
Mapped Chebyshev Pseudo-Spectral Method for Dynamic Aero-Elastic Problem of Limit Cycle Oscillation
NASA Astrophysics Data System (ADS)
Im, Dong Kyun; Kim, Hyun Soon; Choi, Seongim
2018-05-01
A mapped Chebyshev pseudo-spectral method is developed as one of the Fourier-spectral approaches and solves nonlinear PDE systems for unsteady flows and dynamic aero-elastic problem in a given time interval, where the flows or elastic motions can be periodic, nonperiodic, or periodic with an unknown frequency. The method uses the Chebyshev polynomials of the first kind for the basis function and redistributes the standard Chebyshev-Gauss-Lobatto collocation points more evenly by a conformal mapping function for improved numerical stability. Contributions of the method are several. It can be an order of magnitude more efficient than the conventional finite difference-based, time-accurate computation, depending on the complexity of solutions and the number of collocation points. The method reformulates the dynamic aero-elastic problem in spectral form for coupled analysis of aerodynamics and structures, which can be effective for design optimization of unsteady and dynamic problems. A limit cycle oscillation (LCO) is chosen for the validation and a new method to determine the LCO frequency is introduced based on the minimization of a second derivative of the aero-elastic formulation. Two examples of the limit cycle oscillation are tested: nonlinear, one degree-of-freedom mass-spring-damper system and two degrees-of-freedom oscillating airfoil under pitch and plunge motions. Results show good agreements with those of the conventional time-accurate simulations and wind tunnel experiments.
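One common conformal redistribution of the Chebyshev-Gauss-Lobatto points is the Kosloff-Tal-Ezer arcsine map; the paper's exact mapping function may differ, but the idea of spreading the endpoint-clustered nodes more evenly can be sketched as:

```python
import numpy as np

def mapped_cgl_points(n, alpha=0.99):
    """Chebyshev-Gauss-Lobatto points x_j = cos(pi*j/n), j = 0..n,
    redistributed by the Kosloff-Tal-Ezer conformal map
    g(x) = arcsin(alpha*x) / arcsin(alpha). The map is one illustrative
    choice for the remapping described in the abstract; alpha controls
    how close the nodes come to a uniform grid."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    return np.arcsin(alpha * x) / np.arcsin(alpha)
```

Because the standard CGL nodes cluster as O(1/n^2) near the endpoints, explicit time stepping suffers a severe CFL restriction; stretching the endpoint spacing, as this map does, relaxes that restriction and improves numerical stability.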
An adaptive clustering algorithm for image matching based on corner feature
NASA Astrophysics Data System (ADS)
Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song
2018-04-01
Traditional image matching algorithms cannot achieve a good balance between real-time performance and accuracy. To solve this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method is based on the similarity of the vectors formed by matching point pairs, and adaptive clustering is performed on the matching point pairs. Harris corner detection is carried out first, the feature points of the reference image and the perceived image are extracted, and the feature points of the two images are initially matched by the Normalized Cross Correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is used to match the matching points after clustering. The experimental results show that the proposed algorithm can effectively eliminate most wrong matching points while the correct matching points are retained, improving the accuracy of RANSAC matching and reducing the computational load of the whole matching process at the same time.
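The initial matching step scores candidate corner pairs by normalized cross-correlation between the patches around them. A minimal sketch of that similarity function (the clustering and RANSAC stages are not shown):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches.
    Returns a value in [-1, 1]; mean subtraction and normalisation make
    the score invariant to affine brightness changes."""
    a = np.asarray(patch_a, dtype=float).ravel()
    b = np.asarray(patch_b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Pairs whose NCC score exceeds a threshold become the putative matches that the adaptive clustering then prunes before the RANSAC verification.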
A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform
Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.
2013-01-01
Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real-time or near real-time performance if applied to critical clinical applications like image-assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduce a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, optimizing memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speed-up over the sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by design. Therefore the parallel algorithm can be extended to other computing platforms, as well as to other point matching related applications. PMID:24308014
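The non-regular partition groups the landmarks into one cluster per available core with K-means, so spatially close points land on the same core. A plain NumPy sketch of that partitioning step (the paper targets the Cell/B.E., but the partition itself is platform-independent):

```python
import numpy as np

def partition_landmarks(points, n_cores, n_iter=20, seed=0):
    """Assign each landmark to one of n_cores clusters with a plain
    k-means loop, so nearby landmarks are processed on the same core."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), n_cores, replace=False)]
    for _ in range(n_iter):
        # nearest-center assignment
        dists = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        # recompute centers of non-empty clusters
        for k in range(n_cores):
            if np.any(labels == k):
                centers[k] = pts[labels == k].mean(axis=0)
    return labels
```

Each core then matches only the landmarks in its own cluster, which is what keeps the per-core working set small and the data transfer low.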
NASA Astrophysics Data System (ADS)
Qiu, Mo; Yu, Simin; Wen, Yuqiong; Lü, Jinhu; He, Jianbin; Lin, Zhuosheng
In this paper, a novel design methodology and its FPGA hardware implementation for a universal chaotic signal generator are proposed via a Verilog HDL fixed-point algorithm and state machine control. According to the continuous-time or discrete-time chaotic equations, a Verilog HDL fixed-point algorithm and its corresponding digital system are first designed. On the FPGA hardware platform, each operation step of the Verilog HDL fixed-point algorithm is then controlled by a state machine. The generality of this method is that any given chaotic equation can be decomposed into four basic operation procedures, i.e. nonlinear function calculation, iterative sequence operation, right shifting and ceiling of the iterative values, and output of the chaotic iterative sequences, each of which corresponds to a single state under state machine control. Compared with a Verilog HDL floating-point algorithm, the Verilog HDL fixed-point algorithm saves FPGA hardware resources and improves operation efficiency. FPGA-based hardware experimental results validate the feasibility and reliability of the proposed approach.
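The "right shifting" step reflects standard fixed-point arithmetic: multiplying two values with n fractional bits yields a product with 2n fractional bits, which must be shifted back to restore the scale. A schematic Python model of such a datapath, iterating the logistic map as an illustrative chaotic equation (the paper targets arbitrary chaotic equations in Verilog; the map and Q-format here are assumptions for illustration):

```python
def logistic_fixed_point(x0_q, steps, frac_bits=16):
    """Iterate the logistic map x <- 4*x*(1-x) in Qm.16 fixed point,
    mimicking a Verilog fixed-point datapath: integer multiplies
    followed by a right shift to discard the extra fractional bits."""
    one = 1 << frac_bits        # fixed-point representation of 1.0
    x = x0_q
    out = []
    for _ in range(steps):
        # 4 * x * (one - x) carries 2*frac_bits fractional bits;
        # shift right by frac_bits to return to the Qm.16 scale
        x = (4 * x * (one - x)) >> frac_bits
        out.append(x)
    return out
```

Compared with a floating-point datapath, this integer multiply-and-shift pipeline maps directly onto FPGA DSP blocks, which is the resource and efficiency advantage the abstract claims.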