Multigrid methods for bifurcation problems: The self-adjoint case
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1987-01-01
This paper deals with multigrid methods for computational problems that arise in the theory of bifurcation and is restricted to the self-adjoint case. The basic problem is to solve for arcs of solutions, a task that is done successfully with an arc-length continuation method. Other important issues are, for example, detecting and locating singular points as part of the continuation process, switching branches at bifurcation points, etc. Multigrid methods have been applied to continuation problems. These methods work well at regular points and at limit points, but they may encounter difficulties in the vicinity of bifurcation points. A new continuation method that is very efficient also near bifurcation points is presented here. The other issues mentioned above are also treated very efficiently with appropriate multigrid algorithms. For example, it is shown that limit points and bifurcation points can be solved for directly by a multigrid algorithm. Moreover, the algorithms presented here solve the corresponding problems in just a few work units (about 10 or less), where a work unit is the work involved in one local relaxation on the finest grid.
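As a minimal illustration of the arc-length continuation idea referred to above (a scalar sketch, not Taasan's multigrid algorithm), the following Python fragment traces a solution arc of f(u, lam) = u^3 - u + lam = 0 around its limit points, where naive stepping in lam alone would fail; the test problem, step size, and tangent initialization are illustrative assumptions.

```python
import numpy as np

def f(u, lam):          # toy problem with fold (limit) points
    return u**3 - u + lam

def fu(u, lam):         # df/du
    return 3*u**2 - 1

def flam(u, lam):       # df/dlam
    return 1.0

def arclength_step(u0, lam0, du, dlam, ds, n_newton=20, tol=1e-12):
    """One pseudo-arclength step: predict along the tangent (du, dlam),
    then correct with Newton on the system augmented by the arc-length
    constraint, so limit points pose no difficulty."""
    u, lam = u0 + ds*du, lam0 + ds*dlam              # predictor
    for _ in range(n_newton):                        # corrector
        r = np.array([f(u, lam),
                      (u - u0)*du + (lam - lam0)*dlam - ds])
        if np.linalg.norm(r) < tol:
            break
        J = np.array([[fu(u, lam), flam(u, lam)],
                      [du,          dlam        ]])
        step = np.linalg.solve(J, -r)
        u, lam = u + step[0], lam + step[1]
    return u, lam

# trace the arc starting from the known solution (u, lam) = (-1, 0)
u, lam, du, dlam = -1.0, 0.0, 0.0, -1.0
for _ in range(60):
    un, lamn = arclength_step(u, lam, du, dlam, ds=0.05)
    t = np.array([un - u, lamn - lam])
    du, dlam = t / np.linalg.norm(t)                 # secant tangent update
    u, lam = un, lamn
    print(f"lambda = {lam:+.3f}   u = {u:+.3f}")     # passes both folds
```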
Xu, Hongyi; Barbic, Jernej
2017-01-01
We present an algorithm for fast continuous collision detection between points and signed distance fields, and demonstrate how to robustly use it for 6-DoF haptic rendering of contact between objects with complex geometry. Continuous collision detection is often needed in computer animation, haptics, and virtual reality applications, but has so far only been investigated for polygon (triangular) geometry representations. We demonstrate how to robustly and continuously detect intersections between points and level sets of the signed distance field. We suggest using an octree subdivision of the distance field for fast traversal of distance field cells. We also give a method to resolve continuous collisions between point clouds organized into a tree hierarchy and a signed distance field, enabling rendering of contact between rigid objects with complex geometry. We investigate and compare two 6-DoF haptic rendering methods now applicable to point-versus-distance field contact for the first time: continuous integration of penalty forces, and a constraint-based method. An experimental comparison to discrete collision detection demonstrates that the continuous method is more robust and can correctly resolve collisions even under high velocities and during complex contact.
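The following sketch illustrates the basic point-versus-signed-distance-field collision query in isolation (a conservative-advancement toy, not the authors' octree-accelerated algorithm or their haptic rendering pipeline); the unit-sphere SDF and the linear point trajectory are invented for illustration.

```python
import numpy as np

def sdf(p):                       # stand-in SDF: unit sphere at the origin
    return np.linalg.norm(p) - 1.0

def first_contact(x0, v, t_max=1.0, eps=1e-6):
    """Conservative advancement: a signed distance field is 1-Lipschitz,
    so a point moving at speed |v| needs at least sdf(x)/|v| time to
    reach the zero level set; stepping by that amount can never tunnel
    through the surface, even at high velocity."""
    speed = np.linalg.norm(v)
    t = 0.0
    while t < t_max:
        d = sdf(x0 + t*v)
        if d < eps:
            return t                  # contact time found
        t += d / speed                # safe step, no tunneling
    return None                       # no contact within the time step

x0 = np.array([-2.0, 0.3, 0.0])
v  = np.array([ 4.0, 0.0, 0.0])       # fast motion within one time step
t = first_contact(x0, v)
print("contact at t =", t, "point =", x0 + t*v)
```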
Ewald Electrostatics for Mixtures of Point and Continuous Line Charges.
Antila, Hanne S; Van Tassel, Paul R; Sammalkorpi, Maria
2015-10-15
Many charged macro- or supramolecular systems, such as DNA, are approximately rod-shaped and, to the lowest order, may be treated as continuous line charges. However, the standard method used to calculate electrostatics in molecular simulation, the Ewald summation, is designed to treat systems of point charges. We extend the Ewald concept to a hybrid system containing both point charges and continuous line charges. We find the calculated force between a point charge and (i) a continuous line charge and (ii) a discrete line charge consisting of uniformly spaced point charges to be numerically equivalent when the separation greatly exceeds the discretization length. At shorter separations, discretization induces deviations in the force and energy, and point charge-point charge correlation effects. Because significant computational savings are also possible, the continuous line charge Ewald method presented here offers the possibility of accurate and efficient electrostatic calculations.
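A hedged numerical check in the spirit of the comparison described (plain Coulomb sums, not the Ewald machinery): the transverse force on a unit point charge from a finite continuous line charge, by quadrature, versus the same line discretized into uniformly spaced point charges. All geometry and units (Gaussian, k = 1) are illustrative.

```python
import numpy as np

def force_continuous(r, L=10.0, lam=1.0, n_quad=20000):
    """Transverse force on a unit point charge at distance r from the
    midpoint of a line of length L with uniform linear charge density
    lam, by direct quadrature over the continuous charge."""
    s = np.linspace(-L/2, L/2, n_quad)
    ds = s[1] - s[0]
    return np.sum(lam * r / (r*r + s*s)**1.5) * ds

def force_discrete(r, L=10.0, lam=1.0, n_pts=100):
    """Same line represented by n_pts uniformly spaced point charges."""
    s = (np.arange(n_pts) + 0.5) * L/n_pts - L/2
    q = lam * L / n_pts
    return np.sum(q * r / (r*r + s*s)**1.5)

b = 10.0 / 100                     # discretization length of the line
for r in [0.05, 0.1, 0.5, 1.0, 5.0]:
    fc, fd = force_continuous(r), force_discrete(r)
    print(f"r/b = {r/b:5.1f}:  continuous {fc:.6f}   discrete {fd:.6f}")
# the two agree once r greatly exceeds b; at r < b discretization
# induces the short-separation deviations the abstract describes
```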
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, either graphics processing unit multithreading or an increased spacing of control points is adopted to speed up the implementation of the GRBF method. Experiments show that, based on the continuous GRBF model, image deconvolution can be implemented efficiently by the method, which is also of considerable reference value for the study of three-dimensional microscopic image deconvolution.
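A minimal one-dimensional sketch of the central identity (the paper's method is two-dimensional and GPU-accelerated; this shows only the principle): a Gaussian basis convolved with a Gaussian PSF remains Gaussian with summed variances, so deconvolution reduces to a linear solve for the control-point weights. The grid, widths, and noise level are assumptions.

```python
import numpy as np

def gauss(x, mu, sig):
    return np.exp(-0.5*((x - mu)/sig)**2)

ctrl = np.linspace(0, 10, 21)          # control points of the GRBF model
sig, sig_psf = 0.4, 0.8                # basis width, PSF width
rng = np.random.default_rng(0)
w_true = rng.random(ctrl.size)         # ground-truth weights

x = np.linspace(0, 10, 200)
# key identity: a Gaussian of width sig convolved with a Gaussian PSF of
# width sig_psf is again Gaussian, of width sqrt(sig^2 + sig_psf^2)
# (amplitude factors are absorbed into the weights here)
sig_blur = np.hypot(sig, sig_psf)
A_blur = gauss(x[:, None], ctrl[None, :], sig_blur)
g = A_blur @ w_true + 1e-3*rng.standard_normal(x.size)   # observed image

# deconvolution = recover the control-point weights by least squares
w_hat, *_ = np.linalg.lstsq(A_blur, g, rcond=None)

A_sharp = gauss(x[:, None], ctrl[None, :], sig)
f_restored = A_sharp @ w_hat           # deblurred continuous model
print("weight RMSE:", np.sqrt(np.mean((w_hat - w_true)**2)))
```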
Campos, Jennifer L.; Siegle, Joshua H.; Mohler, Betty J.; Bülthoff, Heinrich H.; Loomis, Jack M.
2009-01-01
Background The extent to which actual movements and imagined movements maintain a shared internal representation has been a matter of much scientific debate. Of the studies examining such questions, few have directly compared actual full-body movements to imagined movements through space. Here we used a novel continuous pointing method to a) provide a more detailed characterization of self-motion perception during actual walking and b) compare the pattern of responding during actual walking to that which occurs during imagined walking. Methodology/Principal Findings This continuous pointing method requires participants to view a target and continuously point towards it as they walk, or imagine walking past it along a straight, forward trajectory. By measuring changes in the pointing direction of the arm, we were able to determine participants' perceived/imagined location at each moment during the trajectory and, hence, perceived/imagined self-velocity during the entire movement. The specific pattern of pointing behaviour that was revealed during sighted walking was also observed during blind walking. Specifically, a peak in arm azimuth velocity was observed upon target passage and a strong correlation was observed between arm azimuth velocity and pointing elevation. Importantly, this characteristic pattern of pointing was not consistently observed during imagined self-motion. Conclusions/Significance Overall, the spatial updating processes that occur during actual self-motion were not evidenced during imagined movement. Because of the rich description of self-motion perception afforded by continuous pointing, this method is expected to have significant implications for several research areas, including those related to motor imagery and spatial cognition and to applied fields for which mental practice techniques are common (e.g. rehabilitation and athletics). PMID:19907655
NASA Astrophysics Data System (ADS)
Bailly, J. S.; Dartevelle, M.; Delenne, C.; Rousseau, A.
2017-12-01
Floodplain and major river bed topography govern many river biophysical processes during floods. Despite the growth of direct topographic measurement by LiDAR on riverine systems, there is still room to develop methods for large (e.g. deltas) or very local (e.g. ponds) riverine systems that take advantage of information coming from simple SAR or optical image processing on the floodplain, resulting from waterbody delineation during flood rise or recession and producing ordered contour lines. The next challenge is to exploit such data in order to estimate continuous topography on the floodplain by combining heterogeneous data: a topographic point dataset and a located but unvalued, ordered contour-line dataset. This article compares two methods designed to estimate continuous floodplain topography by mixing ordinal contour lines and continuous topographic points. In both methods, a first estimation step assigns an elevation value to each contour line, and a second step then estimates the continuous field from both the topographic points and the valued contour lines. The first proposed method is stochastic, based on multi-Gaussian random fields and conditional simulation. The second is deterministic, based on the radial spline functions for thin plates used in approximate bivariate surface construction. Results are first shown and discussed for a set of synthetic case studies with varying topographic point density and topographic smoothness. Results are then shown and discussed for an actual case study in the Montagua laguna, located north of Valparaíso, Chile.
NASA Astrophysics Data System (ADS)
Brown, T. G.; Lespez, L.; Sear, D. A.; Houben, P.; Klimek, K.
2016-12-01
Quadratic polynomial interpolation on triangular domain
NASA Astrophysics Data System (ADS)
Li, Ying; Zhang, Congcong; Yu, Qian
2018-04-01
In the simulation of natural terrain, the continuity of sample points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information carried by the data points. A new method for constructing a polynomial interpolation surface on a triangular domain is therefore proposed. First, the scattered spatial data points are projected onto a plane and triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to stay as close as possible to the linear interpolant. Finally, the unknown quantities are obtained by minimizing the objective functions, with boundary points treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying given accuracy and continuity requirements, without becoming overly convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells, and so on. Experimental results for the new surface are given.
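For context, the building block underlying such constructions is the six-node quadratic interpolant on a single triangle in barycentric coordinates, sketched below (this is the standard quadratic patch, not the paper's C1 minimization scheme); the triangle and test function are arbitrary.

```python
import numpy as np

# triangle vertices and the three edge midpoints (six nodes in all)
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
nodes = np.vstack([V, (V[[0, 1, 2]] + V[[1, 2, 0]]) / 2.0])

def barycentric(p):
    """Barycentric coordinates (l1, l2, l3) of point p in triangle V."""
    T = np.column_stack([V[0] - V[2], V[1] - V[2]])
    l12 = np.linalg.solve(T, p - V[2])
    return np.array([l12[0], l12[1], 1.0 - l12.sum()])

def quad_shape(lam):
    """Standard quadratic shape functions on a triangle:
    vertex functions l(2l - 1), edge functions 4*l_i*l_j."""
    l1, l2, l3 = lam
    return np.array([l1*(2*l1 - 1), l2*(2*l2 - 1), l3*(2*l3 - 1),
                     4*l1*l2, 4*l2*l3, 4*l3*l1])

f = lambda p: p[0]**2 + p[0]*p[1] - p[1]     # any quadratic is reproduced
z = np.array([f(n) for n in nodes])          # data at the six nodes

p = np.array([0.3, 0.25])
approx = quad_shape(barycentric(p)) @ z
print(approx, "vs exact", f(p))              # identical up to rounding
```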
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan
2015-11-01
To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT), the authors have developed a level-set based surface reconstruction method. The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking, as it eliminates the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed formulation with an efficient narrowband evolving scheme. The method was evaluated on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied the method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the reconstructed surfaces were validated qualitatively by comparing local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT of the same patient. On phantom point clouds, the method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively its faithfulness in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were obtained from both the reconstructed surface and the CT surface, with mean and standard deviation of (μrecon = -2.7 × 10⁻³ mm⁻¹, σrecon = 7.0 × 10⁻³ mm⁻¹) and (μCT = -2.5 × 10⁻³ mm⁻¹, σCT = 5.3 × 10⁻³ mm⁻¹), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrates the ability of the proposed method to faithfully represent the underlying patient surface. In summary, the authors have developed an accurate level-set based continuous surface reconstruction method for point clouds acquired by a 3D surface photogrammetry system. The method generates a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements, and serves as an important first step for further development of motion tracking methods during radiotherapy.
Exact extraction method for road rutting laser lines
NASA Astrophysics Data System (ADS)
Hong, Zhiming
2018-02-01
This paper analyzes the importance of asphalt pavement rutting detection for pavement maintenance and administration, presents the shortcomings of existing rutting detection methods, and proposes a new rutting line-laser extraction method based on peak intensity characteristics and peak continuity. The peak intensity characteristic is enhanced by a designed transverse mean filter, and an intensity map of the peak characteristic, computed over the whole road image, is obtained to determine the seed point of the rutting laser line. Taking the seed point as the starting point, the light points of the rutting line-laser are extracted based on peak continuity features, providing accurate basic data for the subsequent calculation of pavement rutting depths.
Minimizing Higgs potentials via numerical polynomial homotopy continuation
NASA Astrophysics Data System (ADS)
Maniatis, M.; Mehta, D.
2012-08-01
The study of models with extended Higgs sectors requires minimizing the corresponding Higgs potentials, which is in general very difficult. Here, we apply a recently developed method, called numerical polynomial homotopy continuation (NPHC), which is guaranteed to find all the stationary points of Higgs potentials with polynomial-like nonlinearity. The detection of all stationary points reveals the structure of the potential: maxima, metastable minima, and saddle points, besides the global minimum. We apply the NPHC method to the most general Higgs potential having two complex Higgs-boson doublets and up to five real Higgs-boson singlets. Moreover, the method is applicable to even more involved potentials. Hence the NPHC method allows one to go far beyond the limits of the Gröbner basis approach.
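A one-variable illustration of what "finding all stationary points" buys (numpy's companion-matrix root finder stands in for NPHC, which does the analogous job for multivariate polynomial systems); the sample potential is invented.

```python
import numpy as np

# toy potential V(x) = x^6 - 4x^4 + 3x^2 + 0.2x (coefficients arbitrary)
V   = np.poly1d([1, 0, -4, 0, 3, 0.2, 0])
dV  = V.deriv()
d2V = dV.deriv()

roots = dV.roots                          # ALL stationary points at once
real = roots[np.abs(roots.imag) < 1e-9].real

for x in np.sort(real):
    kind = ("minimum" if d2V(x) > 0 else
            "maximum" if d2V(x) < 0 else "degenerate")
    print(f"x = {x:+.4f}   V = {V(x):+.4f}   {kind}")

# with every stationary point in hand, the global minimum is just the
# minimum-energy member of the local minima (no risk of missing it)
mins = [x for x in real if d2V(x) > 0]
print("global minimum at x =", min(mins, key=V))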
Applications of the gambling score in evaluating earthquake predictions and forecasts
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe
2010-05-01
This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecast scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines according to a fair rule how many reputation points the forecaster gains if he succeeds, and takes away the points bet by the forecaster if he loses. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
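A minimal bookkeeping sketch of the fair rule as described: each discrete prediction stakes one reputation point, and the payoff (1 - p)/p against a reference model that assigns probability p to the event makes the expected gain zero when the reference model is true. The probabilities and outcomes below are toy numbers, not data from the cited applications.

```python
def gambling_score(bets, reputation=100.0):
    """bets: list of (p_ref, outcome) pairs, where p_ref is the
    reference-model probability of the predicted event and outcome is
    True if the event occurred.  Each bet stakes one reputation point;
    the payoff (1 - p_ref)/p_ref is 'fair' in that the expected gain
    vanishes if the reference model is the true model."""
    for p_ref, outcome in bets:
        if outcome:
            reputation += (1.0 - p_ref) / p_ref   # house pays out
        else:
            reputation -= 1.0                     # stake is lost
    return reputation

# toy example: rare events (p_ref = 0.05 under a Poisson-like reference)
bets = [(0.05, True), (0.05, False), (0.05, False), (0.05, True)]
print(gambling_score(bets))   # 100 + 2*19 - 2 = 136 reputation points
```

Successful bets on events the reference model deems unlikely pay out heavily, which is exactly how the score rewards risk taken against the house.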
Internet Point of Care Learning at a Community Hospital
ERIC Educational Resources Information Center
Sinusas, Keith
2009-01-01
Introduction: Internet point of care (PoC) learning is a relatively new method for obtaining continuing medical education credits. Few data are available to describe physician utilization of this CME activity. Methods: We describe the Internet point of care system we developed at a medium-sized community hospital and report on its first year of…
Jenkins, Martin
2016-01-01
Objective. In clinical trials of RA, it is common to assess effectiveness using end points based upon dichotomized continuous measures of disease activity, which classify patients as responders or non-responders. Although dichotomization generally loses statistical power, there are good clinical reasons to use these end points; for example, to allow for patients receiving rescue therapy to be assigned as non-responders. We adopt a statistical technique called the augmented binary method to make better use of the information provided by these continuous measures and account for how close patients were to being responders. Methods. We adapted the augmented binary method for use in RA clinical trials. We used a previously published randomized controlled trial (Oral SyK Inhibition in Rheumatoid Arthritis-1) to assess its performance in comparison to a standard method treating patients purely as responders or non-responders. The power and error rate were investigated by sampling from this study. Results. The augmented binary method reached similar conclusions to standard analysis methods but was able to estimate the difference in response rates to a higher degree of precision. Results suggested that CI widths for ACR responder end points could be reduced by at least 15%, which could equate to reducing the sample size of a study by 29% to achieve the same statistical power. For other end points, the gain was even higher. Type I error rates were not inflated. Conclusion. The augmented binary method shows considerable promise for RA trials, making more efficient use of patient data whilst still reporting outcomes in terms of recognized response end points. PMID:27338084
Critical viewpoints on the methods of realizing the metal freezing points of the ITS-90
NASA Astrophysics Data System (ADS)
Ma, C. K.
1995-08-01
The time-honored method for realizing the freezing point tf of a metal (in practice necessarily a dilute alloy) is that of continuous, slow freezing, in which the plateau temperature (the result of the solidifying material being so pure that its phase-transition temperature is observably constant) is measured. The freezing point being an equilibrium temperature, Ancsin considers this method to be inappropriate in principle: equilibrium between the solid and liquid phases cannot be achieved while the solid is being cooled to dispose of the released latent heat and while it is accreting at the expense of the liquid. In place of the continuous freezing method he has employed the pulse-heating method (in which the sample is allowed to approach equilibrium after each heat pulse) in his study of Ag; his measurements suggest that freezing can produce non-negligible errors. Here we examine both methods and conclude that the freezing method, employing an inside solid-liquid interface thermally isolated by an outside interface, can provide realizations of the highest accuracy; in either method, perturbation, by inducing solid-liquid phase transition continuously or intermittently, is essential for detecting equilibrium thermally. The respective merits and disadvantages of these two methods, and also of the inner-melt method, are discussed. We conclude that what a freezing-point measurement in effect measures is the (however minutely varying) phase-transition and nonconstitutional-equilibrium temperature ti at the solid-liquid interface. The objective is then to measure the ti that is the best measure of tf, which is, normally, the plateau temperature.
A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer
NASA Astrophysics Data System (ADS)
Zheng, G.; Cheng, Y.; He, K.; Duan, F.; Ma, Y.
2014-01-01
The Sunset semi-continuous carbon analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, we identified a new type of SCCA calculation discrepancy, caused by the default multi-point baseline correction method. Beyond a certain threshold carbon load, multi-point correction can cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples with three temperature protocols. For ambient samples, 22%, 36% and 12% of TC was underestimated by the three protocols, respectively, with corresponding thresholds of ~0, 20 and 25 μg C. For sucrose, however, such discrepancy was observed with only one of these protocols, indicating the need for a more refractory SCCA calibration substance. The discrepancy was less significant for the NIOSH (National Institute for Occupational Safety and Health)-like protocol than for the two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments). Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction is therefore to use multi-point corrected data below the determined threshold and single-point results beyond it. The effectiveness of this correction method was supported by correlation with optical data.
Gambling scores for earthquake predictions and forecasts
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang
2010-04-01
This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecast scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines according to a fair rule how many reputation points the forecaster gains if he succeeds, and takes away the points bet by the forecaster if he loses. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
Continuous Human Action Recognition Using Depth-MHI-HOG and a Spotter Model
Eum, Hyukmin; Yoon, Changyong; Lee, Heejin; Park, Mignon
2015-01-01
In this paper, we propose a new method for spotting and recognizing continuous human actions using a vision sensor. The method comprises depth-MHI-HOG (DMH), action modeling, action spotting, and recognition. First, to effectively separate the foreground from the background, we propose a method called DMH. It provides a standard structure for segmenting images and extracting features using depth information, motion history images (MHI), and histograms of oriented gradients (HOG). Second, action modeling is performed to model various actions using the extracted features: sequences of actions are created through k-means clustering, and these sequences constitute the HMM input. Third, an action spotting method is proposed to filter meaningless actions from continuous actions and to identify precise start and end points of actions. By employing the spotter model, the proposed method improves action recognition performance. Finally, the proposed method recognizes actions based on start and end points. We evaluate recognition performance by obtaining and comparing probabilities from input sequences applied to the action models and the spotter model. Through various experiments, we demonstrate that the proposed method is efficient for recognizing continuous human actions in real environments. PMID:25742172
NASA Astrophysics Data System (ADS)
Hirata, Hiroshi; Itoh, Toshiharu; Hosokawa, Kouichi; Deng, Yuanmu; Susaki, Hitoshi
2005-08-01
This article describes a systematic method for determining the cutoff frequency of the low-pass window function that is used for deconvolution in two-dimensional continuous-wave electron paramagnetic resonance (EPR) imaging. An evaluation function for the criterion used to select the cutoff frequency is proposed, and is the product of the effective width of the point spread function for a localized point signal and the noise amplitude of a resultant EPR image. The present method was applied to EPR imaging for a phantom, and the result of cutoff frequency selection was compared with that based on a previously reported method for the same projection data set. The evaluation function has a global minimum point that gives the appropriate cutoff frequency. Images with reasonably good resolution and noise suppression can be obtained from projections with an automatically selected cutoff frequency based on the present method.
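A schematic one-dimensional sketch of the stated criterion, selecting the cutoff that minimizes (effective PSF width) × (noise amplitude); the Gaussian kernel, Gaussian window shape, and noise model are illustrative assumptions, not the article's EPR imaging setup.

```python
import numpy as np

n = 512
x = np.arange(n)
rng = np.random.default_rng(1)

psf = np.exp(-0.5*(x - n//2)**2 / 4.0**2)        # broadening kernel
PSF = np.fft.rfft(np.roll(psf, -n//2))            # kernel centered at 0
freq = np.fft.rfftfreq(n)

point = np.zeros(n); point[n//2] = 1.0            # localized point signal
blurred = np.fft.irfft(np.fft.rfft(point) * PSF)  # noise-free response
noise = 0.02 * rng.standard_normal(n)             # measured noise level

def deconvolve(sig, cutoff):
    """Inverse filter with a Gaussian low-pass window of width cutoff."""
    W = np.exp(-0.5*(freq/cutoff)**2)
    return np.fft.irfft(np.fft.rfft(sig) * W / PSF)

def effective_width(r):
    w = np.abs(r) / np.abs(r).sum()
    mu = (w * x).sum()
    return np.sqrt((w * (x - mu)**2).sum())

def evaluation(cutoff):
    """Product criterion: restored point-spread width times the noise
    amplitude after deconvolution; its global minimum picks the cutoff."""
    return (effective_width(deconvolve(blurred, cutoff))
            * deconvolve(noise, cutoff).std())

cutoffs = np.linspace(0.02, 0.3, 29)
print("selected cutoff frequency:", min(cutoffs, key=evaluation))
```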
2016-04-01
...incorporated with nonlinear elements to produce a continuous, quasi-nonlinear simulation model. Extrapolation methods within the model stitching architecture... Keywords: Stitched Simulation Model; Quasi-Nonlinear; Piloted Simulation; Flight-Test Implications; System Identification; Off-Nominal Loading Extrapolation; Stability.
An approach for the regularization of a power flow solution around the maximum loading point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kataoka, Y.
1992-08-01
In the conventional power flow solution, the boundary conditions are directly specified by the active and reactive power at each node, so that the singular point coincides with the maximum loading point. For this reason, the computations are often disturbed by ill-conditioning. This paper proposes a new method for obtaining wide-range regularity by modifying the conventional power flow solution method, thereby eliminating the singular point or shifting it to the region with voltage lower than that of the maximum loading point. The continuous tracing of V-P curves, including the maximum loading point, is thus realized. The efficiency and effectiveness of the method are tested on a practical 598-node system in comparison with the conventional method.
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely that the gradients of the constraints are linearly independent. In practice, this regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or, say, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
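A sketch of the generic feasible-point flow such methods build on, using the standard tangent-space projector rather than the paper's singularity-free replacement: the negative gradient is projected onto the tangent space of the constraint, so a feasible start stays (numerically) feasible. The objective, constraint, and renormalization step are illustrative.

```python
import numpy as np

f_grad = lambda x: np.array([x[0] - 2.0, 2.0*(x[1] - 1.0)])  # grad of f
h      = lambda x: x @ x - 1.0                               # constraint h(x) = 0
h_jac  = lambda x: 2.0*x                                     # gradient of h

x = np.array([0.0, -1.0])            # feasible start on the unit circle
dt = 1e-3
for _ in range(20000):
    J = h_jac(x)
    # standard projector onto the tangent space of {h = 0}; it is
    # singular exactly when J vanishes, the failure the paper avoids
    P = np.eye(2) - np.outer(J, J) / (J @ J)
    x = x - dt * (P @ f_grad(x))     # Euler step of the gradient flow
    # small pull-back to the constraint set, standing in for the
    # paper's modification for solutions that drift off h(x) = 0
    x = x / np.linalg.norm(x)

print("x* =", x, "  h(x*) =", h(x))
```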
Riley, Richard D; Elia, Eleni G; Malin, Gemma; Hemming, Karla; Price, Malcolm P
2015-07-30
A prognostic factor is any measure that is associated with the risk of future health outcomes in those with existing disease. Often, the prognostic ability of a factor is evaluated in multiple studies. However, meta-analysis is difficult because primary studies often use different methods of measurement and/or different cut-points to dichotomise continuous factors into 'high' and 'low' groups; selective reporting is also common. We illustrate how multivariate random effects meta-analysis models can accommodate multiple prognostic effect estimates from the same study, relating to multiple cut-points and/or methods of measurement. The models account for within-study and between-study correlations, which utilises more information and reduces the impact of unreported cut-points and/or measurement methods in some studies. The applicability of the approach is improved with individual participant data and by assuming a functional relationship between prognostic effect and cut-point to reduce the number of unknown parameters. The models provide important inferential results for each cut-point and method of measurement, including the summary prognostic effect, the between-study variance and a 95% prediction interval for the prognostic effect in new populations. Two applications are presented. The first reveals that, in a multivariate meta-analysis using published results, the Apgar score is prognostic of neonatal mortality but effect sizes are smaller at most cut-points than previously thought. In the second, a multivariate meta-analysis of two methods of measurement provides weak evidence that microvessel density is prognostic of mortality in lung cancer, even when individual participant data are available so that a continuous prognostic trend is examined (rather than cut-points). © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Rabani, Eran; Reichman, David R.; Krilov, Goran; Berne, Bruce J.
2002-01-01
We present a method based on augmenting an exact relation between a frequency-dependent diffusion constant and the imaginary time velocity autocorrelation function, combined with the maximum entropy numerical analytic continuation approach to study transport properties in quantum liquids. The method is applied to the case of liquid para-hydrogen at two thermodynamic state points: a liquid near the triple point and a high-temperature liquid. Good agreement for the self-diffusion constant and for the real-time velocity autocorrelation function is obtained in comparison to experimental measurements and other theoretical predictions. Improvement of the methodology and future applications are discussed. PMID:11830656
Aryal, Arjun; Brooks, Benjamin A.; Reid, Mark E.; Bawden, Gerald W.; Pawlak, Geno
2012-01-01
Acquiring spatially continuous ground-surface displacement fields from Terrestrial Laser Scanners (TLS) will allow better understanding of the physical processes governing landslide motion at detailed spatial and temporal scales. Problems arise, however, when estimating continuous displacement fields from TLS point-clouds because reflecting points from sequential scans of moving ground are not defined uniquely, thus repeat TLS surveys typically do not track individual reflectors. Here, we implemented the cross-correlation-based Particle Image Velocimetry (PIV) method to derive a surface deformation field using TLS point-cloud data. We estimated associated errors using the shape of the cross-correlation function and tested the method's performance with synthetic displacements applied to a TLS point cloud. We applied the method to the toe of the episodically active Cleveland Corral Landslide in northern California using TLS data acquired in June 2005–January 2007 and January–May 2010. Estimated displacements ranged from decimeters to several meters and they agreed well with independent measurements at better than 9% root mean squared (RMS) error. For each of the time periods, the method provided a smooth, nearly continuous displacement field that coincides with independently mapped boundaries of the slide and permits further kinematic and mechanical inference. For the 2010 data set, for instance, the PIV-derived displacement field identified a diffuse zone of displacement that preceded by over a month the development of a new lateral shear zone. Additionally, the upslope and downslope displacement gradients delineated by the dense PIV field elucidated the non-rigid behavior of the slide.
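A minimal sketch of the FFT cross-correlation step at the core of PIV (the TLS gridding, error model, and subpixel refinement of the study are not reproduced): a synthetic patch is shifted by a known amount and the displacement is recovered from the correlation peak.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
a = rng.standard_normal((N, N))            # gridded intensity/height patch
shift = (3, -5)                            # true displacement (rows, cols)
b = np.roll(a, shift, axis=(0, 1))         # second epoch with known motion

# cross-correlation via FFT; the peak location gives the displacement
corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
est = [(p + N//2) % N - N//2 for p in peak]   # map indices to signed shifts
print("estimated displacement:", est)         # -> [3, -5]
```

In practice the sharpness of the correlation peak also yields the per-window error estimate the abstract mentions.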
Code of Federal Regulations, 2012 CFR
2012-07-01
40 CFR, Protection of Environment: Appendix 3 to Subpart A of Part 435, Procedure for Mixing Base Fluids With Sediments (EPA Method 1646). Environmental Protection Agency (Continued); Effluent Guidelines and Standards (Continued); Oil and Gas Extraction Point...
An alternative extragradient projection method for quasi-equilibrium problems.
Chen, Haibin; Wang, Yiju; Xu, Yi
2018-01-01
For the quasi-equilibrium problem, in which the players' costs and their strategies both depend on the rivals' decisions, an alternative extragradient projection method is designed. Unlike the classical extragradient projection method, whose generated sequence has the contraction property with respect to the solution set, the newly designed method possesses an expansion property with respect to a given initial point. The global convergence of the method is established under the assumptions of pseudomonotonicity of the equilibrium function and continuity of the underlying multi-valued mapping. Furthermore, we show that the generated sequence converges to the point in the solution set nearest to the initial point. Numerical experiments show the efficiency of the method.
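For reference, the classical extragradient projection iteration that the proposed variant modifies (the expansion-property version itself is not reproduced here), applied to a small affine variational inequality on the nonnegative orthant; the matrix, vector, and step size are toy choices.

```python
import numpy as np

# affine monotone map F(x) = M x + q on C = nonnegative orthant
M = np.array([[4.0, 1.0], [-1.0, 3.0]])   # positive definite part
q = np.array([-4.0, -3.0])
F = lambda x: M @ x + q
proj = lambda x: np.maximum(x, 0.0)       # Euclidean projection onto C

x = np.array([5.0, 5.0])
tau = 0.2                                 # step size below 1/Lipschitz
for k in range(200):
    y = proj(x - tau * F(x))              # predictor (trial) step
    x_new = proj(x - tau * F(y))          # corrector re-evaluates F at y
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new

# solution satisfies x >= 0, F(x) >= 0 and complementarity x . F(x) = 0
print("solution:", x, "  F(x):", F(x))
```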
Write On with Continuous Stroke Point.
ERIC Educational Resources Information Center
Thurber, Donald N.
1983-01-01
The continuous stroke print program is intended to lead up to cursive writing by teaching printing using a consistent letter slant and a flowing rhythm absent in the traditional ball-stick method. This approach is also helpful in reading. (CL)
An automated model-based aim point distribution system for solar towers
NASA Astrophysics Data System (ADS)
Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen
2016-05-01
Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
4.1. The case of a non-degenerate minimum point ([137], I)
4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
6.1. General situations
6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
7.1. Homogeneous fields and fields with constant dispersion
7.2. Finitely many maximum points of dispersion
7.3. Manifold of maximum points of dispersion
7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2; Gaussian fields with the set of parameters in Hilbert space
8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ²
8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
8.4. Asymptotics of distributions of maxima of the norms of l²-valued Gaussian processes
8.5. Exact asymptotics of large deviations for the l²-valued Ornstein-Uhlenbeck process
Bibliography
NASA Astrophysics Data System (ADS)
Lösler, Michael; Haas, Rüdiger; Eschelbach, Cornelia
2013-08-01
The Global Geodetic Observing System (GGOS) requires sub-mm accuracy, automated and continual determinations of the so-called local tie vectors at co-location stations. Co-location stations host instrumentation for several space geodetic techniques and the local tie surveys involve the relative geometry of the reference points of these instruments. Thus, these reference points need to be determined in a common coordinate system, which is a particular challenge for rotating equipment like radio telescopes for geodetic Very Long Baseline Interferometry. In this work we describe a concept to achieve automated and continual determinations of radio telescope reference points with sub-mm accuracy. We developed a monitoring system, including Java-based sensor communication for automated surveys, network adjustment and further data analysis. This monitoring system was tested during a monitoring campaign performed at the Onsala Space Observatory in the summer of 2012. The results obtained in this campaign show that it is possible to perform automated determination of a radio telescope reference point during normal operations of the telescope. Accuracies on the sub-mm level can be achieved, and continual determinations can be realized by repeated determinations and recursive estimation methods.
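A toy two-dimensional sketch of the geometric core of such surveys: a telescope-mounted target traces a circle as the telescope rotates about an axis, and the axis point is recovered by an algebraic least-squares circle fit (the Kasa fit). The geometry and noise level are invented, and the real procedure involves 3D survey networks and recursive estimation.

```python
import numpy as np

rng = np.random.default_rng(3)
center_true = np.array([12.345, -7.890])   # unknown axis position (m)
R = 3.2                                    # target's rotation radius (m)

ang = np.deg2rad(np.linspace(0, 300, 25))  # surveyed telescope poses
pts = center_true + R*np.column_stack([np.cos(ang), np.sin(ang)])
pts += 0.0005*rng.standard_normal(pts.shape)   # 0.5 mm survey noise

# Kasa fit: x^2 + y^2 = 2ax + 2by + c is linear in (a, b, c),
# with c = r^2 - a^2 - b^2, so one least-squares solve suffices
A = np.column_stack([2*pts, np.ones(len(pts))])
y = (pts**2).sum(axis=1)
(a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
r = np.sqrt(c + a*a + b*b)
print("estimated axis point:", (a, b), "  radius:", r)
```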
Chen, Xiaofeng; Song, Qiankun; Li, Zhongshan; Zhao, Zhenjiang; Liu, Yurong
2018-07-01
This paper addresses the stability problem for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, discrete-time analogs are obtained that preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternions, the homeomorphic mapping theorem, and the Lyapunov theorem, some sufficient conditions for the existence, uniqueness, and global asymptotic stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively. Furthermore, a uniform sufficient condition for the existence, uniqueness, and global asymptotic stability of the equilibrium point is obtained for both the continuous-time QVNNs and their discrete-time version. Finally, two numerical examples are provided to substantiate the effectiveness of the proposed results.
An efficient method to compute microlensed light curves for point sources
NASA Technical Reports Server (NTRS)
Witt, Hans J.
1993-01-01
We present a method to compute microlensed light curves for point sources. This method has the general advantage that all microimages contributing to the light curve are found. While a source moves along a straight line, all microimages are located either on the primary image track or on the secondary image tracks (loops). The primary image track extends from -infinity to +infinity and is made of many segments which are continuously connected. All the secondary image tracks (loops) begin and end on the lensing point masses. The method can be applied to any microlensing situation with point masses in the deflector plane, even for the overcritical case and surface densities close to the critical one. Furthermore, we present general rules for evaluating the light curve for a straight track arbitrarily placed in the caustic network of a sample of many point masses.
An hp symplectic pseudospectral method for nonlinear optimal control
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong
2017-01-01
An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method exhibits, on one hand, exponential convergence when the number of collocation points increases with a fixed number of sub-intervals and, on the other hand, linear convergence when the number of sub-intervals increases with a fixed number of collocation points. Furthermore, combined with an hp strategy based on the residual error of the dynamic constraints, the proposed method can achieve given precisions within a few iterations. Five examples highlight the high precision and computational efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2016-09-01
With the establishment of the terrestrial laser scanner, the analysis strategies in engineering geodesy have changed from pointwise approaches to areal ones. These areal analysis strategies are commonly built on modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance, and the B-spline's complexity is mostly governed by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension; furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
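A minimal one-dimensional illustration of the selection idea (the paper treats B-spline curves and surfaces, and its VC-dimension-based criterion is not shown): fit least-squares cubic splines with an increasing number of uniform interior knots and pick the control-point count minimizing AIC or BIC. The data and knot placement are arbitrary.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 400)
y = np.sin(6*np.pi*x) * np.exp(-2*x) + 0.05*rng.standard_normal(x.size)

def criteria(n_interior):
    """Fit a cubic least-squares B-spline with n_interior uniform
    interior knots and return (AIC, BIC).  A cubic spline with
    n_interior interior knots has n_interior + 4 control points."""
    knots = np.linspace(0, 1, n_interior + 2)[1:-1]
    spl = LSQUnivariateSpline(x, y, knots, k=3)
    rss = float(spl.get_residual())        # sum of squared residuals
    n, p = x.size, n_interior + 4
    aic = n*np.log(rss/n) + 2*p
    bic = n*np.log(rss/n) + p*np.log(n)    # BIC penalizes size harder
    return aic, bic

for m in range(1, 25):
    aic, bic = criteria(m)
    print(f"{m + 4:3d} control points   AIC {aic:9.1f}   BIC {bic:9.1f}")
# choose the number of control points minimizing AIC (or BIC); both
# balance goodness of fit against model complexity, replacing
# trial-and-error with a reproducible criterion
```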
NASA Astrophysics Data System (ADS)
Rostworowski, A.
2007-01-01
We adopt Leaver's [E. Leaver, Proc. R. Soc. Lond. A402, 285 (1985)] method to determine quasinormal frequencies of the Schwarzschild black hole in higher (D ≥ 10) dimensions. In the D-dimensional Schwarzschild metric, as D increases, more and more singularities, spaced uniformly on the unit circle |r| = 1, approach the horizon at r = rh = 1. Thus, a solution satisfying the outgoing wave boundary condition at the horizon must be continued to some mid point, and only then can the continued fraction condition be applied. This prescription is general and applies to all cases in which, due to regular singularities on the way from the point of interest to the irregular singularity, Leaver's method in its original setting breaks down. We illustrate the method by calculating gravitational vector and tensor quasinormal frequencies of the Schwarzschild black hole in D = 11 and D = 10 dimensions. We also give the details for the D = 9 case, considered in the work of P. Bizoń, T. Chmaj, A. Rostworowski, B. G. Schmidt and Z. Tabor, Phys. Rev. D72, 121502(R) (2005).
Efficient continuous-variable state tomography using Padua points
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
Further development of quantum technologies calls for efficient characterization methods for quantum systems. While recent work has focused on discrete systems of qubits, much remains to be done for continuous-variable systems such as a microwave mode in a cavity. We introduce a novel technique to reconstruct the full Husimi Q or Wigner function from measurements done at the Padua points in phase space, the optimal sampling points for interpolation in 2D. Our technique not only reduces the number of experimental measurements, but remarkably, also allows for the direct estimation of any density matrix element in the Fock basis, including off-diagonal elements. OLC acknowledges financial support from NSERC.
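A sketch generating the Padua points from their grid characterization, two interleaved Chebyshev-Lobatto grids with an index-parity constraint, and using them to fit a total-degree polynomial; the test function is arbitrary, and the reconstruction-from-measurements step of the abstract is not reproduced.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def padua_points(n):
    """Padua points of degree n: pairs (cos(j*pi/n), cos(k*pi/(n+1)))
    on two interleaved Chebyshev-Lobatto grids, keeping index pairs of
    odd parity; there are (n+1)(n+2)/2 of them, exactly the dimension
    of bivariate polynomials of total degree <= n."""
    return np.array([(np.cos(j*np.pi/n), np.cos(k*np.pi/(n + 1)))
                     for j in range(n + 1) for k in range(n + 2)
                     if (j + k) % 2 == 1])

n = 10
P = padua_points(n)
assert len(P) == (n + 1)*(n + 2)//2

# fit a smooth test function in a Chebyshev product basis of total
# degree n; the Padua points make this square system unisolvent
f = lambda x, y: np.exp(x*y) + np.sin(3*x)
def basis(x, y):
    return np.column_stack([Chebyshev.basis(i)(x) * Chebyshev.basis(j)(y)
                            for i in range(n + 1)
                            for j in range(n + 1 - i)])

coef, *_ = np.linalg.lstsq(basis(P[:, 0], P[:, 1]),
                           f(P[:, 0], P[:, 1]), rcond=None)
xt, yt = np.array([0.3]), np.array([-0.7])
print("interpolant:", basis(xt, yt) @ coef, " exact:", f(xt, yt))
```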
On stability of fixed points and chaos in fractional systems.
Edelman, Mark
2018-02-01
In this paper, we propose a method to calculate asymptotically period-two sinks and define the range of stability of fixed points for a variety of discrete fractional systems of order 0<α<2. The method is tested on various forms of fractional generalizations of the standard and logistic maps. Based on our analysis, we conjecture that chaos is impossible in the corresponding continuous fractional systems.
Dual domain material point method for multiphase flows
NASA Astrophysics Data System (ADS)
Zhang, Duan
2017-11-01
Although the particle-in-cell method was first invented in the 1960s for fluid computations, one of its later versions, the material point method, is mostly used for solid calculations. Recent development of multi-velocity formulations for multiphase flows and fluid-structure interactions requires that the Lagrangian capability of the method be combined with Eulerian calculations for fluids. Because of different numerical representations of the materials, additional numerical schemes are needed to ensure continuity of the materials. New applications of the method to compute fluid motions have revealed numerical difficulties in various versions of the method. To resolve these difficulties, the dual domain material point method is introduced and improved. Unlike other particle-based methods, the material point method uses both Lagrangian particles and an Eulerian mesh, and therefore avoids direct communication between particles. With this unique property and the Lagrangian capability of the method, it is shown that a multiscale numerical scheme can be built efficiently on the dual domain material point method. In this talk, the theoretical foundation of the method will be introduced and numerical examples will be shown. Work sponsored by the next generation code project of LANL.
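The particle/grid interplay that distinguishes the material point method from purely particle-based schemes can be sketched in one dimension as below: particle mass and momentum are projected onto grid nodes through shape functions, so particles never communicate directly. This minimal sketch uses linear (tent) shape functions on a uniform grid starting at zero; the dual domain variant's modified shape-function gradients are not reproduced here.

```python
import numpy as np

def particle_to_grid_1d(xp, mp, vp, n_nodes, h):
    """Project particle positions/masses/velocities (xp, mp, vp)
    onto a uniform grid with n_nodes nodes and spacing h."""
    mass = np.zeros(n_nodes)
    momentum = np.zeros(n_nodes)
    for x, m, v in zip(xp, mp, vp):
        i = int(x // h)          # index of the left grid node
        w = x / h - i            # fractional position inside the cell
        mass[i] += (1 - w) * m
        mass[i + 1] += w * m
        momentum[i] += (1 - w) * m * v
        momentum[i + 1] += w * m * v
    return mass, momentum
```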
PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization
NASA Astrophysics Data System (ADS)
Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh
2017-05-01
Multiple-point geostatistics is a well-known general statistical framework by which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based methods are its two major categories. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses the energy concept to model geological phenomena. While honoring the hard data points, the minimization of our proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all patches surrounding the simulation nodes. Therefore, it preserves pattern continuity in both continuous and categorical variables very well. It also shows a fuzzy result in every realization, similar to the expected result of multiple realizations of other statistical models. While the main core of most previous multiple-point geostatistics methods is sequential, the parallel main core of our algorithm enables it to use the GPU efficiently to reduce the CPU time. A new validation method for MPS is also proposed in this paper.
Continuation Power Flow with Variable-Step Variable-Order Nonlinear Predictor
NASA Astrophysics Data System (ADS)
Kojima, Takayuki; Mori, Hiroyuki
This paper proposes a new continuation power flow calculation method for drawing a P-V curve in power systems. The continuation power flow calculation successively evaluates power flow solutions by changing a specified value of the power flow calculation. In recent years, power system operators have become quite concerned with voltage instability due to the appearance of deregulated and competitive power markets. The continuation power flow calculation plays an important role in understanding the load characteristics in the sense of static voltage instability. In this paper, a new continuation power flow with a variable-step variable-order (VSVO) nonlinear predictor is proposed. The proposed method evaluates optimal predicted points conforming to the shape of P-V curves. The proposed method is successfully applied to the IEEE 118-bus and IEEE 300-bus systems.
NASA Astrophysics Data System (ADS)
Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone
2016-10-01
The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
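To fix ideas, one SGP-type iteration can be sketched as below: scale the gradient, project onto the feasible set, and damp the resulting direction with an Armijo backtracking step. The diagonal scaling, the fixed steplength, and the toy problem are placeholder choices, not the rules analyzed in the paper.

```python
import numpy as np

def sgp_step(x, f, grad, project, d, alpha=1.0, beta=1e-4, tau=0.5):
    """One scaled gradient projection iteration:
    y = P(x - alpha * D * grad f(x)), then backtrack along y - x."""
    y = project(x - alpha * d * grad(x))
    direction = y - x
    lam = 1.0
    while f(x + lam * direction) > f(x) + beta * lam * grad(x) @ direction:
        lam *= tau  # Armijo backtracking until sufficient decrease
    return x + lam * direction

# toy usage: minimize ||x - c||^2 over the nonnegative orthant
c = np.array([1.0, -2.0, 3.0])
f = lambda x: float(np.sum((x - c) ** 2))
grad = lambda x: 2.0 * (x - c)
project = lambda x: np.maximum(x, 0.0)
x = np.zeros(3)
for _ in range(50):
    x = sgp_step(x, f, grad, project, d=np.ones(3))
```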
Accuracy of a continuous noninvasive hemoglobin monitor in intensive care unit patients.
Frasca, Denis; Dahyot-Fizelier, Claire; Catherine, Karen; Levrat, Quentin; Debaene, Bertrand; Mimoz, Olivier
2011-10-01
To determine whether noninvasive hemoglobin measurement by Pulse CO-Oximetry could provide clinically acceptable absolute and trend accuracy in critically ill patients, compared to other invasive methods of hemoglobin assessment available at bedside and the gold standard, the laboratory analyzer. Prospective study. Surgical intensive care unit of a university teaching hospital. Sixty-two patients continuously monitored with Pulse CO-Oximetry (Masimo Radical-7). None. Four hundred seventy-one blood samples were analyzed by a point-of-care device (HemoCue 301), a satellite lab CO-Oximeter (Siemens RapidPoint 405), and a laboratory hematology analyzer (Sysmex XT-2000i), which was considered the reference device. Hemoglobin values reported from the invasive methods were compared to the values reported by the Pulse CO-Oximeter at the time of blood draw. When the case-to-case variation was assessed, the bias and limits of agreement were 0.0±1.0 g/dL for the Pulse CO-Oximeter, 0.3±1.3 g/dL for the point-of-care device, and 0.9±0.6 g/dL for the satellite lab CO-Oximeter compared to the reference method. Pulse CO-Oximetry showed similar trend accuracy as satellite lab CO-Oximetry, whereas the point-of-care device did not appear to follow the trend of the laboratory analyzer as well as the other test devices. When compared to laboratory reference values, hemoglobin measurement with Pulse CO-Oximetry has absolute accuracy and trending accuracy similar to widely used, invasive methods of hemoglobin measurement at bedside. Hemoglobin measurement with pulse CO-Oximetry has the additional advantages of providing continuous measurements, noninvasively, which may facilitate hemoglobin monitoring in the intensive care unit.
Collocation and Galerkin Time-Stepping Methods
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2011-01-01
We study the numerical solutions of ordinary differential equations by one-step methods where the solution at tn is known and that at t(sub n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t(sub n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
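To make the stated equivalence concrete, here is a sketch of the 2-stage Radau IIA scheme, i.e., collocation at the right Radau points. Solving the stage equations by plain fixed-point iteration is a simplification adequate only for non-stiff test problems.

```python
import numpy as np

# Butcher tableau of the 2-stage, order-3 Radau IIA method
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])
c = np.array([1/3, 1.0])

def radau_iia_step(f, t, y, h, n_inner=50):
    """Advance y' = f(t, y) by one step of size h."""
    k = np.array([f(t, y), f(t, y)], dtype=float)
    for _ in range(n_inner):  # fixed-point iteration on the stages
        k = np.array([f(t + c[i] * h, y + h * (A[i] @ k))
                      for i in range(2)])
    return y + h * (b @ k)

# usage: y' = -y, exact solution exp(-t)
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = radau_iia_step(lambda t_, y_: -y_, t, y, h)
    t += h
```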
A new method to identify the foot of continental slope based on an integrated profile analysis
NASA Astrophysics Data System (ADS)
Wu, Ziyin; Li, Jiabiao; Li, Shoujun; Shang, Jihong; Jin, Xiaobin
2017-06-01
A new method is proposed to automatically identify the foot of the continental slope (FOS) based on the integrated analysis of topographic profiles. Based on the extremum points of the second derivative and the Douglas-Peucker algorithm, it simplifies the topographic profiles, then calculates the second derivative of the original profiles and the D-P profiles. Seven steps are proposed to simplify the original profiles. Meanwhile, multiple identification methods are proposed to determine the FOS points, including the gradient, water depth and second derivative values of data points, as well as the concavity and convexity, continuity and segmentation of the topographic profiles. This method can comprehensively and intelligently analyze the topographic profiles and their derived slopes, second derivatives and D-P profiles, based on which it is capable of analyzing the essential properties of every single data point in the profile. Furthermore, it is proposed to remove the concave points of the curve and to implement six FOS judgment criteria.
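Since the Douglas-Peucker algorithm carries much of the profile-simplification step, a compact recursive version is sketched below; the tolerance eps and the (distance, depth) layout of the profile points are illustrative assumptions.

```python
import numpy as np

def douglas_peucker(points, eps):
    """Simplify an (n, 2) polyline: keep the farthest point from the
    chord if its perpendicular distance exceeds eps, and recurse."""
    p0, p1 = points[0], points[-1]
    dx, dy = p1 - p0
    chord = np.hypot(dx, dy)
    if chord == 0.0:
        dists = np.hypot(points[:, 0] - p0[0], points[:, 1] - p0[1])
    else:  # perpendicular distance via the 2D cross product
        dists = np.abs(dx * (points[:, 1] - p0[1])
                       - dy * (points[:, 0] - p0[0])) / chord
    i = int(np.argmax(dists))
    if dists[i] > eps:
        left = douglas_peucker(points[:i + 1], eps)
        right = douglas_peucker(points[i:], eps)
        return np.vstack([left[:-1], right])
    return np.array([p0, p1])
```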
Numerical optimization using flow equations.
Punk, Matthias
2014-12-01
We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
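A generic skeleton of this idea, following fixed points of a flow while a homotopy parameter morphs an easy functional into the target one, is sketched below. The linear interpolation of gradients and the explicit gradient-flow relaxation are generic placeholders, not the paper's maximum-entropy construction.

```python
import numpy as np

def homotopy_flow(grad_easy, grad_target, x0,
                  n_homotopy=100, n_relax=200, dt=1e-2):
    """Track the fixed point of dx/dt = -grad f_s(x) as s goes
    from 0 (easy problem) to 1 (target problem)."""
    x = np.asarray(x0, dtype=float)
    for s in np.linspace(0.0, 1.0, n_homotopy):
        for _ in range(n_relax):  # relax to the current fixed point
            g = (1.0 - s) * grad_easy(x) + s * grad_target(x)
            x = x - dt * g
    return x
```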
A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer
NASA Astrophysics Data System (ADS)
Zheng, G. J.; Cheng, Y.; He, K. B.; Duan, F. K.; Ma, Y. L.
2014-07-01
The Sunset semi-continuous carbon analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, in this study we identified a new type of SCCA calculation discrepancy caused by the default multipoint baseline correction method. When a certain threshold carbon load is exceeded, multipoint correction can cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples, with two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments) (i.e., IMPshort and IMPlong) and one NIOSH (National Institute for Occupational Safety and Health)-like protocol (rtNIOSH). For ambient samples, the IMPshort, IMPlong and rtNIOSH protocols underestimated 22, 36 and 12% of TC, respectively, with the corresponding thresholds being ~ 0, 20 and 25 μgC. For sucrose, however, such discrepancy was observed only with the IMPshort protocol, indicating the need for a more refractory SCCA calibration substance. Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The correction method proposed is to use multipoint-corrected data below the determined threshold, and single-point results beyond that threshold. The effectiveness of this correction method was supported by correlation with optical data.
Auditory Assessment of Children from a Psychologist's Point of View.
ERIC Educational Resources Information Center
Mira, Mary P.
Behavioral studies of listening in children with both normal and exceptional hearing are presented. The conjugate method of assessing listening is discussed. This method provides a continuous record of ongoing behavior, allowing for observation of moment-to-moment changes in listening. It determines how sustained, how strong, and how continuous a child's…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odano, I.; Takahashi, N.; Ohkubo, M.
1994-05-01
We developed a new method for quantitative measurement of rCBF with Iodine-123-IMP based on the microsphere model, which was accurate, simpler and relatively non-invasive compared with the continuous withdrawal method. IMP is assumed to behave as a chemical microsphere in the brain. Regional CBF is then measured by continuous withdrawal of arterial blood and the microsphere model as follows: F = Cb(t) / (∫Ca(t)dt × N), where F is rCBF (ml/100 g/min), Cb(t) is the brain activity concentration, ∫Ca(t)dt is the total activity of arterial whole blood withdrawn, and N is the fraction of ∫Ca(t)dt that is true tracer activity. We analyzed 14 patients. A dose of 222 MBq of IMP was injected i.v. over 1 min, and withdrawal of arterial blood was performed from 0 to 5 min (∫Ca(t)dt), after which arterial blood samples (one-point Ca(t)) were obtained at 5, 6, 7, 8, 9 and 10 min, respectively. The value of ∫Ca(t)dt was then inferred mathematically from the value of one-point Ca(t). When we examined the correlation between ∫Ca(t)dt × N and one-point Ca(t), and the % error of one-point Ca(t) compared with ∫Ca(t)dt × N, the minimum % error was 8.1% and the maximum correlation coefficient was 0.943, both values being obtained at 6 min. We concluded that 6 min is the best time to take the arterial blood sample in the one-point sampling method for estimating ∫Ca(t)dt × N. IMP SPECT studies were performed with a ring-type SPECT scanner. Compared with rCBF measured by the Xe-133 method, a significant correlation was observed for this method (r=0.773). The one-point Ca(t) method is quick and easy for measurement of rCBF, without inserting catheters and without octanol treatment of arterial blood.
Choleau, C; Klein, J C; Reach, G; Aussedat, B; Demaria-Pesce, V; Wilson, G S; Gifford, R; Ward, W K
2002-08-01
Calibration, i.e. the transformation in real time of the signal I(t) generated by the glucose sensor at time t into an estimation of glucose concentration G(t), represents a key issue for the development of a continuous glucose monitoring system. To compare two calibration procedures. In the one-point calibration, which assumes that I(o) is negligible, S is simply determined as the ratio I/G, and G(t) = I(t)/S. The two-point calibration consists in the determination of a sensor sensitivity S and of a background current I(o) by plotting two values of the sensor signal versus the concomitant blood glucose concentrations. The subsequent estimation of G(t) is given by G(t) = (I(t)-I(o))/S. A glucose sensor was implanted in the abdominal subcutaneous tissue of nine type 1 diabetic patients during 3 (n = 2) and 7 days (n = 7). The one-point calibration was performed a posteriori either once per day before breakfast, or twice per day before breakfast and dinner, or three times per day before each meal. The two-point calibration was performed each morning during breakfast. The percentages of points present in zones A and B of the Clarke Error Grid were significantly higher when the system was calibrated using the one-point calibration. Use of two one-point calibrations per day before meals was virtually as accurate as three one-point calibrations. This study demonstrates the feasibility of a simple method for calibrating a continuous glucose monitoring system.
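In code, the two procedures compared here reduce to a few lines; the sketch below (with hypothetical variable names) shows how the sensitivity S and background current I0 enter each estimate.

```python
def one_point_calibration(I_cal, G_cal):
    """One-point calibration: assumes the background current I0 is
    negligible, so S = I/G and G(t) = I(t)/S."""
    S = I_cal / G_cal
    return lambda I_t: I_t / S

def two_point_calibration(I1, G1, I2, G2):
    """Two-point calibration: fits I = S*G + I0 through two
    (signal, glucose) pairs, then inverts it."""
    S = (I2 - I1) / (G2 - G1)
    I0 = I1 - S * G1
    return lambda I_t: (I_t - I0) / S
```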
Moja, Lorenzo; Kwag, Koren Hyogene
2015-01-01
The structure and aim of continuing medical education (CME) is shifting from the passive transmission of knowledge to a competency-based model focused on professional development. Self-directed learning is emerging as the foremost educational method for advancing competency-based CME. In a field marked by the constant expansion of knowledge, self-directed learning allows physicians to tailor their learning strategy to meet the information needs of practice. Point of care information services are innovative tools that provide health professionals with digested evidence at the front line to guide decision making. By mobilising self-directed learning to meet the information needs of clinicians at the bedside, point of care information services represent a promising platform for competency-based CME. Several points, however, must be considered to enhance the accessibility and development of these tools to improve competency-based CME and the quality of care. PMID:25655251
de Senneville, Baudouin Denis; Mougenot, Charles; Moonen, Chrit T W
2007-02-01
Focused ultrasound (US) is a unique and noninvasive technique for local deposition of thermal energy deep inside the body. MRI guidance offers the additional benefits of excellent target visualization and continuous temperature mapping. However, treating a moving target poses severe problems because 1) motion-related thermometry artifacts must be corrected, 2) the US focal point must be relocated according to the target displacement. In this paper a complete MRI-compatible, high-intensity focused US (HIFU) system is described together with adaptive methods that allow continuous MR thermometry and therapeutic US with real-time tracking of a moving target, online motion correction of the thermometry maps, and regional temperature control based on the proportional, integral, and derivative method. The hardware is based on a 256-element phased-array transducer with rapid electronic displacement of the focal point. The exact location of the target during US firing is anticipated using automatic analysis of periodic motions. The methods were tested with moving phantoms undergoing either rigid body or elastic periodical motions. The results show accurate tracking of the focal point. Focal and regional temperature control is demonstrated with a performance similar to that obtained with stationary phantoms. Copyright (c) 2007 Wiley-Liss, Inc.
The Current Status of Peer Assessment Techniques and Sociometric Methods
ERIC Educational Resources Information Center
Bukowski, William M.; Castellanos, Melisa; Persram, Ryan J.
2017-01-01
Current issues in the use of peer assessment techniques and sociometric methods are discussed. Attention is paid to the contributions of the four articles in this volume. Together these contributions point to the continual level of change and progress in these techniques. They also show that the paradigm underlying these methods has been unchanged…
Durable Hybrid Coatings. Annual Performance Report (2008)
2008-09-01
...ensuring stabilization of the reading before moving to the next point. Two different thermogravimetric analysis (TGA) methods were... aluminum alloy (Al 2024). Mg-rich primers based on a hybrid organic-inorganic binder derived from silica nanoparticles and phenethyltrimethoxysilane gave excellent corrosion protection of Al 2024-T3. Work has continued on these coatings with particular emphasis on the silica nanoparticle...
On determining the most appropriate test cut-off value: the case of tests with continuous results
Habibzadeh, Parham; Yadollahie, Mahboobeh
2016-01-01
There are several criteria for determination of the most appropriate cut-off value in a diagnostic test with continuous results. Most methods to determine the test cut-off value are based on receiver operating characteristic (ROC) analysis. The most common criteria are the point on the ROC curve where the sensitivity and specificity of the test are equal; the point on the curve with minimum distance from the upper-left corner of the unit square; and the point where the Youden's index is maximum. There are also methods mainly based on Bayesian decision analysis. Herein, we show that a proposed method that maximizes the weighted number needed to misdiagnose, an index of diagnostic test effectiveness we previously proposed, is the most appropriate technique compared to the aforementioned ones. For determination of the cut-off value, we need to know the pretest probability of the disease of interest as well as the costs incurred by misdiagnosis. This means that even for a given diagnostic test, the cut-off value is not universal and should be determined for each region and for each disease condition. PMID:27812299
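For concreteness, three of the classical ROC criteria mentioned above can be computed directly from a table of sensitivities and specificities, as in the sketch below; the weighted number-needed-to-misdiagnose criterion is omitted because it additionally needs the pretest probability and misdiagnosis costs.

```python
import numpy as np

def classical_cutoffs(thresholds, sens, spec):
    """Return the cut-offs chosen by (i) maximum Youden index,
    (ii) minimum distance to the upper-left ROC corner, and
    (iii) the sensitivity = specificity point."""
    youden = thresholds[np.argmax(sens + spec - 1.0)]
    corner = thresholds[np.argmin((1.0 - sens) ** 2 + (1.0 - spec) ** 2)]
    balanced = thresholds[np.argmin(np.abs(sens - spec))]
    return youden, corner, balanced
```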
NASA Astrophysics Data System (ADS)
Cao, Shu-Lei; Duan, Xiao-Wei; Meng, Xiao-Lei; Zhang, Tong-Jie
2018-04-01
Aiming at exploring the nature of dark energy (DE), we use forty-three observational Hubble parameter data (OHD) in the redshift range 0 < z ≤ 2.36 to make a cosmological model-independent test of the ΛCDM model with the two-point Omh²(z₂; z₁) diagnostic. In the ΛCDM model, with equation of state (EoS) w = −1, the two-point diagnostic relation Omh² ≡ Ωm h² holds, where Ωm is the present matter density parameter and h is the Hubble parameter divided by 100 km s⁻¹ Mpc⁻¹. We utilize two methods, the weighted mean and median statistics, to bin the OHD to increase the signal-to-noise ratio of the measurements. The binning methods turn out to be promising and are considered to be robust. By applying the two-point diagnostic to the binned data, we find that although the best-fit values of Omh² fluctuate as the continuous redshift intervals change, on average they are consistent with being constant within the 1σ confidence interval. Therefore, we conclude that the ΛCDM model cannot be ruled out.
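Given a pair of Hubble-parameter measurements, the two-point diagnostic is a one-liner; the sketch below uses its standard form, which in flat ΛCDM is constant and equal to Ωm h² for every redshift pair.

```python
def omh2(z1, h1, z2, h2):
    """Two-point diagnostic Omh^2(z2; z1) =
    (h(z2)^2 - h(z1)^2) / ((1+z2)^3 - (1+z1)^3),
    with h(z) = H(z) / (100 km/s/Mpc)."""
    return (h2**2 - h1**2) / ((1.0 + z2)**3 - (1.0 + z1)**3)
```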
Dew Point Calibration System Using a Quartz Crystal Sensor with a Differential Frequency Method.
Lin, Ningning; Meng, Xiaofeng; Nie, Jing
2016-11-18
In this paper, the influence of temperature on quartz crystal microbalance (QCM) sensor response during dew point calibration is investigated. The aim is to present a compensation method to eliminate the impact of temperature on frequency acquisition. A new sensitive structure with double QCMs is proposed. One is kept in contact with the environment, whereas the other is not exposed to the atmosphere. There is a thermally conductive silicone pad between each crystal and a refrigeration device to maintain a uniform temperature condition. A differential frequency method is described in detail and is applied to calibrate the frequency characteristics of the QCM at the dew point of -3.75 °C. It is worth noting that the frequency changes of the two QCMs were approximately opposite when the temperature conditions were changed simultaneously. The results from continuous experiments show that the frequencies of the two QCMs at the moment the dew point is reached have strong consistency and high repeatability, leading to the conclusion that the sensitive structure can calibrate dew points with high reliability.
Alternative Attitude Commanding and Control for Precise Spacecraft Landing
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2004-01-01
A report proposes an alternative method of control for precision landing on a remote planet. In the traditional method, the attitude of a spacecraft is required to track a commanded translational acceleration vector, which is generated at each time step by solving a two-point boundary value problem. No requirement of continuity is imposed on the acceleration. The translational acceleration does not necessarily vary smoothly. Tracking of a non-smooth acceleration causes the vehicle attitude to exhibit undesirable transients and poor pointing stability behavior. In the alternative method, the two-point boundary value problem is not solved at each time step. A smooth reference position profile is computed. The profile is recomputed only when the control errors get sufficiently large. The nominal attitude is still required to track the smooth reference acceleration command. A steering logic is proposed that controls the position and velocity errors about the reference profile by perturbing the attitude slightly about the nominal attitude. The overall pointing behavior is therefore smooth, greatly reducing the degree of pointing instability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Glenn Edward; Song, Xuehang; Ye, Ming
A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of the level set is introduced to build a shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in the hydraulic head field with better accuracy compared to data assimilation with no constraints on the spatial continuity of facies.
A proof of the Woodward-Lawson sampling method for a finite linear array
NASA Technical Reports Server (NTRS)
Somers, Gary A.
1993-01-01
An extension of the continuous aperture Woodward-Lawson sampling theorem has been developed for a finite linear array of equidistant identical elements with arbitrary excitations. It is shown that by sampling the array factor at a finite number of specified points in the far field, the exact array factor over all space can be efficiently reconstructed in closed form. The specified sample points lie in real space and hence are measurable provided that the interelement spacing is greater than approximately one half of a wavelength. This paper provides insight as to why the length parameter used in the sampling formulas for discrete arrays is larger than the physical span of the lattice points in contrast with the continuous aperture case where the length parameter is precisely the physical aperture length.
Huard, Edouard; Derelle, Sophie; Jaeck, Julien; Nghiem, Jean; Haïdar, Riad; Primot, Jérôme
2018-03-05
A challenging point in the prediction of the image quality of infrared imaging systems is the evaluation of the detector modulation transfer function (MTF). In this paper, we present a linear method to get a 2D continuous MTF from sparse spectral data. Within the method, an object with a predictable sparse spatial spectrum is imaged by the focal plane array. The sparse data is then treated to return the 2D continuous MTF with the hypothesis that all the pixels have an identical spatial response. The linearity of the treatment is a key point to estimate directly the error bars of the resulting detector MTF. The test bench will be presented along with measurement tests on a 25 μm pitch InGaAs detector.
Coherent-Anomaly Method in Critical Phenomena. III.
NASA Astrophysics Data System (ADS)
Hu, Xiao; Katori, Makoto; Suzuki, Masuo
Two kinds of systematic mean-field transfer-matrix methods are formulated in the 2-dimensional Ising spin system, by introducing Weiss-like and Bethe-like approximations. All the critical exponents as well as the true critical point can be estimated in these methods following the CAM procedure. The numerical results of the above system are Tc* = 2.271 (J/kB), γ=γ' ≃ 1.749, β≃0.131 and δ ≃ 15.1. The specific heat is confirmed to be continuous and to have a logarithmic divergence at the true critical point, i.e., α=α'=0. Thus, the finite-degree-of-approximation scaling ansatz is shown to be correct and very powerful in practical estimations of the critical exponents as well as the true critical point.
New Architectures for Presenting Search Results Based on Web Search Engines Users Experience
ERIC Educational Resources Information Center
Martinez, F. J.; Pastor, J. A.; Rodriguez, J. V.; Lopez, Rosana; Rodriguez, J. V., Jr.
2011-01-01
Introduction: The Internet is a dynamic environment which is continuously being updated. Search engines have been, currently are and in all probability will continue to be the most popular systems in this information cosmos. Method: In this work, special attention has been paid to the series of changes made to search engines up to this point,…
Continuity and Discontinuity: The Case of Second Couplehood in Old Age
ERIC Educational Resources Information Center
Koren, Chaya
2011-01-01
Purpose: Continuity and discontinuity are controversial concepts in social theories on aging. The aim of this article is to explore these concepts using the experiences of older persons living in second couplehood in old age as a case in point. Design and Method: Based on a larger qualitative study on second couplehood in old age, following the…
Results of continuous synchronous orbit testing of sealed nickel-cadmium cells
NASA Technical Reports Server (NTRS)
Harkness, J. D.
1981-01-01
Test results from continuous synchronous orbit testing of sealed nickel cadmium cells are presented. The synchronous orbit regime simulates a space satellite maintaining a position over a fixed point on earth as the earth rotates on its axis and revolves about the sun. Characteristics of each lot of cells, test conditions, and charge control methods are described.
Simultaneous Detection and Tracking of Pedestrian from Panoramic Laser Scanning Data
NASA Astrophysics Data System (ADS)
Xiao, Wen; Vallet, Bruno; Schindler, Konrad; Paparoditis, Nicolas
2016-06-01
Pedestrian traffic flow estimation is essential for public place design and construction planning. Traditional data collection by human investigation is tedious, inefficient and expensive. Panoramic laser scanners, e.g. the Velodyne HDL-64E, which scan their surroundings repetitively at a high frequency, have been increasingly used for 3D object tracking. In this paper, a simultaneous detection and tracking (SDAT) method is proposed for precise and automatic pedestrian trajectory recovery. First, the dynamic environment is detected using two different methods, nearest-point and max-distance. Then, all the points on moving objects are transferred into a space-time (x, y, t) coordinate system. The pedestrian detection and tracking then amounts to assigning the points belonging to pedestrians to continuous trajectories in space-time. We formulate the point assignment task as an energy function which incorporates the point evidence, trajectory number, pedestrian shape and motion. A low-energy trajectory explains the point observations well and has a plausible trend and length. The method inherently filters out points from other moving objects and false detections. The energy function is solved by a two-step optimization process: tracklet detection in a short temporal window, and global tracklet association through the whole time span. Results demonstrate that the proposed method can automatically recover pedestrian trajectories with accurate positions and few false detections and mismatches.
NASA Astrophysics Data System (ADS)
Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka
Instability of the calculation process and growth of calculation time caused by the increasing size of continuous optimization problems remain the major issues to be solved in applying the technique to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables getting close to their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe some numerical results on commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (Economic Load Dispatch in electric power supply scheduling) are also described as a practical industrial application.
The effect of the synthetic pyrethroid flucythrinate on three non-target invertebrates was evaluated using continual and short-time exposure methods. Both methods show toxic action at measured concentrations of 0.100 micrograms/litre. The use of both approaches pointed toward the im...
Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations
NASA Technical Reports Server (NTRS)
Zhao, Yiyuan; Chen, Robert T. N.
1996-01-01
This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with consideration of realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field length concept used for STOL operations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
2016-06-15
Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurements for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method for the acquired point clouds, whose acquisition is subject to noise and missing measurements. In contrast to existing surface reconstruction methods, which are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or the presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error introduced by ICP, with a Laplacian prior. We evaluated our method on both clinically acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time and robust surface reconstruction method for point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
NASA Astrophysics Data System (ADS)
Liu, Xiaodong
2017-08-01
A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers, while for sampling points outside the scatterers we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple multiscale case, even when the different components are close to each other.
NASA Astrophysics Data System (ADS)
Lin, S. T.; Liou, T. S.
2017-12-01
Numerical simulation of groundwater flow in anisotropic aquifers usually suffers from a lack of accuracy in calculating the groundwater flux across grid blocks. Conventional two-point flux approximation (TPFA) can only obtain the flux normal to the grid interface and completely neglects the component parallel to it. Furthermore, the hydraulic gradient in a grid block estimated from TPFA can only poorly represent the hydraulic condition near the intersection of grid blocks. These disadvantages are further exacerbated when the principal axes of hydraulic conductivity, the global coordinate system, and the grid boundary are not parallel to one another. In order to refine the estimation of the in-grid hydraulic gradient, several multiple-point flux approximation (MPFA) methods have been developed for two-dimensional groundwater flow simulations. For example, the MPFA-O method uses the hydraulic head at the junction node as an auxiliary variable which is then eliminated using the head and flux continuity conditions. In this study, a three-dimensional MPFA method is developed for numerical simulation of groundwater flow in three-dimensional and strongly anisotropic aquifers. This new MPFA method first discretizes the simulation domain into hexahedrons. Each hexahedron is further decomposed into a certain number of tetrahedrons. The 2D MPFA-O method is then extended to these tetrahedrons, using the unknown head at the intersection of hexahedrons as an auxiliary variable along with the head and flux continuity conditions to solve for the head at the center of each hexahedron. Numerical simulations using this new MPFA method have been successfully compared with those obtained from a modified version of TOUGH2.
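For contrast with the MPFA construction, the conventional TPFA flux across one interface is sketched below: only the two adjacent cell values enter, which is precisely the limitation discussed above. Isotropic cells of equal width and harmonic averaging of conductivities are simplifying assumptions.

```python
def tpfa_flux(k1, k2, h1, h2, dx, area=1.0):
    """Flux from cell 1 to cell 2 through their shared face:
    transmissibility (harmonic-average conductivity) times the
    head difference; any flux parallel to the face is ignored."""
    k_harm = 2.0 * k1 * k2 / (k1 + k2)
    return area * k_harm * (h1 - h2) / dx
```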
Bifurcation Analysis Using Rigorous Branch and Bound Methods
NASA Technical Reports Server (NTRS)
Smith, Andrew P.; Crespo, Luis G.; Munoz, Cesar A.; Lowenberg, Mark H.
2014-01-01
For the study of nonlinear dynamic systems, it is important to locate the equilibria and bifurcations occurring within a specified computational domain. This paper proposes a new approach for solving these problems and compares it to the numerical continuation method. The new approach is based upon branch and bound and utilizes rigorous enclosure techniques to yield outer bounding sets of both the equilibrium and local bifurcation manifolds. These sets, which comprise the union of hyper-rectangles, can be made to be as tight as desired. Sufficient conditions for the existence of equilibrium and bifurcation points taking the form of algebraic inequality constraints in the state-parameter space are used to calculate their enclosures directly. The enclosures for the bifurcation sets can be computed independently of the equilibrium manifold, and are guaranteed to contain all solutions within the computational domain. A further advantage of this method is the ability to compute a near-maximally sized hyper-rectangle of high dimension centered at a fixed parameter-state point whose elements are guaranteed to exclude all bifurcation points. This hyper-rectangle, which requires a global description of the bifurcation manifold within the computational domain, cannot be obtained otherwise. A test case, based on the dynamics of a UAV subject to uncertain center of gravity location, is used to illustrate the efficacy of the method by comparing it with numerical continuation and to evaluate its computational complexity.
Effect of Receiver Choosing on Point Positions Determination in Network RTK
NASA Astrophysics Data System (ADS)
Bulbul, Sercan; Inal, Cevat
2016-04-01
Nowadays, developments in GNSS techniques allow point positions to be determined in real time. Initially, point positioning was determined by RTK (Real Time Kinematic) based on a single reference station, but to avoid systematic errors in this method, the distance between the reference point and the rover receiver must be shorter than 10 km. To overcome this restriction of the RTK method, the idea of using more than one reference point was suggested, and CORS (Continuously Operating Reference Stations) networks were put into practice. Today, countries such as the USA, Germany and Japan have established CORS networks. The CORS-TR network, which has 146 reference points, was established in Turkey in 2009. In the CORS-TR network, the active CORS approach was adopted: the CORS-TR reference stations covering the whole country are interconnected, and the positions of these stations and atmospheric corrections are calculated continuously. In this study, at a selected point, RTK measurements based on CORS-TR were made with different receivers (JAVAD TRIUMPH-1, TOPCON Hiper V, MAGELLAN PRoMark 500, PENTAX SMT888-3G, SATLAB SL-600) and with different correction techniques (VRS, FKP, MAC). In the measurements, the epoch interval was taken as 5 seconds and the measurement time as 1 hour. For each receiver and each correction technique, the means and the differences between the maximum and minimum values of the measured coordinates, the root mean square errors along the coordinate axes, and the 2D and 3D positioning precisions were calculated; the results were evaluated by statistical methods and the resulting graphics were interpreted. After evaluation of the measurements and calculations, for each receiver and each correction technique, the coordinate differences between maximum and minimum values were less than 8 cm, the root mean square errors along the coordinate axes less than ±1.5 cm, and the 2D and 3D point positioning precisions less than ±1.5 cm. At the measurement point, it was concluded that the VRS correction technique generally performs better than the other correction techniques.
Dynamical analysis of continuous higher-order hopfield networks for combinatorial optimization.
Atencia, Miguel; Joya, Gonzalo; Sandoval, Francisco
2005-08-01
In this letter, the ability of higher-order Hopfield networks to solve combinatorial optimization problems is assessed by means of a rigorous analysis of their properties. The stability of the continuous network is almost completely clarified: (1) hyperbolic interior equilibria, which are unfeasible, are unstable; (2) the state cannot escape from the unitary hypercube; and (3) a Lyapunov function exists. Numerical methods used to implement the continuous equation on a computer should be designed with the aim of preserving these favorable properties. The case of nonhyperbolic fixed points, which occur when the Hessian of the target function is the null matrix, requires further study. We prove that these nonhyperbolic interior fixed points are unstable in networks with three neurons and order two. The conjecture that interior equilibria are unstable in the general case is left open.
NASA Astrophysics Data System (ADS)
Wang, Aiming; Cheng, Xiaohan; Meng, Guoying; Xia, Yun; Wo, Lei; Wang, Ziyi
2017-03-01
Identification of rotor unbalance is critical for the normal operation of rotating machinery. The single-disc, single-span rotor, as the most fundamental rotor-bearing system, has long attracted research attention. In this paper, the continuous single-disc, single-span rotor is modeled as a homogeneous and elastic Euler-Bernoulli beam, and the forces applied by the bearings and the disc on the shaft are treated as point forces. A fourth-order non-homogeneous partial differential equation set with homogeneous boundary conditions is solved analytically; the solution expresses the unbalance response as a function of position, rotor unbalance, and the stiffness and damping coefficients of the bearings. Based on this analytical method, a novel Measurement Point Vector Method (MPVM) is proposed to identify rotor unbalance during operation. Only the unbalance response measured at four selected cross-sections of the rotor shaft under steady-state operating conditions is needed when using the method. Numerical simulation shows that the detection error of the proposed method is very small when measurement error is negligible. The proposed method provides an efficient way to balance rotors without test runs and external excitations.
Research study on stabilization and control: Modern sampled data control theory
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.; Yackel, R. A.
1973-01-01
A numerical analysis of spacecraft stability parameters was conducted. The analysis is based on a digital approximation by point-by-point state comparison. The technique used is that of approximating a continuous-data system by a sampled-data model through comparison of the states of the two systems. Application of the method to the digital redesign of the simplified one-axis dynamics of the Skylab is presented.
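A standard way to build such a sampled-data model from the continuous dynamics is zero-order-hold discretization via the matrix exponential, sketched below; this is a generic construction and not necessarily the exact redesign procedure of the report.

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, T):
    """Discretize dx/dt = A x + B u with sampling period T and a
    zero-order hold on u, using the block-matrix exponential trick:
    expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * T)
    return Md[:n, :n], Md[:n, n:]  # Ad, Bd
```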
Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl
2014-01-01
Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated both in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using the 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) over the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from the calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach appeared to have higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
NASA Astrophysics Data System (ADS)
Atik, L.; Petit, P.; Sawicki, J. P.; Ternifi, Z. T.; Bachir, G.; Della, M.; Aillerie, M.
2017-02-01
Solar panels have a nonlinear voltage-current characteristic, with a distinct maximum power point (MPP) that depends on environmental factors such as temperature and irradiation. In order to continuously harvest maximum power from the solar panels, they have to operate at their MPP despite the inevitable changes in the environment. Various methods for maximum power point tracking (MPPT) have therefore been developed and implemented in solar power electronic controllers to increase the efficiency of the electricity production from renewables. In this paper we compare, using the Matlab/Simulink tools, two different MPP tracking methods, fuzzy logic control (FL) and sliding mode control (SMC), considering their efficiency in solar energy production.
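Neither of the two controllers compared in the paper is reproduced here, but the MPPT idea itself can be illustrated with the simplest hill-climbing scheme, perturb-and-observe: nudge the operating voltage and keep moving in whichever direction increases power. The measure_pv and set_voltage interfaces are hypothetical.

```python
def perturb_and_observe(measure_pv, set_voltage, v0, dv=0.1, n_steps=1000):
    """Minimal perturb-and-observe MPPT loop (a baseline method,
    not the fuzzy-logic or sliding-mode controllers of the paper)."""
    v, p_prev, direction = v0, 0.0, 1.0
    for _ in range(n_steps):
        volt, curr = measure_pv()      # hypothetical sensor readout
        p = volt * curr
        if p < p_prev:                 # power fell: reverse direction
            direction = -direction
        v += direction * dv
        set_voltage(v)                 # hypothetical actuator command
        p_prev = p
    return v
```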
Electrochemical method of producing eutectic uranium alloy and apparatus
Horton, James A.; Hayden, H. Wayne
1995-01-01
An apparatus and method for continuous production of liquid uranium alloys through the electrolytic reduction of uranium chlorides. The apparatus includes an electrochemical cell formed from an anode shaped to form an electrolyte reservoir, a cathode comprising a metal, such as iron, capable of forming a eutectic uranium alloy having a melting point less than the melting point of pure uranium, and molten electrolyte in the reservoir comprising a chlorine- or fluorine-containing salt and uranium chloride. The method of the invention produces a eutectic uranium alloy by creating an electrolyte reservoir defined by a container comprising an anode, placing an electrolyte in the reservoir, the electrolyte comprising a chlorine- or fluorine-containing salt and uranium chloride in molten form, positioning a cathode in the reservoir where the cathode comprises a metal capable of forming a uranium alloy having a melting point less than the melting point of pure uranium, and applying a current between the cathode and the anode.
Approach to the origin of turbulence on the basis of two-point kinetic theory
NASA Technical Reports Server (NTRS)
Tsuge, S.
1974-01-01
Equations for the fluctuation correlation in an incompressible shear flow are derived on the basis of kinetic theory, utilizing the two-point distribution function which obeys the BBGKY hierarchy equation truncated with the hypothesis of 'ternary' molecular chaos. The step from the molecular to the hydrodynamic description is accomplished by a moment expansion which is a two-point version of the thirteen-moment method, and which leads to a series of correlation equations, viz., the two-point counterparts of the continuity equation, the Navier-Stokes equation, etc. For almost parallel shearing flows the two-point equation is separable and reduces to two Orr-Sommerfeld equations with different physical implications.
Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation
NASA Astrophysics Data System (ADS)
Jamelot, Erell; Ciarlet, Patrick
2013-05-01
Studying numerically the steady state of a nuclear core reactor is expensive in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous to the discrete point of view, and we give some numerical results for a realistic, highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.
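The Robin transmission mechanism can be illustrated on a toy 1D Poisson problem, where each Schwarz sweep solves two subdomain problems with Robin data taken from the neighbor's previous iterate. This finite-difference sketch shows the interface conditions only; the paper's mixed Raviart-Thomas-Nédélec discretization and parallel 3D setting are not reproduced.

```python
import numpy as np

def subdomain_solve(f_vals, h, p, g, side):
    """Solve -u'' = f on one subdomain: homogeneous Dirichlet at the
    outer end, Robin condition du/dn + p*u = g at the interface
    (first-order one-sided differences)."""
    n = len(f_vals) - 1
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(1, n):
        A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
        A[i, i] = 2.0 / h**2
        rhs[i] = f_vals[i]
    if side == "left":                  # interface at the right end
        A[0, 0] = 1.0                   # u(0) = 0
        A[n, n], A[n, n - 1] = 1.0 / h + p, -1.0 / h
        rhs[n] = g
    else:                               # interface at the left end
        A[n, n] = 1.0                   # u(1) = 0
        A[0, 0], A[0, 1] = 1.0 / h + p, -1.0 / h
        rhs[0] = g
    return np.linalg.solve(A, rhs)

def schwarz_robin(f, n=50, p=1.0, sweeps=100):
    """Non-overlapping Robin-Robin Schwarz iteration for -u'' = f on
    (0, 1) with u(0) = u(1) = 0 and the interface at x = 0.5."""
    h = 0.5 / n
    x1, x2 = np.linspace(0.0, 0.5, n + 1), np.linspace(0.5, 1.0, n + 1)
    u1, u2 = np.zeros(n + 1), np.zeros(n + 1)
    for _ in range(sweeps):
        g1 = (u2[1] - u2[0]) / h + p * u2[0]       # u2' + p*u2 at 0.5
        g2 = -(u1[-1] - u1[-2]) / h + p * u1[-1]   # -u1' + p*u1 at 0.5
        u1 = subdomain_solve(f(x1), h, p, g1, "left")
        u2 = subdomain_solve(f(x2), h, p, g2, "right")
    return np.concatenate([x1, x2[1:]]), np.concatenate([u1, u2[1:]])
```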
Numerical simulation using vorticity-vector potential formulation
NASA Technical Reports Server (NTRS)
Tokunaga, Hiroshi
1993-01-01
An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. Whether solving turbulent shear flows directly or using a subgrid scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions. From this point of view, the pseudo-spectral method has so far been used as the computational method. However, finite difference and finite element methods are widely applied for computing flows of practical importance, since these methods are easily applied to flows with complex geometric configurations. Several problems arise, though, in applying the finite difference method to direct and large eddy simulations. Accuracy is one of the most important; this point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. In order to obtain high efficiency, a multigrid Poisson solver is combined with the higher-order accurate finite difference method. The formulation is also one of the most important problems in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have so far been solved in the primitive variables formulation. One of the major difficulties of this approach is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. In the vorticity-vector potential formulation, however, the velocity field satisfies the equation of continuity automatically. From this point of view, the vorticity-vector potential method was extended to the generalized coordinate system. In the present article, we adopt the vorticity-vector potential formulation, the generalized coordinate system, and a 4th-order accurate difference method as the computational method. We present the computational method and apply it to computations of flows in a square cavity at large Reynolds number in order to investigate its effectiveness.
Eigensensitivity analysis of rotating clamped uniform beams with the asymptotic numerical method
NASA Astrophysics Data System (ADS)
Bekhoucha, F.; Rechak, S.; Cadou, J. M.
2016-12-01
In this paper, free vibrations of a rotating clamped Euler-Bernoulli beam with uniform cross section are studied using a continuation method, namely the asymptotic numerical method. The governing equations of motion are derived using Lagrange's method. The kinetic and strain energy expressions are derived from the Rayleigh-Ritz method using a set of hybrid variables and based on a linear deflection assumption. The derived equations are transformed into two eigenvalue problems: the first is a linear gyroscopic eigenvalue problem and represents the coupled lagging and stretch motions through gyroscopic terms, while the second is a standard eigenvalue problem and corresponds to the flapping motion. These two eigenvalue problems are transformed into two functionals treated by the continuation method, the asymptotic numerical method. A new method is proposed for the solution of the linear gyroscopic system, based on an augmented system that transforms the original problem to a standard form with real symmetric matrices. By using some techniques to resolve these singular problems with the continuation method, evolution curves of the natural frequencies against dimensionless angular velocity are determined. At high angular velocity, some singular points, due to the linear elastic assumption, are computed. Numerical tests of convergence are conducted and the obtained results are compared to the exact values. Results obtained by continuation are compared to those computed with the discrete eigenvalue problem.
NASA Astrophysics Data System (ADS)
Haritan, Idan; Moiseyev, Nimrod
2017-07-01
Resonances play a major role in a large variety of fields in physics and chemistry. Accordingly, there is a growing interest in methods designed to calculate them. Recently, Landau et al. proposed a new approach to analytically dilate a single eigenvalue from the stabilization graph into the complex plane. This approach, termed Resonances Via Padé (RVP), utilizes the Padé approximant and is based on a unique analysis of the stabilization graph. Yet, analytic continuation of eigenvalues from the stabilization graph into the complex plane is not a new idea. In 1975, Jordan suggested an analytic continuation method based on the branch point structure of the stabilization graph. The method was later modified by McCurdy and McNutt, and it is still being used today. We refer to this method as the Truncated Characteristic Polynomial (TCP) method. In this manuscript, we perform an in-depth comparison between the RVP and the TCP methods. We demonstrate that while both methods are important and complementary, the advantage of one method over the other is problem-dependent. Illustrative examples are provided in the manuscript.
Continuation of probability density functions using a generalized Lyapunov approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, S., E-mail: s.baars@rug.nl; Viebahn, J.P., E-mail: viebahn@cwi.nl; Mulder, T.E., E-mail: t.e.mulder@uu.nl
Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. The key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e., the occurrence of multiple steady states of the Atlantic Ocean circulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Charlene; Wiseman, Howard; Jacobs, Kurt
2004-08-01
It was shown by Ahn, Wiseman, and Milburn [Phys. Rev. A 67, 052310 (2003)] that feedback control could be used as a quantum error correction process for errors induced by weak continuous measurement, given one perfectly measured error channel per qubit. Here we point out that this method can be easily extended to an arbitrary number of error channels per qubit. We show that the feedback protocols generated by our method encode n-2 logical qubits in n physical qubits, thus requiring just one more physical qubit than in the previous case.
Material point method of modelling and simulation of reacting flow of oxygen
NASA Astrophysics Data System (ADS)
Mason, Matthew; Chen, Kuan; Hu, Patrick G.
2014-07-01
Aerospace vehicles are continually being designed to sustain flight at higher speeds and higher altitudes than previously attainable. At hypersonic speeds, gases within a flow begin to chemically react and the fluid's physical properties are modified. It is desirable to model these effects within the Material Point Method (MPM). The MPM is a combined Eulerian-Lagrangian particle-based solver that calculates the physical properties of individual particles and uses a background grid for information storage and exchange. This study introduces chemically reacting flow modelling within the MPM numerical algorithm and illustrates a simple application using the AeroElastic Material Point Method (AEMPM) code. The governing equations of reacting flows are introduced and their direct application within an MPM code is discussed. A flow of 100% oxygen is illustrated and the results are compared with independently developed computational non-equilibrium algorithms. Observed trends agree well with results from an independently developed source.
Semiautomated skeletonization of the pulmonary arterial tree in micro-CT images
NASA Astrophysics Data System (ADS)
Hanger, Christopher C.; Haworth, Steven T.; Molthen, Robert C.; Dawson, Christopher A.
2001-05-01
We present a simple and robust approach that utilizes planar images at different angular rotations combined with unfiltered back-projection to locate the central axes of the pulmonary arterial tree. Three-dimensional points are selected interactively by the user. The computer calculates a sub-volume unfiltered back-projection orthogonal to the vector connecting the two points and centered on the first point. Because more x-rays are absorbed at the thickest portion of the vessel, the darkest pixel in the unfiltered back-projection is assumed to be the center of the vessel. The computer replaces this point with the newly computer-calculated point. A second back-projection is calculated around the original point orthogonal to a vector connecting the newly-calculated first point and the user-determined second point. The darkest pixel within the reconstruction is determined. The computer then replaces the second point with the XYZ coordinates of the darkest pixel within this second reconstruction. Following a vector based on a moving average of previously determined 3-dimensional points along the vessel's axis, the computer continues this skeletonization process until stopped by the user. The computer estimates the vessel diameter along the set of previously determined points using a method similar to the full width at half maximum algorithm. On all subsequent vessels, the process works the same way, except that at each point the distances between the current point and all previously determined points along different vessels are computed. If a distance is less than the previously estimated diameter, the vessels are assumed to branch. This user/computer interaction continues until the vascular tree has been skeletonized.
Bhaya, Amit; Kaszkurewicz, Eugenius
2004-01-01
It is pointed out that the so-called momentum method, much used in the neural network literature as an acceleration of the backpropagation method, is a stationary version of the conjugate gradient method. Connections with the continuous optimization method known as heavy ball with friction are also made. In both cases, adaptive (dynamic) choices of the so-called learning rate and momentum parameters are obtained using a control Liapunov function analysis of the system.
NASA Astrophysics Data System (ADS)
Renson, Ludovic; Barton, David A. W.; Neild, Simon A.
Control-based continuation (CBC) is a means of applying numerical continuation directly to a physical experiment for bifurcation analysis without the use of a mathematical model. CBC enables the detection and tracking of bifurcations directly, without the need for a post-processing stage as is often the case for more traditional experimental approaches. In this paper, we use CBC to directly locate limit-point bifurcations of a periodically forced oscillator and track them as forcing parameters are varied. Backbone curves, which capture the overall frequency-amplitude dependence of the system’s forced response, are also traced out directly. The proposed method is demonstrated on a single-degree-of-freedom mechanical system with a nonlinear stiffness characteristic. Results are presented for two configurations of the nonlinearity — one where it exhibits a hardening stiffness characteristic and one where it exhibits softening-hardening.
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n, which is infeasible for large n. In practice, kriging is solved approximately by local approaches that consider only a relatively small number of points lying close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky; it is usually handled by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. The problems arise from the fact that the covariance functions used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach; in particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix. Thus, Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the tapered system converges to the solution of the original system. Matrix A in the ordinary kriging system, while symmetric, is not positive definite. Thus, their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system, and then use SYMMLQ to solve the ordinary kriging system.
We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches, and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
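To make the tapering-plus-iterative-solver idea concrete, here is a minimal sketch. SciPy ships MINRES (a close relative of SYMMLQ, also due to Paige and Saunders, for symmetric indefinite systems) rather than SYMMLQ itself, so MINRES stands in; the exponential covariance, the Wendland taper, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of tapered ordinary kriging solved iteratively. SciPy has no
# SYMMLQ, so MINRES (a close relative for symmetric indefinite systems) stands
# in; covariance model, taper, and all names are illustrative assumptions.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import minres

def exp_cov(h, sill=1.0, rng=0.5):
    return sill * np.exp(-h / rng)

def wendland_taper(h, theta):
    # Compactly supported Wendland function; the Schur product of a covariance
    # with it stays positive definite while becoming sparse beyond range theta.
    t = np.clip(1.0 - h / theta, 0.0, None)
    return t**4 * (1.0 + 4.0 * h / theta)

def ordinary_kriging_estimate(pts, vals, query, theta=0.3):
    h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = exp_cov(h) * wendland_taper(h, theta)   # mostly zeros for small theta
    n = len(pts)
    A = np.zeros((n + 1, n + 1))                # ordinary kriging system:
    A[:n, :n] = C                               # symmetric but indefinite
    A[:n, n] = A[n, :n] = 1.0                   # (unbiasedness constraint)
    h0 = np.linalg.norm(pts - query, axis=1)
    b = np.append(exp_cov(h0) * wendland_taper(h0, theta), 1.0)
    sol, info = minres(csr_matrix(A), b)        # production code would build
    assert info == 0                            # A sparsely from the start
    return sol[:n] @ vals                       # weights dotted with data

rng0 = np.random.default_rng(0)
pts = rng0.random((200, 2))
vals = np.sin(3 * pts[:, 0]) + np.cos(2 * pts[:, 1])
print(ordinary_kriging_estimate(pts, vals, np.array([0.5, 0.5])))
```

The Lagrange-multiplier row and column are exactly what make the system indefinite, which is why a symmetric indefinite Krylov solver such as SYMMLQ (or MINRES here) is the natural fit.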
SAGE: The Self-Adaptive Grid Code. 3
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1999-01-01
The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
Terrestrial laser scanning used to detect asymmetries in boat hulls
NASA Astrophysics Data System (ADS)
Roca-Pardiñas, Javier; López-Alvarez, Francisco; Ordóñez, Celestino; Menéndez, Agustín; Bernardo-Sánchez, Antonio
2012-01-01
We describe a methodology for identifying asymmetries in boat hull sections reconstructed from point clouds captured using a terrestrial laser scanner (TLS). A surface was first fit to the point cloud using a nonparametric regression method that permitted the construction of a continuous smooth surface. Asymmetries in cross-sections of the surface were identified using a bootstrap resampling technique that took into account uncertainty in the coordinates of the scanned points. Each reconstructed section was analyzed to check, for a given level of significance, that it was within the confidence interval for the theoretical symmetrical section. The method was applied to the study of asymmetries in a medium-sized yacht. Differences of up to 5 cm between the real and theoretical sections were identified in some parts of the hull.
Fast RBF OGr for solving PDEs on arbitrary surfaces
NASA Astrophysics Data System (ADS)
Piret, Cécile; Dunn, Jarrett
2016-10-01
The Radial Basis Functions Orthogonal Gradients method (RBF-OGr) was introduced in [1] to discretize differential operators defined on arbitrary manifolds specified only by a point cloud. We take advantage of the meshfree character of RBFs, which gives high accuracy and the flexibility to represent complex geometries in any spatial dimension. A major limitation of the RBF-OGr method was its computational complexity, which greatly restricted the size of the point cloud. In this paper, we apply the RBF-Finite Difference (RBF-FD) technique to the RBF-OGr method for building sparse differentiation matrices discretizing continuous differential operators such as the Laplace-Beltrami operator. This method can be applied to solving PDEs on arbitrary surfaces embedded in ℝ³. We illustrate the accuracy of our new method by solving the heat equation on the unit sphere.
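For a flavor of how one RBF-FD differentiation row is built, here is a minimal sketch for a planar Laplacian with Gaussian RBFs; the actual Laplace-Beltrami construction on a manifold (the RBF-OGr part) is more involved, and the shape parameter, stencil, and names are illustrative assumptions.

```python
# Minimal RBF-FD sketch: weights for a planar Laplacian with Gaussian RBFs.
# Each local stencil yields one row of the sparse differentiation matrix.
import numpy as np

def rbf_fd_laplacian_weights(nodes, center, eps=3.0):
    """Weights w such that sum_j w_j f(x_j) ~ Laplacian f at center."""
    r2 = ((nodes[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    A = np.exp(-eps**2 * r2)                 # RBF interpolation matrix
    d2 = ((center - nodes) ** 2).sum(-1)
    # 2D Laplacian of exp(-eps^2 r^2) is 4 eps^2 (eps^2 r^2 - 1) exp(-eps^2 r^2)
    b = 4 * eps**2 * (eps**2 * d2 - 1.0) * np.exp(-eps**2 * d2)
    return np.linalg.solve(A, b)             # one sparse row per stencil

rng = np.random.default_rng(1)
nodes = rng.random((15, 2)) * 0.3            # small local stencil
center = nodes[0]
w = rbf_fd_laplacian_weights(nodes, center)
f = nodes[:, 0] ** 2 + nodes[:, 1] ** 2      # exact Laplacian is 4
print(w @ f)                                 # approximately 4
```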
A new scoring method for evaluating the performance of earthquake forecasts and predictions
NASA Astrophysics Data System (ADS)
Zhuang, J.
2009-12-01
This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose that we have a reference model, usually the Poisson model in ordinary cases or the Omori-Utsu formula when forecasting aftershocks, which gives the probability p0 that at least 1 event occurs in a given space-time-magnitude window. The forecaster, like a gambler, starts with a certain number of reputation points and bets 1 reputation point on "Yes" or "No" according to his forecast, or bets nothing if he makes a NA-prediction. If the forecaster bets 1 reputation point on "Yes" and loses, his reputation is reduced by 1 point; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on "Yes". In this way, if the reference model is correct, the expected return from the bet is 0. The rule also applies to probability forecasts. Suppose that p is the occurrence probability of an earthquake given by the forecaster. We can regard the forecaster as splitting 1 reputation point by betting p on "Yes" and 1-p on "No". In this way, the forecaster's expected pay-off under the reference model is still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. The method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model, or the ETAS model and the reference model is the Poisson model.
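A minimal sketch of the alarm-based scoring rule above; the payoff for a "No" bet is chosen by the same fairness argument (zero expected return under the reference model) and is implied rather than quoted from the abstract, and the function name and interface are assumptions.

```python
# Minimal sketch of the alarm-based gambling score. p0 is the reference-model
# probability of at least one event in the window; forecast is 1 ("Yes"),
# 0 ("No"), or None (NA-prediction, nothing staked); outcome is 0 or 1.
def gambling_score(p0, forecast, outcome):
    score = 0.0
    for p, f, o in zip(p0, forecast, outcome):
        if f is None:                         # NA-prediction: nothing staked
            continue
        if f == 1:                            # "Yes": win pays (1-p0)/p0
            score += (1.0 - p) / p if o == 1 else -1.0
        else:                                 # "No": win pays p0/(1-p0)
            score += p / (1.0 - p) if o == 0 else -1.0
    return score

# One successful "Yes" bet at p0 = 0.1 earns 9 points; a failed "No" bet
# at p0 = 0.5 loses the staked point: total 8.0.
print(gambling_score([0.1, 0.5], [1, 0], [1, 1]))
```

Under the reference model the expected gain of a "Yes" bet is p0*(1-p0)/p0 + (1-p0)*(-1) = 0, which is exactly the fairness property the abstract describes.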
1982-08-01
[Tabular excerpt only: channel minima and maxima for engine test data (channels PHMG, PS3, T3, T5, PT51; 1988 points) and for cruise and take-off mode data (4137 points); both tables are truncated in the source.]
Effect of Reynolds number and turbulence on airfoil aerodynamics at -90-degree incidence
NASA Technical Reports Server (NTRS)
Stremel, Paul M.
1994-01-01
A method has been developed for calculating the viscous flow about airfoils with and without deflected flaps at -90 deg incidence. This method provides for the solution of the unsteady incompressible Navier-Stokes equations by means of an implicit technique. The solution is calculated on a body-fitted computational mesh using a staggered-grid method. The vorticity is defined at the node points, and the velocity components are defined at the mesh-cell sides. The staggered-grid orientation provides for accurate representation of vorticity at the node points and the continuity equation at the mesh-cell centers. The method provides for the noniterative solution of the flowfield and satisfies the continuity equation to machine zero at each time step. The method is evaluated in terms of its stability to predict two-dimensional flow about an airfoil at -90-deg incidence for varying Reynolds number and laminar/turbulent models. The variations of the average loading and surface pressure distribution due to flap deflection, Reynolds number, and laminar or turbulent flow are presented and compared with experimental results. The comparisons indicate that the calculated drag and drag reduction caused by flap deflection and the calculated average surface pressure are in excellent agreement with the measured results at a similar Reynolds number.
Modeling the solute transport by particle-tracing method with variable weights
NASA Astrophysics Data System (ADS)
Jiang, J.
2016-12-01
Particle-tracing methods are commonly used to simulate solute transport in fractured media. In such methods, the concentration at a point is proportional to the number of particles visiting that point. However, this is rather inefficient at points with small concentration: few particles visit them, which leads to violent oscillation or yields zero concentration values. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at a point is proportional to the sum of the weights of the particles visiting it. The method adjusts the weight factors during simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracked particle is larger than the relative concentration C at the corresponding site, the particle is split into Int(W/C) copies, each simulated independently with weight W/Int(W/C). If the weight W is less than the relative concentration C at the corresponding site, the particle continues to be tracked with probability W/C and its weight is adjusted to C. By adjusting weights, the number of visiting particles is distributed evenly over the whole range. Through this variable-weight scheme, we can eliminate the violent oscillation and increase the accuracy by orders of magnitude.
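The weight-adjustment rule reads like the classical splitting/Russian-roulette trick from Monte Carlo transport; here is a minimal sketch of one adjustment step, with the particle representation, names, and the surrounding random-walk loop assumed for illustration, not taken from the paper.

```python
# Minimal sketch of one weight-adjustment step (splitting plus Russian
# roulette) for the variable-weight particle-tracing scheme described above.
import random

def adjust_weight(particles, i, C):
    """Adjust particle i against the relative concentration C at its site."""
    w = particles[i]["w"]
    if w > C:
        n = int(w / C)                      # split into Int(W/C) copies...
        particles[i]["w"] = w / n           # ...each of weight W/Int(W/C)
        for _ in range(n - 1):
            particles.append(dict(particles[i]))   # copies tracked separately
    elif w < C:
        if random.random() < w / C:         # continue with probability W/C
            particles[i]["w"] = C           # survivor weight adjusted to C
        else:
            particles[i]["alive"] = False   # otherwise terminate the walk

particles = [{"w": 3.5, "alive": True}, {"w": 0.2, "alive": True}]
adjust_weight(particles, 0, C=1.0)          # splits into 3 copies of ~1.17
adjust_weight(particles, 1, C=1.0)          # survives with probability 0.2
print(len(particles), [p["w"] for p in particles])
```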
2014-01-01
Background Continuity in the context of healthcare refers to the perception of the client that care has been connected and coherent over time. For over a decade, professionals providing maternity and child and family health (CFH) services in Australia and internationally have emphasised the importance of continuity of care for women, families and children. However, continuity across maternity and CFH services remains elusive. Continuity is defined and implemented in different ways, resulting in fragmentation of care, particularly at points of transition from one service or professional to another. This paper examines the concept of continuity across the maternity and CFH service continuum from the perspectives of midwifery, CFH nursing, general practitioner (GP) and practice nurse (PN) professional leaders. Methods Data were collected as part of a three phase mixed methods study investigating the feasibility of implementing a national approach to CFH services in Australia (CHoRUS study). Representatives from the four participating professional groups were consulted via discussion groups, focus groups and e-conversations, which were recorded and transcribed. In total, 132 professionals participated, including 45 midwives, 60 CFH nurses, 15 general practitioners and 12 practice nurses. Transcripts were analysed using a thematic approach. Results ‘Continuity’ was used and applied differently within and across groups. The aspects of care most valued by professionals included continuity characterised by the development of a relationship with the family (relational continuity) and good communication (informational continuity). When considering managerial continuity, we found professionals were most concerned with co-ordination of care within their own service, rather than with co-ordination between services. Conclusion These findings add new perspectives to understanding continuity within the maternity and CFH services continuum of care. All health professionals consulted were committed to a smooth journey for families along the continuum. Commitment to collaboration is required if service gaps are to be addressed, particularly at the point of transition of care between services, which was found to be particularly problematic. PMID:24387686
Numerical algorithms for computations of feedback laws arising in control of flexible systems
NASA Technical Reports Server (NTRS)
Lasiecka, Irena
1989-01-01
Several continuous models will be examined which describe flexible structures with boundary or point control/observation. Issues related to the computation of feedback laws are examined (particularly stabilizing feedbacks) with sensors and actuators located either on the boundary or at specific point locations of the structure. One of the main difficulties is due to the great sensitivity of the system (hyperbolic systems with unbounded control actions) with respect to perturbations caused either by uncertainty of the model or by the errors introduced in implementing numerical algorithms. Thus, special care must be taken in the choice of appropriate numerical schemes which eventually lead to implementable finite-dimensional solutions. Finite-dimensional algorithms are constructed on the basis of an a priori analysis of the properties of the original, continuous (infinite-dimensional) systems with the following criteria in mind: (1) convergence and stability of the algorithms and (2) robustness (reasonable insensitivity with respect to the unknown parameters of the systems). Examples with mixed finite element methods and spectral methods are provided.
Design of optimal impulse transfers from the Sun-Earth libration point to asteroid
NASA Astrophysics Data System (ADS)
Wang, Yamin; Qiao, Dong; Cui, Pingyuan
2015-07-01
The lunar probe, Chang'E-2, is the first one to successfully achieve both the transfer to a Sun-Earth libration point orbit and the flyby of the near-Earth asteroid Toutatis. This paper, taking the Chang'E-2 asteroid flyby mission as an example, provides a method to design low-energy transfers from the libration point orbit to an asteroid. The method includes the analysis of transfer families and the design of optimal impulse transfers. Firstly, the one-impulse transfers are constructed by correcting the initial guesses, which are obtained by perturbing in the direction of the unstable eigenvector. Secondly, the optimality of one-impulse transfers is analyzed and the optimal impulse transfers are built by using primer vector theory. After optimization, the transfer families, including the slow and the fast transfers, are refined to be continuous and lower-cost transfers. The method proposed in this paper can also be used for designing transfers from an arbitrary Sun-Earth libration point orbit to a near-Earth asteroid in the Sun-Earth-Moon system.
Code of Federal Regulations, 2010 CFR
2010-07-01
... select sampling port locations and the number of traverse points. Sampling ports must be located at the... Method 25 (40 CFR part 60, appendix A), milligrams per dry standard cubic meters (mg/dscm) for each day... = Conversion factor (mg/lb); and K = Daily production rate of sinter, tons/hr. (4) Continue the sampling and...
Code of Federal Regulations, 2011 CFR
2011-07-01
... select sampling port locations and the number of traverse points. Sampling ports must be located at the... Method 25 (40 CFR part 60, appendix A), milligrams per dry standard cubic meters (mg/dscm) for each day... = Conversion factor (mg/lb); and K = Daily production rate of sinter, tons/hr. (4) Continue the sampling and...
Continuous Control Artificial Potential Function Methods and Optimal Control
2014-03-27
[Front-matter and excerpt fragments only: nomenclature entries CW (Clohessy-Wiltshire), CV (Chase Vehicle), TV (Target Vehicle); the report applies the Clohessy-Wiltshire equations (introduced in its Section 3.5) until the time rate of change of the potential becomes nonnegative, at which time a thrust impulse is applied to eliminate oscillation around the goal point [8, 9].]
Guyennon, Nicolas; Cerretto, Giancarlo; Tavella, Patrizia; Lahaye, François
2009-08-01
In recent years, many national timing laboratories have installed geodetic Global Positioning System receivers together with their traditional GPS/GLONASS Common View receivers and Two Way Satellite Time and Frequency Transfer equipment. Many of these geodetic receivers operate continuously within the International GNSS Service (IGS), and their data are regularly processed by IGS Analysis Centers. From its global network of over 350 stations and its Analysis Centers, the IGS generates precise combined GPS ephemerides and station and satellite clock time series referred to the IGS Time Scale. A processing method called Precise Point Positioning (PPP) is in use in the geodetic community, allowing precise recovery of GPS antenna position, clock phase, and atmospheric delays by taking advantage of these IGS precise products. Previous assessments, carried out at Istituto Nazionale di Ricerca Metrologica (INRiM; formerly IEN) with a PPP implementation developed at Natural Resources Canada (NRCan), showed PPP clock solutions have better stability over the short/medium term than the GPS CV and GPS P3 methods and significantly reduce the day-boundary discontinuities when used in multi-day continuous processing, allowing time-limited, campaign-style time-transfer experiments. This paper reports on follow-on work performed at INRiM and NRCan to further characterize and develop the PPP method for time transfer applications, using data from some of the National Metrology Institutes. We develop a processing procedure that takes advantage of the improved stability of the phase-connected multi-day PPP solutions while allowing the generation of continuous clock time series, more applicable to continuous operation/monitoring of timing equipment.
Method of making a continuous ceramic fiber composite hot gas filter
Hill, Charles A.; Wagner, Richard A.; Komoroski, Ronald G.; Gunter, Greg A.; Barringer, Eric A.; Goettler, Richard W.
1999-01-01
A ceramic fiber composite structure, particularly suitable for use as a hot gas cleanup ceramic fiber composite filter, and a method of making the same from ceramic composite material are disclosed; the structure provides increased strength and toughness in high temperature environments. The ceramic fiber composite structure or filter is made by a process in which a continuous ceramic fiber is intimately surrounded by discontinuous chopped ceramic fibers during manufacture to produce a ceramic fiber composite preform, which is then bonded using various ceramic binders. The ceramic fiber composite preform is then fired to create a bond phase at the fiber contact points. Parameters such as fiber tension, spacing, and the relative proportions of the continuous ceramic fiber and chopped ceramic fibers can be varied as the continuous ceramic fiber and chopped ceramic fiber are simultaneously formed on the porous vacuum mandrel to obtain a desired distribution of the continuous ceramic fiber and the chopped ceramic fiber in the ceramic fiber composite structure or filter.
Cavity master equation for the continuous time dynamics of discrete-spin models.
Aurell, E; Del Ferraro, G; Domínguez, E; Mulet, R
2017-05-01
We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.
Park, Jeong-Hoon; Hong, Ji-Yeon; Jang, Hyun Chul; Oh, Seung Geun; Kim, Sang-Hyoun; Yoon, Jeong-Jun; Kim, Yong Jin
2012-03-01
A facile continuous method for dilute-acid hydrolysis of the representative red seaweed species, Gelidium amansii, was developed, and its hydrolysate was subsequently evaluated for fermentability. In the hydrolysis step, the hydrolysates obtained from a batch reactor and a continuous reactor were systematically compared on the basis of fermentable sugar yield and inhibitor formation. The continuous hydrolysis process has many advantages. For example, the low melting point of the agar component in G. amansii improves the fluidity of the raw material in the continuous reactor. In addition, the hydrolysate obtained from the continuous process delivered a high sugar and low inhibitor concentration, thereby leading to both a high yield and a high final ethanol titer in the fermentation process. Copyright © 2011 Elsevier Ltd. All rights reserved.
Grid Effect on Spherical Shallow Water Jets Using Continuous and Discontinuous Galerkin Methods
2013-01-01
[Excerpt fragments only: the high-order Legendre-Gauss-Lobatto (LGL) points are added to the linear grid by projecting the linear elements onto the auxiliary gnomonic space; after mapping, the triangles are subdivided into smaller ones by a Lagrange polynomial of order nI; a term involving the product of the acceleration of gravity and the vertical height of the fluid appears, and ν∇² is the artificial viscosity term with viscous coefficient ν = 1×10⁵ m² s⁻¹.]
Streamline-based microfluidic device
NASA Technical Reports Server (NTRS)
Tai, Yu-Chong (Inventor); Zheng, Siyang (Inventor); Kasdan, Harvey (Inventor)
2013-01-01
The present invention provides a streamline-based device and a method for using the device for continuous separation of particles including cells in biological fluids. The device includes a main microchannel and an array of side microchannels disposed on a substrate. The main microchannel has a plurality of stagnation points with a predetermined geometric design, for example, each of the stagnation points has a predetermined distance from the upstream edge of each of the side microchannels. The particles are separated and collected in the side microchannels.
An adaptive grid scheme using the boundary element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munipalli, R.; Anderson, D.A.
1996-09-01
A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.
Superiorization with level control
NASA Astrophysics Data System (ADS)
Cegielski, Andrzej; Al-Musallam, Fadhel
2017-04-01
The convex feasibility problem is to find a common point of a finite family of closed convex subsets. In many applications one requires something more, namely finding a common point of closed convex subsets which minimizes a continuous convex function. The latter requirement leads to the superiorization methodology, which sits between methods for the convex feasibility problem and methods for convex constrained minimization. Inspired by the superiorization idea, we introduce a method which sequentially applies a long-step algorithm to a sequence of convex feasibility problems; the method employs quasi-nonexpansive operators as well as subgradient projections with level control and does not require evaluation of the metric projection. We replace a perturbation of the iterations (applied in the superiorization methodology) by a perturbation of the current level in minimizing the objective function. We consider the method in the Euclidean space in order to guarantee strong convergence, although the method is well defined in a Hilbert space.
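For intuition, the subgradient projection onto a sublevel set, the basic building block mentioned above, can be sketched as follows; this is the generic textbook step with an illustrative example, not the authors' full long-step algorithm with level control.

```python
# Generic subgradient-projection step onto a sublevel set {f <= level};
# sketched for intuition only, with illustrative names.
import numpy as np

def subgradient_projection(x, f, subgrad, level):
    fx = f(x)
    if fx <= level:
        return x                                  # already in the level set
    g = subgrad(x)
    return x - (fx - level) / (g @ g) * g         # Polyak-style relaxation

f = lambda x: np.abs(x).sum()                     # f(x) = ||x||_1
sg = lambda x: np.sign(x)                         # a subgradient of f
x = np.array([2.0, -3.0])
for _ in range(50):
    x = subgradient_projection(x, f, sg, level=1.0)
print(x, f(x))                                    # lands in {f <= 1}
```

Lowering `level` between sweeps is, loosely, the kind of perturbation of the current level that the abstract describes in place of perturbing the iterates themselves.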
Yu, Hui; Qi, Dan; Li, Heng-da; Xu, Ke-xin; Yuan, Wei-jie
2012-03-01
Weak signal, low instrument signal-to-noise ratio, continuous variation of the human physiological environment, and interference from other blood components make it difficult to extract blood glucose information from the near infrared spectrum in noninvasive blood glucose measurement. The floating-reference method, which analyses the effect of glucose concentration variation on the absorption and scattering coefficients, acquires spectra at a reference point, where the light intensity variations caused by absorption and scattering cancel each other, and at a measurement point, where they are largest. By using the spectrum from the reference point as a reference, the floating-reference method can reduce the interference from variations in the physiological environment and experimental circumstances. In the present paper, the effectiveness of the floating-reference method in improving prediction precision and stability was assessed through application experiments. A comparison was made between models whose data were processed with and without the floating-reference method. The results showed that the root mean square error of prediction (RMSEP) decreased by up to 34.7%. The floating-reference method can reduce the influence of changes in the samples' state, instrument noise, and drift, and effectively improve the models' prediction precision and stability.
Continuation of periodic orbits in the Sun-Mercury elliptic restricted three-body problem
NASA Astrophysics Data System (ADS)
Peng, Hao; Bai, Xiaoli; Xu, Shijie
2017-06-01
Starting from resonant Halo orbits in the Circular Restricted Three-Body Problem (CRTBP), Multi-revolution Elliptic Halo (ME-Halo) orbits around the L1 and L2 points in the Sun-Mercury Elliptic Restricted Three-Body Problem (ERTBP) are generated systematically. Three pairs of resonant parameters, M5N2, M7N3 and M9N4, are tested. The first pair shows special features and is investigated in detail. Three separated characteristic curves of periodic orbits around each libration point are obtained, showing that the eccentricity varies non-monotonically along these curves. The eccentricity of the Sun-Mercury system can be reached by the continuation method in only a few cases. The stability analysis shows that these orbits are all unstable and that complex instability occurs for certain parameters. This paper presents new periodic orbits in both the CRTBP and the ERTBP. In total, four periodic orbits with parameters M5N2 around each libration point are extracted in the Sun-Mercury ERTBP.
Apparatus for producing laser targets
Jarboe, T.R.; Baker, W.R.
1975-09-23
This patent relates to an apparatus and method for producing deuterium targets or pellets of 25 μm to 75 μm diameter. The pellets are sliced from a continuously spun solid deuterium thread at a rate of up to 10 pellets/second. After being sliced from the continuous thread of deuterium, the pellets are collimated and directed to a point of use, such as a laser-activated combustion or explosion chamber, wherein the pellets are imploded by laser energy, or laser-produced target plasmas for neutral beam injection.
Optimization of pressure probe placement and data analysis of engine-inlet distortion
NASA Astrophysics Data System (ADS)
Walter, S. F.
The purpose of this research is to examine methods by which the quantification of inlet flow distortion may be improved. Specifically, this research investigates how data interpolation affects results, how to optimize the sampling locations of the flow, and how sensitive the results are to the number of sample locations. The main parameters indicative of a "good" design are total pressure recovery, mass flow capture, and distortion. This work focuses on total pressure distortion, which describes the amount of non-uniformity that exists in the flow as it enters the engine. All engines must tolerate some level of distortion; however, too much distortion can cause the engine to stall or the inlet to unstart. Flow distortion is measured at the interface between the inlet and the engine. To determine inlet flow distortion, a combination of computational and experimental pressure data is generated and then collapsed into an index that indicates the amount of distortion. Computational simulations generate continuous contour maps, but experimental data is discrete. Researchers require continuous contour maps to evaluate the overall distortion pattern, yet there is no guidance on how best to manipulate discrete points into a continuous pattern. Using one experimental 320-probe data set and one 320-point computational data set, with three test runs each, this work compares the pressure results obtained using all 320 points of data from the original sets, both quantitatively and qualitatively, with results derived from selecting 40-point subsets and interpolating to 320 grid points. Each of the two 40-point sets was interpolated to 320 grid points using four different interpolation methods in an attempt to establish the best method for interpolating small sets of data into an accurate, continuous contour map. The interpolation methods investigated are bilinear, spline, and Kriging in Cartesian space, as well as angular in polar space. Spline interpolation methods should be used, as they result in the most accurate, precise, and visually correct predictions when compared with results achieved from the full data sets. Researchers were also interested in whether fewer than the recommended 40 probes could be used, especially when placed in areas of high interest, while still obtaining equivalent or better results. For this investigation, the computational results from a two-dimensional inlet and experimental results of an axisymmetric inlet were used. To find the areas of interest, a uniform sampling of all possible locations was run through a Monte Carlo simulation with a varying number of probes, and a probability density function of the resultant distortion index was plotted. Certain probes are required to come within the desired accuracy level of the distortion index based on the full data set. For the experimental results, all three test cases could be characterized with 20 probes. For the axisymmetric inlet, placing 40 probes in select locations brought the results for the parameters of interest within less than 10% of the exact solution for almost all cases. For the two-dimensional inlet the results were not as clear: 80 probes were required to get within 10% of the exact solution for all run numbers, although this is largely due to the small value of the exact result. The sensitivity of each probe added to the experiment was analyzed. Instead of looking at the overall pattern established by optimizing probe placements, the focus is on varying the number of sampled probes from 20 to 40.
The number of points falling within a 1% tolerance band of the exact solution was counted as the number of good points. The results were normalized for each data set, and a general sensitivity function was found to determine the sensitivity of the results. A linear regression was used to generalize the results for all data sets used in this work; however, the results can also be used by directly comparing the number of good points obtained with various numbers of probes. The sensitivity of the results is higher when fewer probes are used and gradually tapers off near 40 probes. There is a bigger gain in good points when the number of probes is increased from 20 to 21 than from 39 to 40.
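A minimal sketch of the subset-interpolation experiment described above, using a synthetic pressure field; SciPy's griddata "linear" and "cubic" options stand in for the bilinear and spline schemes, the probe counts mirror the study, and everything else is an assumption.

```python
# Sketch: interpolate a 40-probe subset back to 320 locations and count how
# many points land inside a 1% tolerance band, as in the study above.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
full = rng.random((320, 2))                       # 320 rake/probe locations
p_full = np.cos(4 * full[:, 0]) * full[:, 1]      # synthetic total pressure
sub = rng.choice(320, size=40, replace=False)     # 40-probe subset

for method in ("linear", "cubic"):
    p_hat = griddata(full[sub], p_full[sub], full, method=method)
    ok = np.isfinite(p_hat)                       # points outside hull -> NaN
    err = np.abs(p_hat[ok] - p_full[ok])
    good = (err <= 0.01 * np.abs(p_full[ok]).max()).sum()
    print(method, "points within 1% band:", good, "/", ok.sum())
```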
Optimizing Web-Based Instruction: A Case Study Using Poultry Processing Unit Operations
ERIC Educational Resources Information Center
O' Bryan, Corliss A.; Crandall, Philip G.; Shores-Ellis, Katrina; Johnson, Donald M.; Ricke, Steven C.; Marcy, John
2009-01-01
Food companies and supporting industries need inexpensive, revisable training methods for large numbers of hourly employees due to continuing improvements in Hazard Analysis Critical Control Point (HACCP) programs, new processing equipment, and high employee turnover. HACCP-based food safety programs have demonstrated their value by reducing the…
40 CFR 60.54 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... at each point for equal increments of time. (2) Excess air measurements may be used to determine the... Section 60.54 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... “adjusted” CO2 concentration [(%CO2)adj], which accounts for the effects of CO2 absorption and dilution air...
NASA Astrophysics Data System (ADS)
Su, Yun-Ting; Hu, Shuowen; Bethel, James S.
2017-05-01
Light detection and ranging (LIDAR) has become a widely used tool in remote sensing for mapping, surveying, modeling, and a host of other applications. The motivation behind this work is the modeling of piping systems in industrial sites, where cylinders are the most common primitive or shape. We focus on cylinder parameter estimation in three-dimensional point clouds, proposing a mathematical formulation based on angular distance to determine the cylinder orientation. We demonstrate the accuracy and robustness of the technique on synthetically generated cylinder point clouds (where the true axis orientation is known) as well as on real LIDAR data of piping systems. The proposed algorithm is compared with a discrete space Hough transform-based approach as well as a continuous space inlier approach, which iteratively discards outlier points to refine the cylinder parameter estimates. Results show that the proposed method is more computationally efficient than the Hough transform approach and is more accurate than both the Hough transform approach and the inlier method.
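The abstract does not spell out its angular-distance formulation, so as a hedged illustration here is a common alternative for the same subtask: on a cylinder every surface normal is perpendicular to the axis, so the axis direction is the smallest-eigenvalue eigenvector of the normals' scatter matrix. This is not the authors' method; normals are assumed precomputed (e.g., by local PCA).

```python
# Common normals-based cylinder-axis estimate (not the paper's formulation):
# the axis is the direction least spanned by the unit surface normals.
import numpy as np

def cylinder_axis_from_normals(normals):
    M = normals.T @ normals              # 3x3 scatter of unit normals
    evals, evecs = np.linalg.eigh(M)     # eigenvalues in ascending order
    return evecs[:, 0]                   # smallest-eigenvalue eigenvector

# Synthetic check: cylinder along z, normals radial in the xy-plane.
t = np.linspace(0, 2 * np.pi, 500)
normals = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
print(cylinder_axis_from_normals(normals))   # ~ [0, 0, ±1]
```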
Bui, H N; Bogers, J P A M; Cohen, D; Njo, T; Herruer, M H
2016-12-01
We evaluated the performance of the HemoCue WBC DIFF, a point-of-care device for total and differential white cell counts, primarily to test its suitability for the mandatory white blood cell monitoring in clozapine use. Leukocyte counting and 5-part differentiation were performed by the point-of-care device and by the routine laboratory method in venous EDTA-blood samples from 20 clozapine users, 20 neutropenic patients, and 20 healthy volunteers. From the volunteers, a capillary sample was also drawn. Intra-assay reproducibility and drop-to-drop variation were tested. The correlation between the two methods in venous samples was r > 0.95 for leukocyte, neutrophil, and lymphocyte counts. The correlation between the point-of-care (capillary sample) and routine (venous sample) methods for these cells was 0.772, 0.817, and 0.798, respectively. Only for leukocyte and neutrophil counts was the intra-assay reproducibility sufficient. The point-of-care device can be used to screen for leukocyte and neutrophil counts. Because of the relatively high measurement uncertainty and poor correlation with venous samples, we recommend repeating the measurement with a venous sample if cell counts are in the lower reference range. In the case of clozapine therapy, neutropenia can probably be excluded if high neutrophil counts are found, and patients can continue their therapy. © 2016 John Wiley & Sons Ltd.
Restoring method for missing data of spatial structural stress monitoring based on correlation
NASA Astrophysics Data System (ADS)
Zhang, Zeyu; Luo, Yaozhi
2017-07-01
Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. Missing stretches in the monitoring data will affect the data analysis and safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes at different measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected over three months of the season in which the data to be restored are missing. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once more than 6 correlated points are used. The stress baseline value of each construction step should be calculated before interpolating missing data in the construction stage, and the average error is then within 10%. The interpolation error for continuous missing data is slightly larger than for discrete missing data. The data missing rate for this method should not exceed 30%. Finally, a measuring point's missing monitoring data is restored to verify the validity of the method.
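A minimal sketch of the correlation-based restoration: fit a linear model of the gappy sensor on a correlated sensor over the paired training window, then predict across the gap. Day and night samples would be fitted separately, as described above; variable names and the synthetic data are illustrative assumptions.

```python
# Restore a gap in one stress channel from a correlated channel by simple
# linear regression, as in the method sketched above.
import numpy as np

def restore_missing(stress_ref, stress_tgt, gap_mask):
    """stress_ref: correlated point; stress_tgt has NaNs where gap_mask."""
    ok = ~gap_mask
    a, b = np.polyfit(stress_ref[ok], stress_tgt[ok], 1)   # slope, intercept
    out = stress_tgt.copy()
    out[gap_mask] = a * stress_ref[gap_mask] + b
    return out

rng = np.random.default_rng(3)
ref = np.linspace(0, 10, 100) + rng.normal(0, 0.1, 100)   # correlated sensor
tgt = 2.0 * ref + 5.0                                     # target sensor
mask = np.zeros(100, bool)
mask[40:55] = True                        # a continuous missing run
tgt_gappy = tgt.copy()
tgt_gappy[mask] = np.nan
restored = restore_missing(ref, tgt_gappy, mask)
print(np.max(np.abs(restored[mask] - tgt[mask])))         # ~0 on this toy
```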
Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.
1981-01-01
To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat-earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies and POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
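A hedged sketch of the core step: relate observed anomalies to a layer of point sources by least squares, then evaluate the fitted sources elsewhere (here, an upward continuation). A simple 1/r potential kernel, flat geometry, and random stand-in data are assumptions for illustration; the paper works on the sphere.

```python
# Equivalent point-source inversion by least squares, then continuation:
# d = G m  ->  m = argmin ||G m - d||, then predict the field elsewhere.
import numpy as np

def kernel(obs, src):
    r = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=-1)
    return 1.0 / r                           # illustrative 1/r potential

rng = np.random.default_rng(4)
obs = np.column_stack([rng.random((50, 2)) * 100, np.full(50, 400.0)])  # km
src = np.column_stack([rng.random((30, 2)) * 100, np.zeros(30)])        # layer
d = rng.normal(size=50)                      # stand-in anomaly observations

G = kernel(obs, src)
m, *_ = np.linalg.lstsq(G, d, rcond=None)    # equivalent source strengths

obs_up = obs + np.array([0.0, 0.0, 100.0])   # continue the field 100 km up
d_up = kernel(obs_up, src) @ m               # linear transform of the sources
print(d_up[:3])
```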
NASA Astrophysics Data System (ADS)
Nieuwoudt, Michel K.; Holroyd, Steve E.; McGoverin, Cushla M.; Simpson, M. Cather; Williams, David E.
2017-02-01
Point-of-care diagnostics are of interest in the medical, security and food industries, the latter particularly for screening food adulterated for economic gain. Milk adulteration continues to be a major problem worldwide, and different methods to detect fraudulent additives have been investigated for over a century. Laboratory based methods are limited in their application to point-of-collection diagnosis and also require expensive instrumentation, chemicals and skilled technicians. This has encouraged exploration of spectroscopic methods as more rapid and inexpensive alternatives. Raman spectroscopy has excellent potential for the screening of milk because of the rich complexity inherent in its signals. Rapid advances in photonic technologies and fabrication methods are enabling increasingly sensitive portable mini-Raman systems to be placed on the market that are both affordable and feasible for point-of-care and point-of-collection applications. We have developed a powerful spectroscopic method for rapidly screening liquid milk for sucrose and four nitrogen-rich adulterants (dicyandiamide (DCD), ammonium sulphate, melamine, urea), using a combined system: a small, portable Raman spectrometer with a focusing fibre-optic probe and optimized reflective focusing wells, simply fabricated in aluminium. The reliable sample presentation of this system enabled a high reproducibility of 8% RSD (relative standard deviation) within four minutes. Limit-of-detection intervals for the PLS calibrations ranged between 140 and 520 ppm for the four N-rich compounds and between 0.7 and 3.6% for sucrose. The portability of the system and the reliability and reproducibility of the technique open opportunities for general, reagentless adulteration screening of biological fluids as well as milk at the point of collection.
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
NASA Astrophysics Data System (ADS)
Wang, Weixing; Wang, Zhiwei; Han, Ya; Li, Shuang; Zhang, Xin
2015-03-01
In order to ensure safety, long-term stability, and quality control in modern tunneling operations, the acquisition of geotechnical information about encountered rock conditions and detailed installed-support information is required. The limited space and time in an operational tunnel environment make acquiring these data challenging. Laser scanning in a tunneling environment, however, shows great potential. The surveying and mapping of tunnels are crucial for optimal use after construction and in routine inspections. Most of these applications focus on the geometric information of the tunnels extracted from the laser scanning data. Two kinds of applications are widely discussed: deformation measurement and feature extraction. Traditional deformation measurement in an underground environment is performed with a series of permanent control points installed around the profile of an excavation, which is unsuitable for a global consideration of the investigated area. Using laser scanning for deformation analysis provides many benefits compared with traditional monitoring techniques. The change in profile can be fully characterized, and areas of anomalous movement can easily be separated from overall trends due to the high density of the point cloud data. Furthermore, monitoring with a laser scanner does not require the permanent installation of control points, so monitoring can begin more quickly after excavation; and the scanning is non-contact, so no damage is done during the installation of temporary control points. The main drawback of using laser scanning for deformation monitoring is that the point accuracy of the original data is generally of the same magnitude as the smallest deformations to be measured. To overcome this, statistical techniques and three-dimensional image processing techniques for the point clouds must be developed. To control the problem of over/underbreak detection of roadways safely, effectively, and easily, and to address the difficulty of roadway data collection, this paper presents a new method of continuous section extraction and over/underbreak detection based on 3D laser scanning technology and image processing. The method comprises three steps: Canny edge detection, local axis fitting, and continuous section extraction with over/underbreak detection. First, after Canny edge detection, a least-squares curve fit is used to approximate the axis locally. Then the attitude of the local roadway is adjusted so that the roadway axis is consistent with the extraction reference direction, and sections are extracted along that direction. Finally, the actual cross section is compared with the design cross section to complete overbreak detection. Experimental results show that, compared with traditional detection methods, the proposed method has a great advantage in computing cost and ensures that cross sections are intercepted orthogonally.
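A hedged sketch of the first two steps named above, Canny edge detection followed by a local least-squares axis fit; the synthetic profile image and the quadratic fit degree are illustrative assumptions, not the paper's data.

```python
# Detect section edges with Canny, then fit a local least-squares curve to the
# edge points as a stand-in for the axis-fitting step. Input is synthetic.
import cv2
import numpy as np

img = np.zeros((200, 300), np.uint8)
cv2.ellipse(img, (150, 100), (120, 70), 0, 0, 360, 255, 2)  # mock tunnel profile
edges = cv2.Canny(img, 50, 150)

ys, xs = np.nonzero(edges)                   # edge pixel coordinates
axis = np.poly1d(np.polyfit(xs, ys, deg=2))  # local quadratic axis approximation
print(axis)
```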
Efficient terrestrial laser scan segmentation exploiting data structure
NASA Astrophysics Data System (ADS)
Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa
2016-09-01
New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently avoids setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), a common approach for point cloud segmentation.
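A hedged sketch of the panorama construction the abstract exploits: points are binned at fixed angular increments into an image grid whose layers (range here; intensity and color analogously) feed an image segmentation algorithm. The angular resolutions are assumed placeholders.

```python
# Build a panoramic range image from a scan by binning points at fixed
# angular increments, mirroring the scan's neighborhood structure.
import numpy as np

def panorama_from_points(xyz, az_res=0.5, el_res=0.5):
    x, y, z = xyz.T
    rad = np.sqrt(x**2 + y**2 + z**2)
    az = np.degrees(np.arctan2(y, x)) % 360.0              # azimuth in [0, 360)
    el = np.degrees(np.arcsin(z / np.maximum(rad, 1e-9)))  # elevation
    cols = (az / az_res).astype(int)
    rows = ((el - el.min()) / el_res).astype(int)
    pano = np.full((rows.max() + 1, int(360 / az_res)), np.nan)
    pano[rows, cols] = rad     # range layer; intensity/color layers analogous
    return pano

print(panorama_from_points(np.random.rand(1000, 3) - 0.5).shape)
```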
Analytic Evolution of Singular Distribution Amplitudes in QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tandogan Kunkel, Asli
2014-08-01
Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities, such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.
Robust curb detection with fusion of 3D-Lidar and camera data.
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-05-21
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
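A hedged sketch of the dynamic-programming step described above: given candidate curb points per image row, link them into the lowest-cost continuous path. The unary/pairwise costs here are illustrative placeholders, not the paper's Markov-chain model.

```python
# Link row-wise curb candidates into an optimal path by dynamic programming,
# penalizing horizontal jumps between consecutive rows (continuity prior).
import numpy as np

def best_curb_path(candidates, smooth_weight=1.0):
    """candidates: list of 1D arrays of column positions, one per row."""
    cost = [np.zeros(len(c)) for c in candidates]
    back = [np.zeros(len(c), dtype=int) for c in candidates]
    for r in range(1, len(candidates)):
        jump = np.abs(candidates[r][:, None] - candidates[r - 1][None, :])
        total = cost[r - 1][None, :] + smooth_weight * jump
        back[r] = np.argmin(total, axis=1)   # best predecessor per candidate
        cost[r] = np.min(total, axis=1)
    path = [int(np.argmin(cost[-1]))]
    for r in range(len(candidates) - 1, 0, -1):
        path.append(int(back[r][path[-1]]))
    return [candidates[r][i] for r, i in enumerate(reversed(path))]

rows = [np.array([10.0, 50.0]), np.array([12.0, 49.0]), np.array([11.0, 52.0])]
print(best_curb_path(rows))
```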
2010-03-31
Presented at the AFRL-organized Aeroelastic Workshop in Sedona, October 2008, and at the AVT-168 Symposium on Morphing Vehicles, Lisbon, Portugal, April 2009. Conventional deforming grid methods will fail at a point when the geometry change becomes large, no matter how good the method; the ballute aeroelastic problem for Martian entry involves the Knudsen number as a gas-kinetic parameter.
2012-09-01
Covers telemetry RF preamplifiers, telemetry multicouplers, downconverters, and telemetry receivers. Noise figure (NFdb) is expressed in decibels and noise factor (nf) in decimal units. For example, a noise figure of 3 dB corresponds to a noise factor of 2.
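Since the recoverable fragment above states the decibel/decimal relation, here is a one-line worked check of it, using NFdb = 10*log10(nf):

```python
# Convert noise figure in dB to noise factor: nf = 10**(NF_dB / 10).
def noise_factor(nf_db: float) -> float:
    return 10 ** (nf_db / 10)

print(noise_factor(3.0))  # ~1.995, i.e. approximately 2, as stated above
```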
Davis, Stephen N; Horton, Edward S; Battelino, Tadej; Rubin, Richard R; Schulman, Kevin A; Tamborlane, William V
2010-04-01
Sensor-augmented pump therapy (SAPT) integrates real-time continuous glucose monitoring (RT-CGM) with continuous subcutaneous insulin infusion (CSII) and offers an alternative to multiple daily injections (MDI). Previous studies provide evidence that SAPT may improve clinical outcomes among people with type 1 diabetes. Sensor-Augmented Pump Therapy for A1c Reduction (STAR) 3 is a multicenter randomized controlled trial comparing the efficacy of SAPT to that of MDI in subjects with type 1 diabetes. Subjects were randomized to either continue with MDI or transition to SAPT for 1 year. Subjects in the MDI cohort were allowed to transition to SAPT for 6 months after completion of the study. SAPT subjects who completed the study were also allowed to continue for 6 months. The primary end point was the difference between treatment groups in change in hemoglobin A1c (HbA1c) percentage from baseline to 1 year of treatment. Secondary end points included percentage of subjects with HbA1c ≤7% and without severe hypoglycemia, as well as area under the curve of time spent in normal glycemic ranges. Tertiary end points included percentage of subjects with HbA1c ≤7%, key safety end points, user satisfaction, and responses on standardized assessments. A total of 495 subjects were enrolled, and the baseline characteristics were similar between the SAPT and MDI groups. Study completion is anticipated in June 2010. Results of this randomized controlled trial should help establish whether an integrated RT-CGM and CSII system benefits patients with type 1 diabetes more than MDI.
A Call for Methodological Plurality: Reconsidering Research Approaches in Adult Education
ERIC Educational Resources Information Center
Daley, Barbara J.; Martin, Larry G.; Roessger, Kevin M.
2018-01-01
Within this "Adult Education Quarterly" ("AEQ") forum, the authors call for a dialogue and examination of research methods in the field of adult and continuing education. Using the article by Boeren as a starting point, the authors analyze both qualitative and quantitative research trends and advocate for more methodological…
Data retrieval system provides unlimited hardware design information
NASA Technical Reports Server (NTRS)
Rawson, R. D.; Swanson, R. L.
1967-01-01
Data are input to magnetic tape on a single-format card that specifies the system, location, and component; the test point identification number; the operator's initials; the date; a data code; and the data itself. This method is efficient for large-volume data storage and retrieval, and permits output variations without continuous program modifications.
Spreadsheet Modeling of (Q,R) Inventory Policies
ERIC Educational Resources Information Center
Cobb, Barry R.
2013-01-01
This teaching brief describes a method for finding an approximately optimal combination of order quantity and reorder point in a continuous review inventory model using a discrete expected shortage calculation. The technique is an alternative to a model where expected shortage is calculated by integration, and can allow students who have not had a…
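As a hedged sketch of the discrete expected-shortage calculation the brief describes, the snippet below sums (d - R) * P(D = d) over demand values above the reorder point; the Poisson lead-time demand is an illustrative assumption, not the distribution used in the teaching brief.

```python
# Discrete expected shortage at reorder point R: sum of (d - R) * P(D = d)
# for d > R. Poisson lead-time demand is assumed for illustration only.
from math import exp

def expected_shortage(R, mean_demand, dmax=100):
    p = exp(-mean_demand)        # P(D = 0)
    total = 0.0
    for d in range(1, dmax + 1):
        p *= mean_demand / d     # Poisson recurrence: P(D=d) from P(D=d-1)
        if d > R:
            total += (d - R) * p
    return total

print(expected_shortage(R=12, mean_demand=10.0))
```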
A Proposal for Research on Complex Media, Imagining and Uncertainty Quantification
2013-11-26
Demonstration that the Green's function for wave propagation in an ergodic cavity can be recovered exactly by cross-correlation of signals at two points; and the continuation of a project in which we have developed autofocus methods based on a phase-space formulation (Wigner transform) of the array data.
Improving CME: Using Participant Satisfaction Measures to Specify Educational Methods
ERIC Educational Resources Information Center
Olivieri, Jason J.; Regala, Roderick P.
2013-01-01
Imagine having developed a continuing medical education (CME) initiative to educate physicians on updated guidelines regarding high cholesterol in adults. This initiative consisted of didactic presentations and case-based discussions offered in 5 major US cities, followed by a Web-based enduring component to distill key points of the live…
The effect of Reynolds number and turbulence on airfoil aerodynamics at -90 degrees incidence
NASA Technical Reports Server (NTRS)
Stremel, Paul M.
1993-01-01
A method has been developed for calculating the viscous flow about airfoils with and without deflected flaps at -90 deg incidence. This method provides for the solution of the unsteady incompressible Navier-Stokes equations by means of an implicit technique. The solution is calculated on a body-fitted computational mesh using a staggered grid method. The vorticity is defined at the node points, and the velocity components are defined at the mesh-cell sides. The staggered-grid orientation provides for accurate representation of vorticity at the node points and the continuity equation at the mesh-cell centers. The method provides for the direct solution of the flow field and satisfies the continuity equation to machine zero at each time step. The method is evaluated in terms of its ability to predict two-dimensional flow about an airfoil at -90 deg incidence for varying Reynolds number and different boundary layer models. A laminar and a turbulent boundary layer model are considered in the evaluation of the method. The variation of the average loading and surface pressure distribution due to flap deflection, Reynolds number, and laminar or turbulent flow is presented and compared with experimental results. The comparisons indicate that the calculated drag and drag reduction caused by flap deflection and the calculated average surface pressure are in excellent agreement with the measured results at a similar Reynolds number.
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1977-01-01
The problem of mathematically defining a smooth surface passing through a finite set of given points is studied. Literature relating to the problem is briefly reviewed. An algorithm is described that first constructs a triangular grid in the (x,y) domain and then estimates first partial derivatives at the nodal points. Interpolation in the triangular cells using a method that gives C^1 continuity overall is examined. Performance of software implementing the algorithm is discussed. Theoretical results are presented that provide valuable guidance in the development of algorithms for constructing triangular grids.
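A present-day analogue of this pipeline, offered as a hedged sketch rather than the paper's own software: SciPy's CloughTocher2DInterpolator triangulates the scattered (x, y) points, estimates gradients at the nodes, and interpolates with C^1 continuity across triangle edges.

```python
# Scattered-data surface with C1 continuity on a Delaunay triangulation.
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(50, 2))                 # scattered (x, y) points
vals = np.sin(pts[:, 0] * 6) * np.cos(pts[:, 1] * 6)  # sampled surface values

surf = CloughTocher2DInterpolator(pts, vals)
print(surf(0.5, 0.5))   # smooth interpolated value at an interior point
```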
Distance majorization and its applications.
Chi, Eric C; Zhou, Hua; Lange, Kenneth
2014-08-01
The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
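A hedged toy rendering of the penalty-plus-majorization idea in this abstract: the proximity terms dist(x, C_i)^2 have easy gradients x - P_i(x) when each projection P_i is easy, so gradient steps on the penalized objective with an escalating penalty approach the constrained minimizer. The sets, target point, and step rule below are illustrative.

```python
# Minimize 0.5*||x - a||^2 over (unit ball) ∩ {x[0] <= 0.5} via the penalty
# f(x) + (rho/2) * sum_i dist(x, C_i)^2, using easy per-set projections.
import numpy as np

a = np.array([3.0, 3.0])                                # target point
proj_ball = lambda x: x / max(np.linalg.norm(x), 1.0)   # onto unit ball
proj_half = lambda x: np.array([min(x[0], 0.5), x[1]])  # onto {x[0] <= 0.5}

x, rho = a.copy(), 1.0
for _ in range(300):
    grad = (x - a) + rho * ((x - proj_ball(x)) + (x - proj_half(x)))
    x = x - grad / (1.0 + 2 * rho)   # step from a Lipschitz bound on the gradient
    rho *= 1.05                      # classical penalty escalation
print(x)                             # -> near (0.5, 0.866), the constrained optimum
```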
Source imaging of potential fields through a matrix space-domain algorithm
NASA Astrophysics Data System (ADS)
Baniamerian, Jamaledin; Oskooi, Behrooz; Fedi, Maurizio
2017-01-01
Imaging of potential fields yields a fast 3D representation of the source distribution of potential fields. Imaging methods are all based on multiscale methods, allowing the source parameters of potential fields to be estimated from a simultaneous analysis of the field at various scales or, in other words, at many altitudes. Accuracy in performing upward continuation and differentiation of the field therefore has a key role for this class of methods. We here describe an accurate method for performing upward continuation and vertical differentiation in the space domain. We perform a direct discretization of the integral equations for upward continuation and the Hilbert transform; from these equations we then define matrix operators performing the transformation, which are symmetric (upward continuation) or anti-symmetric (differentiation), respectively. Thanks to these properties, just the first row of the matrices needs to be computed, which decreases the computation cost dramatically. Our approach allows a simple procedure, with the advantage of not involving large data extension or tapering, as is instead required for Fourier-domain computation. It also allows level-to-drape upward continuation and a stable differentiation at high frequencies; finally, the upward continuation and differentiation kernels may be merged into a single kernel. The accuracy of our approach is shown to be important for multi-scale algorithms, such as the continuous wavelet transform or the DEXP (depth from extreme points) method, because border errors, which tend to propagate largely at the largest scales, are radically reduced. The application of our algorithm to synthetic and real-case gravity and magnetic data sets confirms the accuracy of our space-domain strategy over FFT algorithms and standard convolution procedures.
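A hedged 1D illustration of the space-domain operator idea: upward continuation is a convolution, so its discretized operator is a symmetric Toeplitz matrix fully determined by its first row, here built from the half-plane Poisson kernel for continuation height h. The profile, spacing, and height are illustrative, not the paper's data.

```python
# Upward-continue a synthetic 1D profile with a symmetric Toeplitz operator
# built from just its first row (Poisson kernel for continuation height h).
import numpy as np
from scipy.linalg import toeplitz

n, dx, h = 256, 1.0, 5.0
offsets = np.arange(n) * dx
first_row = (h / np.pi) / (offsets**2 + h**2) * dx   # kernel samples
U = toeplitz(first_row)                              # symmetric: one row suffices

field = np.exp(-(((np.arange(n) - 100) * dx) / 10.0) ** 2)  # synthetic anomaly
print(field.max(), (U @ field).max())   # amplitude decays after continuation
```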
Method and system for data clustering for very large databases
NASA Technical Reports Server (NTRS)
Livny, Miron (Inventor); Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor)
1998-01-01
Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
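A hedged sketch of the clustering-feature summary this patent describes: keeping (N, linear sum, square sum) per cluster is enough to add points, merge clusters, and compute centroids and radii without revisiting the raw data. Class and method names are illustrative.

```python
# Clustering feature (CF): N, linear sum LS, square sum SS per cluster.
import numpy as np

class ClusteringFeature:
    def __init__(self, dim):
        self.n = 0
        self.ls = np.zeros(dim)   # linear sum of member points
        self.ss = 0.0             # sum of squared norms of member points

    def add(self, x):
        self.n += 1
        self.ls += x
        self.ss += float(x @ x)

    def merge(self, other):       # CFs are additive, enabling tree restructuring
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss

    def centroid(self):
        return self.ls / self.n

    def radius(self):             # RMS distance of member points from centroid
        c = self.centroid()
        return np.sqrt(max(self.ss / self.n - float(c @ c), 0.0))

cf = ClusteringFeature(2)
for p in np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]):
    cf.add(p)
print(cf.centroid(), cf.radius())
```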
NASA Astrophysics Data System (ADS)
Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro
It is necessary to monitor daily health condition to prevent stress syndrome. In this study, we propose a method for assessing mental and physiological condition, such as work stress or relaxation, using heart rate variability in real time and continuously. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated for assessing mental and physiological condition. In this method, 20 heart beats were used to calculate these indexes, which were updated at every beat interval. Three conditions (sitting at rest, performing mental arithmetic, and watching a relaxation movie) were assessed using the proposed algorithm. The assessment accuracies were 71.9% and 55.8% for performing mental arithmetic and watching the relaxation movie, respectively. Because the mental and physiological condition is assessed using only the 20 most recent heart beats, this method can be considered a real-time assessment method.
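A hedged sketch of the two indexes described above, computed over the most recent 20 beats: mean instantaneous heart rate, and the ratio of extreme points (sign changes of the first difference of the RR series) to the number of beats. The window size follows the abstract; the simulated RR intervals are placeholders.

```python
# HR and NEP-ratio over the last 20 beats of an RR-interval series (ms).
import numpy as np

def hr_and_nep_ratio(rr_ms):
    window = np.asarray(rr_ms[-20:], dtype=float)   # most recent 20 beats
    hr = 60000.0 / window.mean()                    # beats per minute
    d = np.diff(window)
    extrema = np.sum(np.sign(d[1:]) * np.sign(d[:-1]) < 0)  # local max/min count
    return hr, extrema / len(window)

rr = 800 + 50 * np.random.randn(60)                 # simulated RR intervals, ms
print(hr_and_nep_ratio(rr))
```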
Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.
Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian
2014-01-01
In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
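A hedged sketch of the greedy incremental flavor of spherical-code construction mentioned above: repeatedly add, from a candidate pool of unit vectors, the one maximizing the minimal angular separation from those already chosen, identifying antipodal directions as is usual for diffusion gradients. This illustrates the idea only; it is not the paper's MILP solver.

```python
# Greedy discrete spherical code: maximize minimal angular separation.
import numpy as np

def greedy_spherical_code(candidates, k):
    chosen = [0]                                    # arbitrary starting candidate
    for _ in range(k - 1):
        cos = np.abs(candidates @ candidates[chosen].T)  # |dot| merges antipodes
        min_sep = np.arccos(np.clip(cos, -1, 1)).min(axis=1)
        min_sep[chosen] = -np.inf                   # never re-pick a chosen point
        chosen.append(int(np.argmax(min_sep)))
    return candidates[chosen]

pool = np.random.randn(500, 3)
pool /= np.linalg.norm(pool, axis=1, keepdims=True)
print(greedy_spherical_code(pool, 30).shape)
```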
NASA Technical Reports Server (NTRS)
Mehra, R. K.; Washburn, R. B.; Sajan, S.; Carroll, J. V.
1979-01-01
A hierarchical real-time algorithm for optimal three-dimensional control of aircraft is described. Systematic methods are developed for real-time computation of nonlinear feedback controls by means of singular perturbation theory. The results are applied to a six-state, three-control-variable, point-mass model of an F-4 aircraft. Nonlinear feedback laws are presented for computing the optimal control of throttle, bank angle, and angle of attack. Real-time capability is assessed on a TI 9900 microcomputer. The breakdown of the singular perturbation approximation near the terminal point is examined, and continuation methods are considered for obtaining exact optimal trajectories starting from the singular perturbation solutions.
32 CFR 634.44 - The traffic point system.
Code of Federal Regulations, 2010 CFR
2010-07-01
§ 634.44 The traffic point system. The traffic point system provides a uniform administrative device to...
NASA Technical Reports Server (NTRS)
Rummel, R.
1975-01-01
Integral formulas in the parameter domain are used instead of a representation by spherical harmonics. The neglected regions will cause a truncation error. The application of the discrete form of the integral equations connecting the satellite observations with surface gravity anomalies is discussed in comparison with the least squares prediction method. One critical point of downward continuation is the proper choice of the boundary surface. Practical feasibilities are in conflict with theoretical considerations. The properties of different approaches for this question are analyzed.
Have the temperature time series a structural change after 1998?
NASA Astrophysics Data System (ADS)
Werner, Rolf; Valev, Dimitare; Danov, Dimitar
2012-07-01
The global and hemispheric temperature GISS and HadCRUT3 time series were analysed for structural changes. We postulate continuity of the fitted temperature function of time across segment boundaries. The slopes are calculated for a sequence of segments limited by time thresholds. We used a standard method, restricted linear regression with dummy variables. We performed the calculations and tests for different numbers of thresholds. The thresholds are searched continuously within specified time intervals. The F-statistic is used to obtain the time points of the structural changes.
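A hedged sketch of the continuity-restricted piecewise regression described above: a trend with slope changes at thresholds t1, t2 can be written as y = b0 + b1*t + c1*max(t - t1, 0) + c2*max(t - t2, 0) + e, which is continuous at the thresholds by construction. The thresholds are fixed here for illustration; the paper searches them over time intervals.

```python
# Continuous piecewise-linear (segmented) least-squares fit via hinge basis.
import numpy as np

def segmented_fit(t, y, thresholds):
    cols = [np.ones_like(t), t] + [np.maximum(t - th, 0.0) for th in thresholds]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta        # intercept, base slope, then slope increments per threshold

t = np.arange(1950, 2013, dtype=float)
y = 0.005 * (t - 1950) + 0.02 * np.maximum(t - 1998, 0) + 0.05 * np.random.randn(t.size)
print(segmented_fit(t, y, thresholds=[1998.0]))
```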
Adaptive model reduction for continuous systems via recursive rational interpolation
NASA Technical Reports Server (NTRS)
Lilly, John H.
1994-01-01
A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
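A hedged sketch of a moving (sliding) DFT of the kind the abstract relies on: when a new sample arrives, each monitored frequency bin is updated in O(1) from the incoming and outgoing samples rather than recomputing a full transform. Window length and bins below are illustrative.

```python
# Sliding DFT: X_k(n) = (X_k(n-1) + x(n) - x(n-N)) * exp(j*2*pi*k/N).
import numpy as np

class MovingDFT:
    def __init__(self, n, bins):
        self.n, self.bins = n, np.asarray(bins)
        self.buf = np.zeros(n)                       # circular sample buffer
        self.X = np.zeros(len(bins), dtype=complex)  # monitored bins
        self.twiddle = np.exp(2j * np.pi * self.bins / n)
        self.i = 0

    def update(self, x_new):
        x_old = self.buf[self.i]
        self.buf[self.i] = x_new
        self.i = (self.i + 1) % self.n
        self.X = (self.X + (x_new - x_old)) * self.twiddle
        return self.X

mdft = MovingDFT(n=64, bins=[1, 5, 9])
for sample in np.sin(2 * np.pi * 5 * np.arange(256) / 64):
    spectrum = mdft.update(sample)
print(np.abs(spectrum))   # bin 5 dominates for this test tone
```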
Method of Estimating Continuous Cooling Transformation Curves of Glasses
NASA Technical Reports Server (NTRS)
Zhu, Dongmei; Zhou, Wancheng; Ray, Chandra S.; Day, Delbert E.
2006-01-01
A method is proposed for estimating the critical cooling rate and continuous cooling transformation (CCT) curve from isothermal TTT data of glasses. The critical cooling rates and CCT curves estimated through this method for a group of lithium disilicate glasses containing different amounts of Pt as a nucleating agent are compared with the experimentally measured values. By analysis of the experimental and calculated data of the lithium disilicate glasses, a simple relationship was found between the amount crystallized in the glasses during continuous cooling, X, and the temperature of undercooling, ΔT: X = A R^(-4) exp(B ΔT), where ΔT is the temperature difference between the theoretical melting point of the glass composition and the temperature in question, R is the cooling rate, and A and B are constants. The relation between the amount of crystallization during continuous cooling and isothermal hold can be expressed as X_cT / X_iT = (4/B)^4 ΔT^(-4), where X_cT stands for the amount crystallized in a glass during continuous cooling for a time t when the temperature reaches T, and X_iT is the amount crystallized during an isothermal hold at temperature T for a time t.
Severi, Mirko; Becagli, Silvia; Traversi, Rita; Udisti, Roberto
2015-11-17
Recently, the increasing interest in the understanding of global climatic changes and in natural processes related to climate has driven the development and improvement of new analytical methods for the analysis of environmental samples. The determination of trace chemical species is a useful tool in paleoclimatology, and the techniques for the analysis of ice cores have evolved during the past few years from laborious measurements on discrete samples to continuous techniques allowing higher temporal resolution, higher sensitivity and, above all, higher throughput. Two fast ion chromatographic (FIC) methods are presented. The first method was able to measure Cl(-), NO3(-) and SO4(2-) in a melter-based continuous flow system, separating the three analytes in just 1 min. The second method (called Ultra-FIC) was able to perform a single chromatographic analysis in just 30 s, and the resulting sampling resolution was 1.0 cm with a typical melting rate of 4.0 cm min(-1). Both methods combine the accuracy, precision, and low detection limits of ion chromatography with the enhanced speed and high depth resolution of continuous melting systems. Both methods have been tested and validated with the analysis of several hundred meters of different ice cores. In particular, the Ultra-FIC method was used to reconstruct the high-resolution SO4(2-) profile of the last 10,000 years for the EDML ice core, allowing the counting of annual layers, which represents a key point in dating these kinds of natural archives.
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
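Once the knots and data parameterization are fixed, the remaining subproblem in the scheme above is linear least squares; as a hedged sketch, SciPy's make_lsq_spline solves that system directly for the control points. Knots and data below are illustrative.

```python
# Least-squares B-spline fit with a fixed knot vector and parameterization.
import numpy as np
from scipy.interpolate import make_lsq_spline

deg = 3
interior = [0.3, 0.5, 0.7]
knots = np.concatenate([[0.0] * (deg + 1), interior, [1.0] * (deg + 1)])

u = np.linspace(0, 1, 100)     # data parameterization (assumed given)
y = np.sin(2 * np.pi * u)      # data points to fit

spl = make_lsq_spline(u, y, knots, k=deg)
print(spl.c)                   # fitted control points
```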
Inertial Pointing and Positioning System
NASA Technical Reports Server (NTRS)
Yee, Robert (Inventor); Robbins, Fred (Inventor)
1998-01-01
An inertial pointing and control system and method for pointing to a designated target with known coordinates from a platform to provide accurate position, steering, and command information. The system continuously receives GPS signals and corrects Inertial Navigation System (INS) dead-reckoning or drift errors. An INS is mounted directly on a pointing instrument, rather than in a remote location on the platform, for monitoring the terrestrial position and instrument attitude, and for pointing the instrument at designated celestial targets or ground-based landmarks. As a result, the pointing instrument and the INS move independently in inertial space from the platform, since the INS is decoupled from the platform. Another important characteristic of the present system is that selected INS measurements are combined with predefined coordinate transformation equations and control logic algorithms under computer control in order to generate inertial pointing commands for the pointing instrument. More specifically, the computer calculates the desired instrument angles (Phi, Theta, Psi), which are then compared to the Euler angles measured by the instrument-mounted INS to form the pointing command error angles from the compared difference.
Segmenting Continuous Motions with Hidden Semi-Markov Models and Gaussian Processes
Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide
2017-01-01
Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods.
General design method for three-dimensional potential flow fields. 1: Theory
NASA Technical Reports Server (NTRS)
Stanitz, J. D.
1980-01-01
A general design method was developed for steady, three dimensional, potential, incompressible or subsonic-compressible flow. In this design method, the flow field, including the shape of its boundary, was determined for arbitrarily specified, continuous distributions of velocity as a function of arc length along the boundary streamlines. The method applied to the design of both internal and external flow fields, including, in both cases, fields with planar symmetry. The analytic problems associated with stagnation points, closure of bodies in external flow fields, and prediction of turning angles in three dimensional ducts were reviewed.
Transient pressure analysis of fractured well in bi-zonal gas reservoirs
NASA Astrophysics Data System (ADS)
Zhao, Yu-Long; Zhang, Lie-Hui; Liu, Yong-hui; Hu, Shu-Yong; Liu, Qi-Guo
2015-05-01
For a hydraulically fractured well, evaluating the properties of the fracture and the formation is always a tough job, and it is very complex to do with conventional methods, especially for a partially penetrating fractured well. Although the source function is a very powerful tool for analyzing the transient pressure of complex well structures, corresponding reports on gas reservoirs are rare. In this paper, continuous point source functions in anisotropic reservoirs are derived on the basis of source function theory, the Laplace transform method, and the Duhamel principle. Applying the construction method, continuous point source functions are obtained for a bi-zonal gas reservoir with closed upper and lower boundaries. Subsequently, physical models and transient pressure solutions are developed for fully and partially penetrating fractured vertical wells in this reservoir. Type curves of dimensionless pseudo-pressure and its derivative as functions of dimensionless time are plotted by a numerical inversion algorithm, and the flow periods and sensitive factors are analyzed. The source functions and fractured-well solutions have both theoretical and practical application in well test interpretation for such gas reservoirs, especially for wells with stimulated reservoir volume created by massive hydraulic fracturing in unconventional gas reservoirs, which can always be described with the composite model.
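The type curves above come from numerically inverting Laplace-domain solutions; the abstract does not name the scheme, so as a hedged stand-in the sketch below uses the Stehfest algorithm, a common choice in well-test analysis, checked here on a known transform pair.

```python
# Stehfest numerical inversion of a Laplace transform F(s) at time t.
import math

def stehfest_invert(F, t, N=12):
    """Invert F(s) at time t using N (even) Stehfest weights."""
    ln2 = math.log(2.0)
    total = 0.0
    for i in range(1, N + 1):
        v = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            v += (k ** (N // 2) * math.factorial(2 * k)) / (
                math.factorial(N // 2 - k) * math.factorial(k)
                * math.factorial(k - 1) * math.factorial(i - k)
                * math.factorial(2 * k - i))
        total += (-1) ** (N // 2 + i) * v * F(i * ln2 / t)
    return total * ln2 / t

# Check on F(s) = 1/(s+1), whose exact inverse is exp(-t).
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0), math.exp(-1.0))
```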
Stevens, Andrew W.; Lacy, Jessica R.; Finlayson, David P.; Gelfenbaum, Guy
2008-01-01
Seagrass at two sites in northern Puget Sound, Possession Point and nearby Browns Bay, was mapped using both a single-beam sonar and underwater video camera. The acoustic and underwater video data were compared to evaluate the accuracy of acoustic estimates of seagrass cover. The accuracy of the acoustic method was calculated for three classifications of seagrass observed in underwater video: bare (no seagrass), patchy seagrass, and continuous seagrass. Acoustic and underwater video methods agreed in 92 percent and 74 percent of observations made in bare and continuous areas, respectively. However, in patchy seagrass, the agreement between acoustic and underwater video was poor (43 percent). The poor agreement between the two methods in areas with patchy seagrass is likely because the two instruments were not precisely colocated. The distribution of seagrass at the two sites differed both in overall percent vegetated and in the distribution of percent cover versus depth. On the basis of acoustic data, seagrass inhabited 0.29 km2 (19 percent of total area) at Possession Point and 0.043 km2 (5 percent of total area) at the Browns Bay study site. The depth distribution at the two sites was markedly different. Whereas the majority of seagrass at Possession Point occurred between -0.5 and -1.5 m MLLW, most seagrass at Browns Bay occurred at a greater depth, between -2.25 and -3.5 m MLLW. Further investigation of the anthropogenic and natural factors causing these differences in distribution is needed.
Modeling abundance using hierarchical distance sampling
Royle, Andy; Kery, Marc
2016-01-01
In this chapter, we provide an introduction to classical distance sampling ideas for point and line transect data, and for continuous and binned distance data. We introduce the conditional and the full likelihood, and we discuss Bayesian analysis of these models in BUGS using the idea of data augmentation, which we discussed in Chapter 7. We then extend the basic ideas to the problem of hierarchical distance sampling (HDS), where we have multiple point or transect sample units in space (or possibly in time). The benefit of HDS in practice is that it allows us to directly model spatial variation in population size among these sample units. This is a preeminent concern of most field studies that use distance sampling methods, but it is not a problem that has received much attention in the literature. We show how to analyze HDS models in both the unmarked package and in the BUGS language for point and line transects, and for continuous and binned distance data. We provide a case study of HDS applied to a survey of the island scrub-jay on Santa Cruz Island, California.
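As a hedged sketch of the core detection-function computation distance sampling builds on: with a half-normal detection function g(d) = exp(-d^2 / (2*sigma^2)) and distances uniform in area around a point, the average detectability within radius B follows by integration. The sigma and B values are illustrative.

```python
# Average detection probability within radius B for a half-normal g(d).
import numpy as np
from scipy.integrate import quad

def pbar_point(sigma, B):
    # E[detect] = integral of g(d) * 2*pi*d dd over [0, B], divided by pi*B^2
    integrand = lambda d: np.exp(-d**2 / (2 * sigma**2)) * 2 * np.pi * d
    val, _ = quad(integrand, 0.0, B)
    return val / (np.pi * B**2)

print(pbar_point(sigma=40.0, B=100.0))   # average detectability in the plot
```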
Stationkeeping of Lissajous Trajectories in the Earth-Moon System with Applications to ARTEMIS
NASA Technical Reports Server (NTRS)
Folta, D. C.; Pavlak, T. A.; Howell, K. C.; Woodard, M. A.; Woodfork, D. W.
2010-01-01
In the last few decades, several missions have successfully exploited trajectories near the Sun-Earth L1 and L2 libration points. Recently, the collinear libration points in the Earth-Moon system have emerged as locations with immediate application. Most libration point orbits, in any system, are inherently unstable and must be controlled. To this end, several stationkeeping strategies are considered for application to ARTEMIS. Two approaches are examined to investigate the stationkeeping problem in this regime and the specific options available for ARTEMIS given the mission and vehicle constraints: (1) a baseline orbit-targeting approach controls the vehicle to remain near a nominal trajectory, and a related global optimum search method searches all possible maneuver angles to determine an optimal angle and magnitude; and (2) an orbit continuation method, with various formulations, determines maneuver locations and minimizes costs. Initial results indicate that consistent stationkeeping costs can be achieved with both approaches and that the costs are reasonable. These methods are then applied to Lissajous trajectories representing a baseline ARTEMIS libration orbit trajectory.
Feature-based registration of historical aerial images by Area Minimization
NASA Astrophysics Data System (ADS)
Nagarajan, Sudhagar; Schenk, Toni
2016-06-01
The registration of historical images plays a significant role in assessing changes in land topography over time. By comparing historical aerial images with recent data, geometric changes that have taken place over the years can be quantified. However, the lack of ground control information and precise camera parameters has limited scientists' ability to reliably incorporate historical images into change detection studies. Other limitations include the methods of determining identical points between recent and historical images, which has proven to be a cumbersome task due to continuous land cover changes. Our research demonstrates a method of registering historical images using Time Invariant Line (TIL) features. TIL features are different representations of the same line features in multi-temporal data without explicit point-to-point or straight line-to-straight line correspondence. We successfully determined the exterior orientation of historical images by minimizing the area formed between corresponding TIL features in recent and historical images. We then tested the feasibility of the approach with synthetic and real data and analyzed the results. Based on our analysis, this method shows promise for long-term 3D change detection studies.
Study of noise level at Raja Haji Fisabilillah airport in Tanjung Pinang, Riau Islands
NASA Astrophysics Data System (ADS)
Nofriandi, H.; Wijayanti, A.; Fachrul, M. F.
2018-01-01
Raja Haji Fisabilillah International Airport is a central airport located in Kampung Mekarsari, Pinang Kencana District, Tanjung Pinang City, Riau Islands Province. The aims of this study are to determine the noise level at the airport and to calculate the noise index using the WECPNL (Weighted Equivalent Continuous Perceived Noise Level) method. Following recommendations from the International Civil Aviation Organization (ICAO), measurement points were placed at a distance of 300 meters parallel to the runway, as well as at 1000, 2000, 3000, and 4000 meters from the runway end. The result at point 3 was 75.30 dB(A). Based on the noise intensity results, the Boeing 737-500 produced the highest level in the airport's surrounding area, at 95.24 dB(A), and the lowest level was at point 12, with a value of 37.24 dB(A). The noise contour map shows three noise areas; point 3, at 75.30 dB(A), falls in the second-level area and complies with the required standard.
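A heavily hedged sketch of the WECPNL index in one commonly quoted simplified form, WECPNL = power-averaged peak dB(A) + 10*log10(N) - 27, with N weighting evening flights by 3 and night flights by 10; the exact constants and time-band weights are assumptions here and should be checked against the standard actually applied in the study.

```python
# Simplified WECPNL from per-flight peak dB(A) levels and flight counts.
import math

def wecpnl(peak_levels_dba, n_day, n_evening, n_night):
    power_mean = 10 * math.log10(sum(10 ** (L / 10) for L in peak_levels_dba)
                                 / len(peak_levels_dba))   # energy-mean peak level
    n_weighted = n_day + 3 * n_evening + 10 * n_night      # assumed weights
    return power_mean + 10 * math.log10(n_weighted) - 27

print(wecpnl([92.0, 95.2, 90.5], n_day=20, n_evening=4, n_night=1))
```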
Supervision in Rural Schools: A Report on Findings and Practices. Bulletin, 1955, No. 11
ERIC Educational Resources Information Center
Franseth, Jane
1955-01-01
The function of supervision is to help schools do their work better. Systematic appraisal of objectives and procedures in supervision is continually pointing the way to more effective methods of accomplishing this purpose. Because educators have become dissatisfied with the outcomes of this kind of supervision, many of them are seeking more…
Grandmothers and Caregiving to Grandchildren: Continuity, Change, and Outcomes over 24 Months
ERIC Educational Resources Information Center
Musil, Carol M.; Gordon, Nahida L.; Warner, Camille B.; Zauszniewski, Jaclene A.; Standing, Theresa; Wykle, May
2011-01-01
Purpose: Transitions in caregiving, such as becoming a primary caregiver to grandchildren or having adult children and grandchildren move in or out, may affect the well-being of the grandmother. Design and Methods: This report describes caregiving patterns at 3 time points over 24 months in a sample of 485 Ohio grandmothers and examines the…
Compressed storage of arterial pressure waveforms by selection of significant points.
de Graaf, P M; van Goudoever, J; Wesseling, K H
1997-09-01
Continuous records of arterial blood pressure can be obtained non-invasively with Finapres, even for periods of 24 hours. Increasingly, storage of such records is done digitally, requiring large disc capacities. It is therefore necessary to find methods to store blood pressure waveforms in compressed form. The method of selection of significant points known from ECG data compression is adapted. Points are selected as significant wherever the first derivative of the pressure wave changes sign. As a second stage recursive partitioning is used to select additional points such that the difference between the selected points, linearly interpolated, and the original curve remains below a maximum. This method is tested on finger arterial pressure waveform epochs of 60 s duration taken from 32 patients with a wide range of blood pressures and heart rates. An average compression factor of 4.6 (SD 1.0) is obtained when accepting a maximum difference of 3 mmHg. The root mean squared error is 1 mmHg averaged over the group of patient waveforms. Clinically relevant parameters such as systolic, diastolic and mean pressure are reproduced with an offset error of less than 0.5 (0.3) mmHg and scatter less than 0.6 (0.1) mmHg. It is concluded that a substantial compression factor can be achieved with a simple and computationally fast algorithm and little deterioration in waveform quality and pressure level accuracy.
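A hedged sketch of the two-stage selection described above: keep points where the first derivative changes sign, then recursively add points wherever linear interpolation between selected points deviates from the original wave by more than the tolerance (3 mmHg in the paper). The test waveform below is synthetic.

```python
# Significant-point compression: extrema selection plus recursive partitioning.
import numpy as np

def significant_points(y, tol=3.0):
    d = np.diff(y)
    idx = {0, len(y) - 1} | {i + 1 for i in range(len(d) - 1)
                             if np.sign(d[i]) != np.sign(d[i + 1])}

    def refine(a, b):
        if b - a < 2:
            return
        seg = np.interp(np.arange(a, b + 1), [a, b], [y[a], y[b]])
        err = np.abs(seg - y[a:b + 1])
        if err.max() > tol:
            m = a + int(err.argmax())    # worst-approximated interior sample
            idx.add(m)
            refine(a, m)
            refine(m, b)

    for a, b in zip(sorted(idx)[:-1], sorted(idx)[1:]):
        refine(a, b)
    return sorted(idx)

wave = 100 + 20 * np.sin(np.linspace(0, 6 * np.pi, 600))  # stand-in pressure wave
keep = significant_points(wave)
print(f"compression factor: {len(wave) / len(keep):.1f}")
```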
Chen, Ying-ping; Chen, Chao-Hong
2010-01-01
An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back end optimization engine. As a result, the proposed framework can be considered as a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions on which ECGA with SoD and ECGA with two well-known discretization methods: the fixed-height histogram (FHH) and the fixed-width histogram (FWH) are compared; (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence.
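A hedged sketch of the split-on-demand discretization described above: an interval splits (at a random point) whenever it contains more search points than a threshold, and nonempty leaf intervals receive integer codes. Details such as the decreasing threshold schedule are omitted here.

```python
# Split-on-demand style adaptive discretization of a continuous interval.
import numpy as np

def split_on_demand(points, lo, hi, threshold):
    inside = points[(points >= lo) & (points < hi)]
    if len(inside) <= threshold or hi - lo < 1e-9:
        return [(lo, hi)] if len(inside) else []   # keep only nonempty leaves
    mid = np.random.uniform(lo, hi)                # random split point
    return (split_on_demand(points, lo, mid, threshold)
            + split_on_demand(points, mid, hi, threshold))

pts = np.random.rand(100)
intervals = split_on_demand(pts, 0.0, 1.0, threshold=10)
codes = {iv: c for c, iv in enumerate(intervals)}  # integer codes per interval
print(len(intervals))
```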
NASA Astrophysics Data System (ADS)
Rueda, Sylvia; Udupa, Jayaram K.
2011-03-01
Landmark based statistical object modeling techniques, such as the Active Shape Model (ASM), have proven useful in medical image analysis. Identification of the same homologous set of points in a training set of object shapes is the most crucial step in ASM, which has encountered challenges such as (C1) defining and characterizing landmarks; (C2) ensuring homology; (C3) generalizing to n > 2 dimensions; (C4) achieving practical computations. In this paper, we propose a novel global-to-local strategy that attempts to address C3 and C4 directly and works in R^n. The 2D version starts from two initial corresponding points determined in all training shapes via a method α, and subsequently subdivides the shapes into connected boundary segments by a line determined by these points. A shape analysis method β is applied on each segment to determine a landmark on the segment. This point introduces more pairs of points, and the lines defined by them are used to further subdivide the boundary segments. This recursive boundary subdivision (RBS) process continues simultaneously on all training shapes, maintaining synchrony of the level of recursion, and thereby keeping correspondence among generated points automatically by the correspondence of the homologous shape segments in all training shapes. The process terminates when no subdividing lines remain for which method β indicates that a point can be selected on the associated segment. Examples of α and β are presented based on (a) distance; (b) Principal Component Analysis (PCA); and (c) the novel concept of virtual landmarks.
Long Term, Operational Monitoring Of Enhanced Oil Recovery In Harsh Environments With INSAR
NASA Astrophysics Data System (ADS)
Sato, S.; Henschel, M. D.
2012-01-01
Since 2004, MDA GSI has provided ground deformation measurements for an oil field in northern Alberta, Canada using InSAR technology. During this period, the monitoring has reliably shown the slow rise of the oil field due to enhanced oil recovery operations. The InSAR monitoring solution is essentially based on the observation of point and point-like targets in the field. Ground conditions in the area are almost continuously changing (in their reflectivity characteristics), making it difficult to observe coherent patterns from the ground. The extended duration of the oil operations has allowed us to continue InSAR monitoring and transition from RADARSAT-1 to RADARSAT-2. RADARSAT-2, with its enhanced resolution capability, has provided more targets of opportunity, as identified by a differential coherence method. This poster provides an overview of the long-term monitoring of the oil field in northern Alberta, Canada.
Boundary control of elliptic solutions to enforce local constraints
NASA Astrophysics Data System (ADS)
Bal, G.; Courdurier, M.
We present a constructive method to devise boundary conditions for solutions of second-order elliptic equations so that these solutions satisfy specific qualitative properties such as: (i) the norm of the gradient of one solution is bounded from below by a positive constant in the vicinity of a finite number of prescribed points; (ii) the determinant of gradients of n solutions is bounded from below in the vicinity of a finite number of prescribed points. Such constructions find applications in recent hybrid medical imaging modalities. The methodology is based on starting from a controlled setting in which the constraints are satisfied and continuously modifying the coefficients in the second-order elliptic equation. The boundary condition is evolved by solving an ordinary differential equation (ODE) defined via appropriate optimality conditions. Unique continuations and standard regularity results for elliptic equations are used to show that the ODE admits a solution for sufficiently long times.
Radar studies of the planets. [radar measurements of lunar surface, Mars, Mercury, and Venus
NASA Technical Reports Server (NTRS)
Ingalls, R. P.; Pettengill, G. H.; Rogers, A. E. E.; Sebring, P. B. (Editor); Shapiro, I. I.
1974-01-01
The radar measurements phase of the lunar studies, involving reflectivity and topographic mapping of the visible lunar surface, was ended in December 1972, but studies of the data and production of maps have continued. This work was supported by the Manned Spacecraft Center, Houston. Topographic mapping of the equatorial regions of Mars has been carried out during the period of each opposition since that of 1967. The method comprised extended, precise travel-time measurements to a small area centered on the subradar point. As measurements continued, planetary motions caused this point to sweep out extensive areas in both latitude and longitude, permitting the development of a fairly extensive topographic map of the equatorial region. Radar observations of Mercury and Venus have also been made over the past few years. Refinements of planetary motions, reflectivity maps, and determinations of rotation rates have resulted.
Dune advance into a coastal forest, equatorial Brazil: A subsurface perspective
NASA Astrophysics Data System (ADS)
Buynevich, Ilya V.; Filho, Pedro Walfir M. Souza; Asp, Nils E.
2010-06-01
A large active parabolic dune along the coast of Pará State, northern Brazil, was analyzed using aerial photography and imaged with high-resolution ground-penetrating radar (GPR) to map the subsurface facies architecture and point-source anomalies. Most high-amplitude (8-10 dB) subsurface anomalies are correlated with partially buried mangrove trees along the leading edge (slipface) of the advancing dune. Profiles along a 200-m long basal stoss side of the dune reveal 66 targets, most of which lie below the water table and are thus inaccessible by other methods. Signal amplitudes of point-source anomalies are substantially higher than those associated with the reflections from continuous subsurface features (water table, sedimentary layers). When complemented with exposures and excavations, GPR provides the best means of rapid continuous imaging of the geological record of complex interactions between vegetation and aeolian deposition.
NASA Astrophysics Data System (ADS)
Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia
2018-05-01
Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.
High-Performance Polyimide Powder Coatings
NASA Technical Reports Server (NTRS)
2008-01-01
Much of the infrastructure at Kennedy Space Center and other NASA sites has been subjected to outside weathering effects for more than 40 years. Because much of this infrastructure has metallic surfaces, considerable effort is continually devoted to developing methods to minimize the effects of corrosion on these surfaces. These efforts are especially intense at KSC, where offshore salt spray and exhaust from Solid Rocket Boosters accelerate corrosion. Coatings of various types have traditionally been the choice for minimizing corrosion, and improved corrosion control methods are constantly being researched. Recent work at KSC on developing an improved method for repairing Kapton (polyimide)-based electrical wire insulation has identified polyimides with much lower melting points than traditional polyimides used for insulation. These lower melting points and the many other outstanding physical properties of polyimides (thermal stability, chemical resistance, and electrical properties) led us to investigate whether they could be used in powder coatings.
Material point method modeling in oil and gas reservoirs
Vanderheyden, William Brian; Zhang, Duan
2016-06-28
A computer system and method of simulating the behavior of an oil and gas reservoir, including changes in the margins of frangible solids. A system of equations, including state equations such as momentum and conservation laws such as mass conservation and volume fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point method technique numerically solves the system of discretized equations to derive the fluid flow at each of a plurality of mesh nodes in the modeled volume, and the velocity at each of a plurality of particles representing the frangible material in the modeled volume. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume scale simulations.
The Relation of Finite Element and Finite Difference Methods
NASA Technical Reports Server (NTRS)
Vinokur, M.
1976-01-01
Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variables, while finite element methods emphasize the discretization of the dependent variables (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.
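One concrete instance of this relationship can be shown in a few lines. The sketch below is illustrative only and is not drawn from the paper: for the 1D Poisson problem -u'' = f on a uniform grid with homogeneous Dirichlet ends, the stiffness matrix assembled from piecewise-linear (hat) elements reproduces the classical three-point difference stencil up to a factor of h.

```python
import numpy as np

# Illustrative sketch (not from the paper): compare the finite difference
# operator for -u'' with the finite element stiffness matrix assembled
# from piecewise-linear basis functions on the same uniform grid.

n = 6                       # interior nodes
h = 1.0 / (n + 1)

# Finite difference: local Taylor-based approximation of -u''.
A_fd = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
        - np.diag(np.ones(n - 1), -1)) / h**2

# Finite element: assemble the global stiffness matrix from per-element
# contributions of the global piecewise (hat) functional approximations.
A_fe = np.zeros((n + 2, n + 2))
k_local = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # element stiffness
for e in range(n + 1):                                # one element per interval
    A_fe[e:e + 2, e:e + 2] += k_local
A_fe = A_fe[1:-1, 1:-1]                               # apply Dirichlet BCs

# FE assembles the integral (weak) form, so it matches h * A_fd here.
print(np.allclose(A_fe, h * A_fd))                    # True
```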
End points and assessments in esthetic dental treatment.
Ishida, Yuichi; Fujimoto, Keiko; Higaki, Nobuaki; Goto, Takaharu; Ichikawa, Tetsuo
2015-10-01
There are two key considerations for successful esthetic dental treatments. This article systematically describes the two key considerations: the end points of esthetic dental treatments and assessments of esthetic outcomes, which are also important for acquiring clinical skill in esthetic dental treatments. The end point and assessment of esthetic dental treatment were discussed through literature reviews and clinical practices. Before designing a treatment plan, the end point of dental treatment should be established. The section entitled "End point of esthetic dental treatment" discusses treatments for maxillary anterior teeth and the restoration of facial profile with prostheses. The process of assessing treatment outcomes entitled "Assessments of esthetic dental treatment" discusses objective and subjective evaluation methods. Practitioners should reach an agreement regarding desired end points with patients through medical interviews, and continuing improvements and developments of esthetic assessments are required to raise the therapeutic level of esthetic dental treatments.
40 CFR 142.57 - Bottled water, point-of-use, and point-of-entry devices.
Code of Federal Regulations, 2014 CFR
2014-07-01
Title 40, Protection of Environment; § 142.57, Bottled water, point-of-use, and point-of-entry devices. Environmental Protection Agency (continued), Water Programs (continued), National Primary Drinking Water Regulations Implementation, Exemptions Issued by the Administrator.
Froud, Robert; Abel, Gary
2014-01-01
Background: Receiver Operating Characteristic (ROC) curves are being used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient-reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Although methodologists agree that the two should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives: We aimed to compare the different approaches used with a new approach, exploring the extent to which the methods choose different thresholds, and considering the effect of differences on conclusions in responder analyses. Methods: Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach that is based on the addition of the sums of squares of 1-sensitivity and 1-specificity. Results: There can be divergence in the thresholds chosen by different estimators. The cut-point selected by different estimators depends on the relationship between the cut-points in ROC space and the different contours described by the estimators. In particular, asymmetry and the number of possible cut-points affect threshold selection. Conclusion: Choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the cut-point chosen by the sum-of-squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. PMID:25474472
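A minimal sketch of the proposed estimator, on simulated change scores rather than the trial data: compute sensitivity and specificity at every candidate cut-point and select the one minimizing (1-sensitivity)^2 + (1-specificity)^2, i.e. the ROC point closest to the top-left corner; a Youden-index choice is included for contrast.

```python
import numpy as np

# Sketch with simulated data (not the trial data): sum-of-squares MIC
# cut-point versus the Youden-index cut-point.

rng = np.random.default_rng(0)
improved = rng.normal(8, 4, 300)      # change scores of responders
stable = rng.normal(2, 4, 300)        # change scores of non-responders
scores = np.concatenate([improved, stable])

cuts = np.unique(scores)
sens = np.array([np.mean(improved >= c) for c in cuts])
spec = np.array([np.mean(stable < c) for c in cuts])

ss = (1 - sens) ** 2 + (1 - spec) ** 2     # sum-of-squares criterion
youden = sens + spec - 1                    # Youden index, for contrast

print("sum-of-squares cut-point:", cuts[np.argmin(ss)])
print("Youden cut-point:        ", cuts[np.argmax(youden)])
```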
Robust Curb Detection with Fusion of 3D-Lidar and Camera Data
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-01-01
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
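The path-linking step lends itself to a compact dynamic-programming sketch. The toy example below uses simulated candidates and costs of our own (not the authors' implementation): it selects one curb candidate per image row so as to minimize a unary detection cost plus a smoothness penalty on horizontal jumps, mirroring the Markov-chain consistency model solved by dynamic programming.

```python
import numpy as np

# Toy sketch: link one curb candidate per row into the cheapest continuous
# path. Unary costs and candidate columns are simulated.

rng = np.random.default_rng(1)
rows, ncand = 8, 5
xpos = np.sort(rng.uniform(0, 100, (rows, ncand)), axis=1)  # candidate columns
unary = rng.uniform(0, 1, (rows, ncand))                    # detection cost

lam = 0.1                                   # weight of the continuity term
cost = unary[0].copy()
back = np.zeros((rows, ncand), dtype=int)
for r in range(1, rows):
    # Transition cost between consecutive rows: penalize large jumps.
    trans = lam * np.abs(xpos[r][None, :] - xpos[r - 1][:, None])
    total = cost[:, None] + trans           # best predecessor per candidate
    back[r] = np.argmin(total, axis=0)
    cost = total[back[r], np.arange(ncand)] + unary[r]

# Backtrack the optimal curb path.
path = [int(np.argmin(cost))]
for r in range(rows - 1, 0, -1):
    path.append(int(back[r, path[-1]]))
path.reverse()
print([float(xpos[r, i]) for r, i in enumerate(path)])      # curb columns
```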
Bounded parametric control of plane motions of space tethered system
NASA Astrophysics Data System (ADS)
Bezglasnyi, S. P.; Mukhametzyanova, A. A.
2018-05-01
This paper is focused on the problem of control of plane motions of a space tethered system (STS). The STS is modeled as a heavy rod with two point masses fixed on the rod; a third point mass can move along the rod. The control is realized as a continuous change of the distance from the centre of mass of the tethered system to the movable mass. New bounded control laws for the excitation and damping processes are constructed. Diametric reorientation and gravitational stabilization of the STS to the local vertical are obtained. The problem is solved by the method of Lyapunov functions from the classical theory of stability. The theoretical results are confirmed by numerical calculations.
Bonetti, Marco; Pagano, Marcello
2005-03-15
The topic of this paper is the distribution of the distance between two points distributed independently in space. We illustrate the use of this interpoint distance distribution to describe the characteristics of a set of points within some fixed region. The properties of its sample version, and thus the inference about this function, are discussed both in the discrete and in the continuous setting. We illustrate its use in the detection of spatial clustering by application to a well-known leukaemia data set, and report on the results of a simulation experiment designed to study the power characteristics of the methods within that study region and in an artificial homogenous setting.
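A minimal sketch of the sample version of this function, on simulated data of our own: the empirical cumulative distribution of all pairwise distances cleanly separates a clustered point pattern from a homogeneous one.

```python
import numpy as np
from scipy.spatial.distance import pdist

# Sketch (simulated data): empirical interpoint distance distribution of a
# clustered pattern versus a homogeneous one in the unit square.

rng = np.random.default_rng(2)
homogeneous = rng.uniform(0, 1, (200, 2))
clustered = 0.5 + 0.05 * rng.normal(size=(200, 2))   # one tight cluster

for name, pts in [("homogeneous", homogeneous), ("clustered", clustered)]:
    d = pdist(pts)                        # all pairwise distances
    # Empirical CDF of the interpoint distance at a few lags.
    for t in (0.1, 0.25, 0.5):
        print(name, f"F({t}) = {np.mean(d <= t):.3f}")
```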
Geopotential Field Anomaly Continuation with Multi-Altitude Observations
NASA Technical Reports Server (NTRS)
Kim, Jeong Woo; Kim, Hyung Rae; von Frese, Ralph; Taylor, Patrick; Rangelova, Elena
2012-01-01
Conventional gravity and magnetic anomaly continuation invokes the standard Poisson boundary condition of a zero anomaly at an infinite vertical distance from the observation surface. This simple continuation is limited, however, where multiple altitude slices of the anomaly field have been observed. Increasingly, areas are becoming available constrained by multiple boundary conditions from surface, airborne, and satellite surveys. This paper describes the implementation of continuation with multi-altitude boundary conditions in Cartesian and spherical coordinates and investigates the advantages and limitations of these applications. Continuations by EPS (Equivalent Point Source) inversion and the FT (Fourier Transform), as well as by SCHA (Spherical Cap Harmonic Analysis) are considered. These methods were selected because they are especially well suited for analyzing multi-altitude data over finite patches of the earth such as covered by the ADMAP database. In general, continuations constrained by multi-altitude data surfaces are invariably superior to those constrained by a single altitude data surface due to anomaly measurement errors and the non-uniqueness of continuation.
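For a single observation surface, the FT variant reduces to a one-line filter in the wavenumber domain. The sketch below shows this textbook single-altitude case on a synthetic anomaly (an illustration of the standard theory, not the multi-altitude scheme of the paper):

```python
import numpy as np

# Sketch of single-surface FT upward continuation: in the wavenumber
# domain an anomaly observed at height 0 is continued to height h by the
# attenuation factor exp(-|k| h).

def upward_continue(grid, dx, h):
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))           # radial wavenumber |k|
    spec = np.fft.fft2(grid) * np.exp(-k * h)    # attenuate with altitude
    return np.real(np.fft.ifft2(spec))

# Synthetic anomaly: a Gaussian source signature on a 1-km grid.
x = np.arange(-64, 64) * 1000.0
X, Y = np.meshgrid(x, x)
anomaly = np.exp(-(X**2 + Y**2) / (2 * 5000.0**2))
up = upward_continue(anomaly, dx=1000.0, h=10000.0)
print(anomaly.max(), up.max())   # amplitude decays with continuation height
```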
Monitoring crack extension in fracture toughness tests by ultrasonics
NASA Technical Reports Server (NTRS)
Klima, S. J.; Fisher, D. M.; Buzzard, R. J.
1975-01-01
An ultrasonic method was used to observe the onset of crack extension and to monitor continued crack growth in fracture toughness specimens during three-point bend tests. A 20 MHz transducer was used with commercially available equipment to detect average crack extension of less than 0.09 mm. The material tested was a 300-grade maraging steel in the annealed condition. A crack extension resistance curve was developed to demonstrate the usefulness of the ultrasonic method for minimizing the number of tests required to generate such curves.
Sealed glass coating of high temperature ceramic superconductors
Wu, Weite; Chu, Cha Y.; Goretta, Kenneth C.; Routbort, Jules L.
1995-01-01
A method and article of manufacture of a lead oxide based glass coating on a high temperature superconductor. The method includes preparing a dispersion of glass powders in a solution, applying the dispersion to the superconductor, drying the dispersion before applying another coating and heating the glass powder dispersion at temperatures below oxygen diffusion onset and above the glass melting point to form a continuous glass coating on the superconductor to establish compressive stresses which enhance the fracture strength of the superconductor.
Level repulsion and band sorting in phononic crystals
NASA Astrophysics Data System (ADS)
Lu, Yan; Srivastava, Ankit
2018-02-01
In this paper we consider the problem of avoided crossings (level repulsion) in phononic crystals and suggest a computationally efficient strategy to distinguish them from normal cross points. This process is essential for the correct sorting of the phononic bands and, subsequently, for the accurate determination of mode continuation, group velocities, and emergent properties which depend on them, such as thermal conductivity. Through explicit phononic calculations using the generalized Rayleigh quotient, we identify exact locations of exceptional points in the complex wavenumber domain which result in level repulsion in the real domain. We show that in the vicinity of the exceptional point the relevant phononic eigenvalue surfaces resemble the surfaces of a 2-by-2 parameter-dependent matrix. Along a closed loop encircling the exceptional point we show that the phononic eigenvalues are exchanged, just as they are for the 2-by-2 matrix case. However, the behavior of the associated eigenvectors is shown to be more complex in the phononic case. Along a closed loop around an exceptional point, we show that the eigenvectors can flip signs multiple times, unlike a 2-by-2 matrix where the flip of sign occurs only once. Finally, we exploit these eigenvector sign flips around exceptional points to propose a simple and efficient method of distinguishing them from normal crosses and of correctly sorting the band structure. Our proposed method is roughly an order of magnitude faster than the zoom-in method and correctly identifies >96% of the cases considered. Both its speed and accuracy can be further improved and we suggest some ways of achieving this. Our method is general and, as such, would be directly applicable to other eigenvalue problems where the eigenspectrum needs to be correctly sorted.
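The 2-by-2 analogy is easy to reproduce numerically. In the sketch below (an illustration of that analogy only, not the phononic solver; the matrix is our own choice), M(z) = [[1, z], [z, -1]] has eigenvalues plus/minus sqrt(1 + z^2), which coalesce at the exceptional point z = i; tracking the eigenvalues continuously around a loop encircling that point exchanges the two branches.

```python
import numpy as np

# Sketch: eigenvalue exchange of a 2-by-2 matrix along a closed loop
# around its exceptional point at z = 1j.

def track_eigs(center, radius, steps=400):
    thetas = np.linspace(0.0, 2 * np.pi, steps)
    z = center + radius * np.exp(1j * thetas[0])
    lam = np.linalg.eigvals(np.array([[1.0, z], [z, -1.0]]))
    lam_start = lam.copy()
    for theta in thetas[1:]:
        z = center + radius * np.exp(1j * theta)
        new = np.linalg.eigvals(np.array([[1.0, z], [z, -1.0]]))
        # Pair with the previous step so each branch stays continuous.
        if (abs(new[0] - lam[0]) + abs(new[1] - lam[1])
                > abs(new[1] - lam[0]) + abs(new[0] - lam[1])):
            new = new[::-1]
        lam = new
    return lam_start, lam

lam_start, lam_end = track_eigs(center=1j, radius=0.3)
print(np.round(lam_start, 3), "->", np.round(lam_end, 3))  # branches swapped
```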
NASA Astrophysics Data System (ADS)
Chen, Jing; Zhu, Qing; Huang, Di; Zheng, Shaobo; Zhang, Jieyu; Li, Huigai
2017-09-01
There is a significant difference in the demand for molten steel quality between thin-strip continuous casting and traditional continuous casting. To ensure good surface quality of the thin strips, an oxidation film must be generated on the surface of the cooling roller, which requires a higher oxygen potential in the molten steel and inclusions with a low melting point. In this article, the possibility of producing low-melting inclusions consisting mainly of SiO2 and MnO is studied by controlling the initial oxygen potential and the order in which deoxidizing alloys are added. The interaction activity between each component in the ternary Al2O3-SiO2-MnO system is obtained with the Action Concentration model. The iso-[Mn], [Si], [O], and [Al] curves at a temperature of 1823 K under equilibrium conditions in the ternary Al2O3-SiO2-MnO system are obtained by thermodynamic calculation as well. The control method for obtaining low-melting-point inclusions is as follows: with 0.35 wt% Si and 0.90 wt% Mn, maintaining the melting point of the inclusions around 1200 °C requires the free oxygen potential in the melted steel, F[O], to be kept between 0.002% and 0.004%; correspondingly, the acid-soluble [Al] content in the melted steel must be as low as 0.0001% to 0.0005%.
LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources
NASA Astrophysics Data System (ADS)
Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin
2017-12-01
Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable reconstruction quality compared to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended-sources, and that false-positives can be addressed by choosing an adequate model order to match the noise level.
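The core continuous-domain recovery step can be illustrated with a toy annihilating-filter computation, sketched below under strong assumptions (noiseless data, known model order K; the LEAP pipeline itself is considerably more elaborate): off-grid source locations are recovered exactly from a handful of uniform Fourier samples.

```python
import numpy as np

# Toy FRI sketch: uniform Fourier samples X[m] = sum_k a_k exp(-2j*pi*m*t_k)
# are annihilated by a filter whose roots encode the continuous-valued
# (off-grid) source locations t_k.

rng = np.random.default_rng(3)
K = 3
t_true = np.sort(rng.uniform(0, 1, K))     # off-grid source locations
a_true = rng.uniform(1, 2, K)              # source intensities

M = 5                                      # Fourier samples m = -M..M
m = np.arange(-M, M + 1)
X = (a_true * np.exp(-2j * np.pi * m[:, None] * t_true)).sum(axis=1)

# Each row of T enforces sum_l h[l] * X[m - l] = 0 for one valid m; the
# annihilating filter h (length K+1) spans the null space of T.
rows = len(X) - K
T = np.array([[X[i + K - l] for l in range(K + 1)] for i in range(rows)])
h = np.linalg.svd(T)[2][-1].conj()         # null-space (smallest) vector

t_est = np.sort(np.mod(-np.angle(np.roots(h)) / (2 * np.pi), 1.0))
print(np.round(t_true, 4))
print(np.round(t_est, 4))                  # matches the true locations
```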
NASA Astrophysics Data System (ADS)
Li, Jiqing; Duan, Zhipeng; Huang, Jing
2018-06-01
As global climate change intensifies, the shortage of water resources in China is becoming increasingly serious. Sound methods for studying changes in precipitation are therefore important for the planning and management of water resources. Based on the time series of precipitation in Beijing from 1951 to 2015, the multi-scale features of precipitation are analyzed by the Extreme-point Symmetric Mode Decomposition (ESMD) method to forecast shifts in precipitation. The results show that the precipitation series has periodic changes of 2.6, 4.3, 14 and 21.7 years, and the variance contribution rate of each modal component shows that inter-annual variation dominates precipitation in Beijing. It is predicted that precipitation in Beijing will continue to decrease in the near future.
An RBF-based reparameterization method for constrained texture mapping.
Yu, Hongchuan; Lee, Tong-Yee; Yeh, I-Cheng; Yang, Xiaosong; Li, Wenxi; Zhang, Jian J
2012-07-01
Texture mapping has long been used in computer graphics to enhance the realism of virtual scenes. To match the 3D model feature points with the corresponding pixels in a texture image, surface parameterization must satisfy specific positional constraints. However, despite numerous research efforts, the construction of a mathematically robust, foldover-free parameterization that is subject to positional constraints continues to be a challenge. In the present paper, this foldover problem is addressed by developing radial basis function (RBF)-based reparameterization. Given an initial 2D embedding of a 3D surface, the proposed method can reparameterize the 2D embedding into a foldover-free 2D mesh satisfying a set of user-specified constraint points. In addition, this approach is mesh-free, making it possible to generate smooth texture mapping results without extra smoothing optimization.
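A minimal sketch of the reparameterization idea is given below, using SciPy's thin-plate-spline RBF interpolator as a stand-in and constraint points invented for the example; the paper's foldover-free guarantee is not reproduced here. The sparse constraint displacements are interpolated with RBFs and the resulting smooth warp is applied to every vertex of the 2D embedding.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sketch: warp a 2D embedding so that embedded feature points land on
# user-specified positions, via RBF interpolation of the displacements.

rng = np.random.default_rng(4)
uv = rng.uniform(0, 1, (500, 2))           # initial 2D embedding vertices

src = np.array([[0.2, 0.2], [0.8, 0.2], [0.5, 0.8]])      # embedded features
dst = np.array([[0.25, 0.15], [0.75, 0.25], [0.5, 0.9]])  # required positions

# Thin-plate-spline RBF interpolation of the sparse displacements.
warp = RBFInterpolator(src, dst - src, kernel='thin_plate_spline')
uv_new = uv + warp(uv)                     # reparameterized embedding

print(np.allclose(src + warp(src), dst))   # constraints are met exactly
```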
NASA Astrophysics Data System (ADS)
Lenoir, Guillaume; Crucifix, Michel
2018-03-01
We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion article (Lenoir and Crucifix, 2018). All the methods presented in this paper are available to the reader in the Python package WAVEPAL.
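A stripped-down sketch of the WOSA idea for irregular sampling is shown below, using SciPy's Lomb-Scargle routine on simulated data (plain periodograms only; the paper's trend terms, CARMA noise model, and significance testing are omitted):

```python
import numpy as np
from scipy.signal import lombscargle

# Sketch: average Lomb-Scargle periodograms over overlapping segments of an
# irregularly sampled series to reduce the periodogram variance.

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 100, 500))            # irregular sample times
y = np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 1, t.size)

omega = 2 * np.pi * np.linspace(0.05, 1.0, 200)  # angular test frequencies

seg_len, overlap = 40.0, 0.5                     # WOSA segment length/overlap
starts = np.arange(0.0, 100.0 - seg_len + 1e-9, seg_len * (1 - overlap))
pgrams = []
for s in starts:
    sel = (t >= s) & (t < s + seg_len)
    ts, ys = t[sel], y[sel] - y[sel].mean()      # remove the segment mean
    pgrams.append(lombscargle(ts, ys, omega))
wosa = np.mean(pgrams, axis=0)

print("peak frequency:", omega[np.argmax(wosa)] / (2 * np.pi))  # close to 0.5
```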
An approach for spherical harmonic analysis of non-smooth data
NASA Astrophysics Data System (ADS)
Wang, Hansheng; Wu, Patrick; Wang, Zhiyong
2006-12-01
A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed, and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be empirically determined by several factors including the model resolution and the degree of non-smoothness in the dataset, and it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications are made available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kercel, S.W.
1999-11-07
For several reasons, Bayesian parameter estimation is superior to other methods for inductively learning a model for an anticipatory system. Since it exploits prior knowledge, the analysis begins from a more advantageous starting point than other methods. Also, since "nuisance parameters" can be removed from the Bayesian analysis, the description of the model need not be as complete as is necessary for such methods as matched filtering. In the limit of perfectly random noise and a perfect description of the model, the signal-to-noise ratio improves as the square root of the number of samples in the data. Even with the imperfections of real-world data, Bayesian methods approach this ideal limit of performance more closely than other methods. These capabilities provide a strategy for addressing a major unsolved problem in pump operation: the identification of precursors of cavitation. Cavitation causes immediate degradation of pump performance and ultimate destruction of the pump. However, the most efficient point to operate a pump is just below the threshold of cavitation. It might be hoped that a straightforward method to minimize pump cavitation damage would be to simply adjust the operating point until the inception of cavitation is detected and then to slightly readjust the operating point to let the cavitation vanish. However, due to the continuously evolving state of the fluid moving through the pump, the threshold of cavitation tends to wander. What is needed is to anticipate cavitation, and this requires the detection and identification of precursor features that occur just before cavitation starts.
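The square-root-of-N behaviour invoked above can be reproduced with the simplest conjugate model. The sketch below is a generic illustration unrelated to the pump data: a Gaussian prior on an unknown signal level is updated with Gaussian-noise samples, and the posterior standard deviation shrinks like 1/sqrt(N), so the effective SNR grows as sqrt(N).

```python
import numpy as np

# Sketch: conjugate normal update for an unknown mean; posterior standard
# deviation shrinks roughly as 1/sqrt(n).

rng = np.random.default_rng(6)
true_level, noise_sd = 2.0, 1.0
prior_mean, prior_sd = 0.0, 10.0          # weak prior knowledge

for n in (10, 100, 1000, 10000):
    data = rng.normal(true_level, noise_sd, n)
    post_prec = 1 / prior_sd**2 + n / noise_sd**2
    post_mean = (prior_mean / prior_sd**2 + data.sum() / noise_sd**2) / post_prec
    post_sd = post_prec ** -0.5
    print(f"n={n:6d}  mean={post_mean:6.3f}  sd={post_sd:.4f}")
# The posterior sd drops by about sqrt(10) per row.
```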
Small optical inter-satellite communication system for small and micro satellites
NASA Astrophysics Data System (ADS)
Iwamoto, Kyohei; Nakao, Takashi; Ito, Taiji; Sano, Takeshi; Ishii, Tamotsu; Shibata, Keiichi; Ueno, Mitsuhiro; Ohta, Shinji; Komatsu, Hiromitsu; Araki, Tomohiro; Kobayashi, Yuta; Sawada, Hirotaka
2017-02-01
A small optical inter-satellite communication system to be installed in small and micro satellites flying in LEO is designed, and its fundamental functions are experimentally verified. A small, lightweight, power-efficient optical inter-satellite communication system with a usable data transmission rate is a promising way to provide real-time data handling and operation capabilities for micro- and small-satellite constellations, whose payload conditions are limited. The proposed system is designed to connect satellites up to 4500 km apart, so that a constellation can communicate with a ground station continuously by relaying among LEO satellites, even while the satellites perform their own maneuvers. To connect satellites over a 4500 km range while keeping a steady data rate, accurate pointing and tracking is a crucial issue. In this paper, we propose a precise pointing and tracking method and system with miniature optics, and experimentally verify a pointing accuracy of about 10 μrad over an angular coverage of more than 500 mrad.
A Method of Efficient Inclination Changes for Low-thrust Spacecraft
NASA Technical Reports Server (NTRS)
Falck, Robert; Gefert, Leon
2002-01-01
The evolution of low-thrust propulsion technologies has reached a point where such systems have become an economical option for many space missions. The development of efficient, low-trip-time control laws has received an increasing amount of attention in recent years, though few studies have examined the subject of inclination-changing maneuvers in detail. A method for performing economical inclination changes through the use of an efficiency factor is derived from Lagrange's planetary equations. The efficiency factor can be used to regulate propellant expenditure at the expense of trip time. Such a method can be used for discontinuous-thrust transfers that offer reduced propellant masses and trip times in comparison to continuous-thrust transfers, while utilizing thrusters that operate at a lower specific impulse. Performance comparisons of transfers utilizing this approach with continuous-thrust transfers are generated through trajectory simulation and are presented in this paper.
Very accurate upward continuation to low heights in a test of non-Newtonian theory
NASA Technical Reports Server (NTRS)
Romaides, Anestis J.; Jekeli, Christopher
1989-01-01
Recently, gravity measurements were made on a tall, very stable television transmitting tower in order to detect a non-Newtonian gravitational force. This experiment required the upward continuation of gravity from the Earth's surface to points as high as only 600 m above ground. The upward continuation was based on a set of gravity anomalies in the vicinity of the tower whose data distribution exhibits essential circular symmetry and appropriate radial attenuation. Two methods were applied to perform the upward continuation: least-squares solution of a local harmonic expansion, and least-squares collocation. Both methods yield comparable results, and have estimated accuracies on the order of 50 microGal or better (1 microGal = 10^-8 m/s^2). This order of accuracy is commensurate with the tower gravity measurements (which have an estimated accuracy of 20 microGal), and enabled a definitive detection of non-Newtonian gravity. As expected, such precise upward continuations require very dense data near the tower. Less expected was the requirement of data (though sparse) up to 220 km away from the tower (in the case that only an ellipsoidal reference gravity is applied).
47 CFR 90.429 - Control point and dispatch point requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
§ 90.429, Telecommunication; Federal Communications Commission (continued), Safety and … A …-operated device which will provide continuous visual indication when the transmitter is radiating, or a pilot lamp or meter which will provide continuous visual indication when the transmitter circuits have …
Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera
Qu, Yufu; Huang, Jianyu; Zhang, Xuan
2018-01-01
In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable. PMID:29342908
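The compression step admits a simple reading, sketched below under assumptions of our own (the abstract does not spell out the exact construction): summarize each image's 2D keypoints by the centroid plus one point along each principal axis, then compare the three-point summaries across images as a cheap relatedness estimate.

```python
import numpy as np

# Speculative sketch: three "principal component points" per image as the
# centroid plus one point along each PCA axis, scaled by the axis spread.

def principal_points(pts):
    c = pts.mean(axis=0)
    w, v = np.linalg.eigh(np.cov(pts.T))    # ascending eigenvalues
    p_major = c + np.sqrt(w[1]) * v[:, 1]   # major-axis point
    p_minor = c + np.sqrt(w[0]) * v[:, 0]   # minor-axis point
    return np.stack([c, p_major, p_minor])

rng = np.random.default_rng(7)
kp_a = rng.normal([320, 240], [80, 30], (1000, 2))   # keypoints, image A
kp_b = kp_a + [5, 2]                                 # slightly shifted view

pa, pb = principal_points(kp_a), principal_points(kp_b)
# A cheap inter-image similarity: distances between the two summaries.
print(np.linalg.norm(pa - pb, axis=1))     # small values: related images
```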
A new approach to assess COPD by identifying lung function break-points
Eriksson, Göran; Jarenbäck, Linnea; Peterson, Stefan; Ankerst, Jaro; Bjermer, Leif; Tufvesson, Ellen
2015-01-01
Purpose: COPD is a progressive disease, which can take different routes, leading to great heterogeneity. The aim of the post-hoc analysis reported here was to perform continuous analyses of advanced lung function measurements, using linear and nonlinear regressions. Patients and methods: Fifty-one COPD patients with mild to very severe disease (Global Initiative for Chronic Obstructive Lung Disease [GOLD] Stages I–IV) and 41 healthy smokers were investigated post-bronchodilation by flow-volume spirometry, body plethysmography, diffusion capacity testing, and impulse oscillometry. The relationship between COPD severity, based on forced expiratory volume in 1 second (FEV1), and different lung function parameters was analyzed by a flexible nonparametric method, linear regression, and segmented linear regression with break-points. Results: Most lung function parameters were nonlinear in relation to spirometric severity. Parameters related to volume (residual volume, functional residual capacity, total lung capacity, diffusion capacity of the lung for carbon monoxide, diffusion capacity of the lung for carbon monoxide/alveolar volume) and reactance (reactance area and reactance at 5 Hz) were segmented with break-points at 60%–70% of FEV1. FEV1/forced vital capacity (FVC) and resonance frequency had break-points around 80% of FEV1, while many resistance parameters had break-points below 40%. The slopes in percent predicted differed; resistance at 5 Hz minus resistance at 20 Hz had a linear slope change of −5.3 per unit FEV1, while residual volume had no slope change above and a −3.3 change per unit FEV1 below its break-point of 61%. Conclusion: Continuous analyses of different lung function parameters over the spirometric COPD severity range gave valuable information additional to categorical analyses. Parameters related to volume, diffusion capacity, and reactance showed break-points around 65% of FEV1, indicating that air trapping starts to dominate in moderate COPD (FEV1 = 50%–80%). This may have an impact on the patient's management plan and selection of patients and/or outcomes in clinical research. PMID:26508849
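The segmented-regression machinery used here is easy to sketch. The toy example below (simulated data and a plain grid search of our own, not the study's statistical software) fits a single-break-point hinge model by least squares and recovers a break-point near 65% of FEV1:

```python
import numpy as np

# Sketch: fit y = b0 + b1*x + b2*max(x - c, 0) by least squares for each
# candidate break-point c and keep the one with the smallest SSE.

def segmented_fit(x, y, candidates):
    best = None
    for c in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    return best

rng = np.random.default_rng(8)
fev1 = rng.uniform(20, 100, 120)                 # % predicted FEV1
# Simulated volume parameter: flat above a 65% break-point, rising below.
y = 100 + np.where(fev1 < 65, -3.0 * (fev1 - 65), 0.0) + rng.normal(0, 5, 120)

sse, c, beta = segmented_fit(fev1, y, candidates=np.arange(40, 90, 1.0))
print("estimated break-point:", c)               # near 65
```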
ERIC Educational Resources Information Center
Steinhausen, Hans-Christoph; Metzke, Christa Winkler
2007-01-01
Background: The goal of this study was to assess the course of functional-somatic symptoms from late childhood to young adulthood and the associations of these symptoms with young adult psychopathology. Methods: Data were collected in a large community sample at three different points in time (1994, 1997, and 2001). Functional-somatic symptoms…
IR spectroscopic studies in microchannel structures
NASA Astrophysics Data System (ADS)
Guber, A. E.; Bier, W.
1998-06-01
Various available microengineering methods make it possible to produce, among other devices, microreaction systems. These microreactors consist of microchannels, where chemical reactions take place under defined conditions. For optimum process control, continuous online analytics is envisaged in the microchannels. For this purpose, a special analytical module has been developed. It may be applied for IR spectroscopic studies at any point of the microchannel.
Amini, A A; Chen, Y; Curwen, R W; Mani, V; Sun, J
1998-06-01
Magnetic resonance imaging (MRI) is unique in its ability to noninvasively and selectively alter tissue magnetization and create tagged patterns within a deforming body such as the heart muscle. The resulting patterns define a time-varying curvilinear coordinate system on the tissue, which we track with coupled B-snake grids. B-spline bases provide local control of shape, compact representation, and parametric continuity. Efficient spline warps are proposed which warp an area in the plane such that two embedded snake grids obtained from two tagged frames are brought into registration, interpolating a dense displacement vector field. The reconstructed vector field adheres to the known displacement information at the intersections, forces corresponding snakes to be warped into one another, and for all other points in the plane, where no information is available, a C1 continuous vector field is interpolated. The implementation proposed in this paper improves on our previous variational-based implementation and generalizes warp methods to include biologically relevant contiguous open curves, in addition to standard landmark points. The methods are validated with a cardiac motion simulator, in addition to in-vivo tagging data sets.
Testing deformation hypotheses by constraints on a time series of geodetic observations
NASA Astrophysics Data System (ADS)
Velsink, Hiddo
2018-01-01
In geodetic deformation analysis observations are used to identify form and size changes of a geodetic network, representing objects on the earth's surface. The network points are monitored, often continuously, because of suspected deformations. A deformation may affect many points during many epochs. The problem is that the best description of the deformation is, in general, unknown. To find it, different hypothesised deformation models have to be tested systematically for agreement with the observations. The tests have to be capable of stating with a certain probability the size of detectable deformations, and to be datum invariant. A statistical criterion is needed to find the best deformation model. Existing methods do not fulfil these requirements. Here we propose a method that formulates the different hypotheses as sets of constraints on the parameters of a least-squares adjustment model. The constraints can relate to subsets of epochs and to subsets of points, thus combining time series analysis and congruence model analysis. The constraints are formulated as nonstochastic observations in an adjustment model of observation equations. This gives an easy way to test the constraints and to get a quality description. The proposed method aims at providing a good discriminating method to find the best description of a deformation. The method is expected to improve the quality of geodetic deformation analysis. We demonstrate the method with an elaborate example.
Circadian rhythm of energy expenditure and oxygen consumption.
Leuck, Marlene; Levandovski, Rosa; Harb, Ana; Quiles, Caroline; Hidalgo, Maria Paz
2014-02-01
This study aimed to evaluate the effect of continuous and intermittent methods of enteral nutrition (EN) administration on circadian rhythm. Thirty-four individuals, aged between 52 and 80 years, were fed through a nasoenteric tube. Fifteen individuals received a continuous infusion for 24 hours/d, and 19 received an intermittent infusion in comparable quantities, every 4 hours from 8:00 to 20:00. In each patient, 4 indirect calorimetric measurements were carried out over 24 hours (A: 7:30, B: 10:30, C: 14:30, and D: 21:30) for 3 days. Energy expenditure and oxygen consumption were significantly higher in the intermittent group than in the continuous group (1782 ± 862 vs 1478 ± 817 kcal/24 hours, P = .05; 257 ± 125 vs 212 ± 117 mL/min, P = .048, respectively). The intermittent group had higher levels of energy expenditure and oxygen consumption at all the measured time points compared with the continuous group. Energy expenditure and oxygen consumption in both groups were significantly different throughout the day for 3 days. There is circadian rhythm variation of energy expenditure and oxygen consumption with both continuous and intermittent infusion for EN. This suggests that a single daily indirect calorimetric measurement cannot show the patient's true needs. Energy expenditure is higher at night with both food administration methods. Moreover, energy expenditure and oxygen consumption are higher with the intermittent administration method at all times.
An Inviscid Decoupled Method for the Roe FDS Scheme in the Reacting Gas Path of FUN3D
NASA Technical Reports Server (NTRS)
Thompson, Kyle B.; Gnoffo, Peter A.
2016-01-01
An approach is described to decouple the species continuity equations from the mixture continuity, momentum, and total energy equations for the Roe flux difference splitting scheme. This decoupling simplifies the implicit system, so that the flow solver can be made significantly more efficient, with very little penalty on overall scheme robustness. Most importantly, the computational cost of the point implicit relaxation is shown to scale linearly with the number of species for the decoupled system, whereas the fully coupled approach scales quadratically. Also, the decoupled method significantly reduces the cost in wall time and memory in comparison to the fully coupled approach. This work lays the foundation for development of an efficient adjoint solution procedure for high speed reacting flow.
Loose fusion based on SLAM and IMU for indoor environment
NASA Astrophysics Data System (ADS)
Zhu, Haijiang; Wang, Zhicheng; Zhou, Jinglin; Wang, Xuejing
2018-04-01
The simultaneous localization and mapping (SLAM) method based on the RGB-D sensor has been widely researched in recent years. However, the accuracy of RGB-D SLAM relies heavily on corresponding feature points, and the position can be lost in scenes with sparse textures. Therefore, many fusion methods using RGB-D information and inertial measurement unit (IMU) data have been investigated to improve the accuracy of SLAM systems. However, these fusion methods usually do not take into account the number of matched feature points: the pose estimated from RGB-D information may not be accurate when the number of correct matches is too small. Thus, considering the impact of matches on the SLAM system and the problem of lost positions in scenes with few textures, a loose fusion method combining RGB-D with IMU is proposed in this paper. In the proposed method, we design a loose fusion strategy based on the RGB-D camera information and IMU data, which uses the IMU data for position estimation when the corresponding point matches are too few, while the RGB-D information is still used to estimate the position when there are many matches. The final pose is optimized in the General Graph Optimization (g2o) framework to reduce error. The experimental results show that the proposed method outperforms the RGB-D-only method and that it works stably in indoor environments with sparse textures in the SLAM system.
Quantum simulation of quantum field theory using continuous variables
Marshall, Kevin; Pooser, Raphael C.; Siopsis, George; ...
2015-12-14
Much progress has been made in the field of quantum computing using continuous variables over the last couple of years. This includes the generation of extremely large entangled cluster states (10,000 modes, in fact) as well as a fault-tolerant architecture. This has led to the point that continuous-variable quantum computing can indeed be thought of as a viable alternative for universal quantum computing. With that in mind, we present a new algorithm for continuous-variable quantum computers which gives an exponential speedup over the best known classical methods. Specifically, this relates to efficiently calculating the scattering amplitudes in scalar bosonic quantum field theory, a problem that is known to be hard using a classical computer. Thus, we give an experimental implementation based on cluster states that is feasible with today's technology.
Up-and-coming IMCs. [Intermetallic-Matrix Composites
NASA Technical Reports Server (NTRS)
Bowman, Randy; Noebe, Ronald
1989-01-01
While the good oxidation and environmental resistance, high melting points, and comparatively low densities of such ordered intermetallics as Ti3Al, NiAl, FeAl, and NbAl3 render them good candidates for advanced aerospace structures, their poor toughness at low temperatures and low strength at elevated temperatures have prompted the development of fiber-reinforced intermetallic-matrix composites (IMCs) with more balanced characteristics. Fabrication methods for continuous-fiber IMCs under development include the P/M 'powder cloth' method, the foil/fiber method, and thermal spraying. The ultimate success of IMCs depends on fibers truly compatible with the matrix materials.
Kinematic Methods of Designing Free Form Shells
NASA Astrophysics Data System (ADS)
Korotkiy, V. A.; Khmarova, L. I.
2017-11-01
A geometrical shell model is formed according to specified requirements expressed through surface parameters. The shell is modelled using the kinematic method, according to which the shell is formed as a continuous one-parameter set of curves. The authors offer a kinematic method based on the use of second-order curves with variable eccentricity as the form-making element. Additional guiding ruled surfaces are used to control the form of the designed surface. The authors developed a software application for plotting a second-order curve specified by an arbitrary set of five coplanar points and tangents.
Tang, Tao; Zhang, Jun; Cao, Dongxiao; Shuai, Shijin; Zhao, Yanguang
2014-12-01
This study investigated the filtration and continuous regeneration of a particulate filter system on an engine test bench, consisting of a diesel oxidation catalyst (DOC) and a catalyzed diesel particulate filter (CDPF). Both the DOC and the CDPF led to a high conversion of NO to NO2 for continuous regeneration. The filtration efficiency on solid particle number (SPN) was close to 100%. The post-CDPF particles were mainly in accumulation mode. The downstream SPN was sensitively influenced by the variation of the soot loading. This phenomenon provides a method for determining the balance point temperature by measuring the trend of SPN concentration.
Antfolk, Maria; Laurell, Thomas
2017-05-01
Rare cells in blood, such as circulating tumor cells or fetal cells in the maternal circulation, possess great prognostic or diagnostic value and are important for the development of personalized medicine, where the study of rare cells could inform more specifically targeted treatments. Where conventional cell separation methods, such as flow cytometry or magnetic-activated cell sorting, have fallen short, other methods are sought. Microfluidics has been used extensively for isolating and processing rare cells, as it offers possibilities not present in conventional systems. Furthermore, microfluidic methods offer new possibilities for cell separation as they often rely on non-traditional biomarkers and intrinsic cell properties. This offers the possibility to isolate cell populations that would otherwise not be targeted using conventional methods. Here, we provide an extensive review of the latest advances in continuous-flow microfluidic rare cell separation and processing, taking each cell type's specific characteristics and separation challenges as the point of view.
Özcan, E; Eldeniz, A U; Arı, H
2011-12-01
To evaluate the ability of two root canal sealers (epoxy resin-based AH Plus or polydimethylsiloxane-based GuttaFlow) and five root filling techniques (continuous wave of condensation, Thermafil, lateral condensation, matched-taper single gutta-percha point, laterally condensed matched-taper gutta-percha point) to kill bacteria in experimentally infected dentinal tubules. An infected dentine block model was used. One hundred and twenty extracted, single-rooted human teeth were randomly divided into 10 test (n = 10) and 2 control (n = 10) groups. The roots, except negative controls, were infected with Enterococcus faecalis for 21 days. The root canals were then filled using the test materials and methods. Positive controls were not filled. Sterile roots were used as negative controls. Dentine powder was obtained from all root canals with Gates Glidden drills using a standard method. The dentine powder was diluted and inoculated into bacterial growth media. Total colony-forming units (CFU) were calculated for each sample. Statistical analysis was performed using the Kruskal-Wallis and Mann-Whitney U tests. The epoxy resin-based sealer was effective in killing E. faecalis except when using Thermafil (P < 0.05), but the polydimethylsiloxane-based sealer was not effective in killing this microorganism except in the continuous wave group (P < 0.05). In the test model, AH Plus killed bacteria in infected dentine more effectively than GuttaFlow. The filling method was less important than the sealer material.
Deception studies manipulating centrally acting performance modifiers: a review.
Williams, Emily L; Jones, Hollie S; Sparks, Sandy; Marchant, David C; Micklewright, Dominic; McNaughton, Lars R
2014-07-01
Athletes anticipatorily set and continuously adjust pacing strategies before and during events to produce optimal performance. Self-regulation ensures maximal effort is exerted in correspondence with the end point of exercise, while preventing physiological changes that are detrimental and disruptive to homeostatic control. The integration of feedforward and feedback information, together with the proposed brain's performance modifiers is said to be fundamental to this anticipatory and continuous regulation of exercise. The manipulation of central, regulatory internal and external stimuli has been a key focus within deception research, attempting to influence the self-regulation of exercise and induce improvements in performance. Methods of manipulating performance modifiers such as unknown task end point, deceived duration or intensity feedback, self-belief, or previous experience create a challenge within research, as although they contextualize theoretical propositions, there are few ecological and practical approaches which integrate theory with practice. In addition, the different methods and measures demonstrated in manipulation studies have produced inconsistent results. This review examines and critically evaluates the current methods of how specific centrally controlled performance modifiers have been manipulated, within previous deception studies. From the 31 studies reviewed, 10 reported positive effects on performance, encouraging future investigations to explore the mechanisms responsible for influencing pacing and consequently how deceptive approaches can further facilitate performance. The review acts to discuss the use of expectation manipulation not only to examine which methods of deception are successful in facilitating performance but also to understand further the key components used in the regulation of exercise and performance.
NASA Astrophysics Data System (ADS)
Mitasova, H.; Hardin, E. J.; Kratochvilova, A.; Landa, M.
2012-12-01
Multitemporal data acquired by modern mapping technologies provide unique insights into processes driving land surface dynamics. These high resolution data also offer an opportunity to improve the theoretical foundations and accuracy of process-based simulations of evolving landforms. We discuss development of a new generation of visualization and analytics tools for GRASS GIS designed for 3D multitemporal data from repeated lidar surveys and from landscape process simulations. We focus on data and simulation methods that are based on point sampling of continuous fields and lead to representation of evolving surfaces as series of raster map layers or voxel models. For multitemporal lidar data we present workflows that combine open source point cloud processing tools with GRASS GIS and custom Python scripts to model and analyze dynamics of coastal topography (Figure 1), and we outline development of a coastal analysis toolbox. The simulations focus on a particle sampling method for solving continuity equations and its application to geospatial modeling of landscape processes. In addition to water and sediment transport models, already implemented in GIS, the new capabilities under development combine OpenFOAM for wind shear stress simulation with a new module for aeolian sand transport and dune evolution simulations. Comparison of observed dynamics with the results of simulations is supported by a new, integrated 2D and 3D visualization interface that provides highly interactive and intuitive access to the redesigned and enhanced visualization tools. Several case studies will be used to illustrate the presented methods and tools, demonstrate the power of workflows built with FOSS, and highlight their interoperability.
Figure 1. Isosurfaces representing the evolution of the shoreline and a z = 4.5 m contour between the years 1997-2011 at Cape Hatteras, NC, extracted from a voxel model derived from a series of lidar-based DEMs.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air... point drains, mercury knock-out pots, and other closed mercury collection points a. At least once each...
Salas-Reyes, Isela Guadalupe; Arriaga-Jordán, Carlos Manuel; Rebollar-Rebollar, Samuel; García-Martínez, Anastacio; Albarrán-Portillo, Benito
2015-08-01
The objective of this study was to assess the sustainability of 10 dual-purpose cattle farms in a subtropical area of central Mexico. The IDEA method (Indicateurs de Durabilité des Exploitations Agricoles) was applied, which includes the agroecological, socio-territorial and economic scales (scores from 0 to 100 points per scale). A sample of 47 farms, from a total of 91 registered in the local livestock growers association, was analysed with principal component analysis and cluster analysis. From these results, 10 farms were selected for the in-depth study reported herein, the selection criterion being continuous milk production throughout the year. Farms scored 88 and 86 points on the agroecological scale in the rainy and dry seasons, respectively. On the socio-territorial scale, scores were 73 points for both seasons, with the employment and services component being the strongest. Scores on the economic scale were 64 and 56 points for the rainy and dry seasons, respectively, when no economic cost for family labour is charged; these decrease to 59 and 45 points when an opportunity cost for family labour is considered. Dual-purpose farms in the subtropical area of central Mexico have medium sustainability, with the economic scale being both the limiting factor and an area of opportunity.
A continuous function model for path prediction of entities
NASA Astrophysics Data System (ADS)
Nanda, S.; Pray, R.
2007-04-01
As militaries across the world continue to evolve, the roles of humans in various theatres of operation are being increasingly targeted by military planners for substitution with automation. Forward observation and direction of supporting arms to neutralize threats from dynamic adversaries is one such example. However, contemporary tracking and targeting systems are incapable of serving autonomously for they do not embody the sophisticated algorithms necessary to predict the future positions of adversaries with the accuracy offered by the cognitive and analytical abilities of human operators. The need for these systems to incorporate methods characterizing such intelligence is therefore compelling. In this paper, we present a novel technique to achieve this goal by modeling the path of an entity as a continuous polynomial function of multiple variables expressed as a Taylor series with a finite number of terms. We demonstrate the method for evaluating the coefficient of each term to define this function unambiguously for any given entity, and illustrate its use to determine the entity's position at any point in time in the future.
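The paper derives the polynomial coefficients analytically; as a rough stand-in, the same truncated-series idea can be sketched with least-squares fits of per-coordinate polynomials to an observed track, evaluated at a future time (all data here are hypothetical):

```python
import numpy as np

def fit_path(t, xy, order=3):
    """Least-squares fit of truncated polynomials x(t), y(t) to an observed track."""
    return np.polyfit(t, xy[:, 0], order), np.polyfit(t, xy[:, 1], order)

def predict(cx, cy, t_future):
    """Evaluate the fitted series at a future time to extrapolate the position."""
    return np.polyval(cx, t_future), np.polyval(cy, t_future)

t = np.linspace(0, 10, 50)                       # observation times (s)
track = np.c_[3*t + 0.2*t**2, 40 - 0.5*t**2]     # hypothetical entity track
cx, cy = fit_path(t, track)
print(predict(cx, cy, 12.0))                     # position 2 s beyond the data
```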
Continuous Deformation Monitoring of Subway Tunnel Based on Terrestrial Point Clouds
NASA Astrophysics Data System (ADS)
Kang, Z.; Tuo, L.; Zlatanova, S.
2012-07-01
The deformation monitoring of subway tunnels is of extraordinary necessity. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent-stations registration is replaced by section-controlled registration, so that common control points can be used by each station and error accumulation within a section is thus avoided. Afterwards, the central axis of the subway tunnel is determined through the RANSAC (Random Sample Consensus) algorithm and curve fitting. Although of very high resolution, laser points are still discrete, and thus the vertical section is computed via quadric fitting of the vicinity of interest, instead of fitting the whole model of the subway tunnel; the section is determined by the intersection line rotated about the central axis of the tunnel within a vertical plane. The extraction of the vertical section is then optimized using RANSAC for the purpose of filtering out noise. Based on the extracted vertical sections, the volume of tunnel deformation is estimated by comparing vertical sections extracted at the same position from different epochs of point clouds. Furthermore, the continuously extracted vertical sections are deployed to evaluate the convergence tendency of the tunnel. The proposed algorithms are verified using real datasets in terms of accuracy and computational efficiency. The fitting accuracy analysis shows that the maximum deviation between interpolated points and real points is 1.5 mm and the minimum is 0.1 mm; the convergence tendency of the tunnel was detected by comparison of adjacent fitting radii, with a maximum error of 6 mm and a minimum of 1 mm. The computation cost of vertical section extraction is within 3 seconds per section, which demonstrates high efficiency.
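The cross-section extraction rests on robust fitting against noisy, discrete returns. A minimal sketch of the RANSAC idea for one cross-section, simplified here to a circular section fitted in 2-D (the paper uses quadric fits; radius, tolerance, and data below are hypothetical):

```python
import numpy as np

def circle_from_3pts(p):
    """Circumcircle of three 2-D points: subtracting |p_i - c|^2 = r^2 pairwise
    gives a linear system for the center c."""
    A = 2.0 * (p[1:] - p[0])
    b = (p[1:]**2).sum(axis=1) - (p[0]**2).sum()
    c = np.linalg.solve(A, b)
    return c, np.linalg.norm(p[0] - c)

def ransac_circle(pts, n_iter=500, tol=0.005, seed=1):
    """Best circle by inlier count; noisy points are simply outvoted."""
    rng, best, best_inl = np.random.default_rng(seed), None, -1
    for _ in range(n_iter):
        try:
            c, r = circle_from_3pts(pts[rng.choice(len(pts), 3, replace=False)])
        except np.linalg.LinAlgError:      # degenerate (collinear) sample
            continue
        inl = (np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol).sum()
        if inl > best_inl:
            best_inl, best = inl, (c, r)
    return best

theta = np.linspace(0, 2*np.pi, 400)
section = 2.75 * np.c_[np.cos(theta), np.sin(theta)]        # 2.75 m radius
section += np.random.default_rng(2).normal(0, 0.002, section.shape)
center, radius = ransac_circle(section)
# Comparing `radius` at the same chainage across epochs gives the convergence trend.
```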
Han, Sung-Ho; Farshchi-Heydari, Salman; Hall, David J
2010-01-20
A novel time-domain optical method to reconstruct the relative concentration, lifetime, and depth of a fluorescent inclusion is described. We establish an analytical method for estimating these parameters for a localized fluorescent object directly from simple evaluations of the continuous wave intensity, the exponential decay, and the temporal position of the maximum of the fluorescence temporal point-spread function. Since the more complex full inversion process is not involved, this method permits robust and fast processing in exploring the properties of a fluorescent inclusion. This method is confirmed by in vitro and in vivo experiments. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
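In the spirit of that direct evaluation, a toy numpy sketch shows how the three readouts might be pulled from a synthetic temporal point-spread function: the CW intensity as the time integral, the lifetime from the late-time exponential slope, and the peak position as the depth-sensitive quantity (the TPSF model below is illustrative, not the paper's diffusion model):

```python
import numpy as np

# Illustrative TPSF shape: a rise followed by an exponential decay.
t = np.linspace(0.01, 10, 2000)              # time (ns)
tau = 1.2                                    # fluorescence lifetime (ns)
tpsf = 5.0 * t * np.exp(-t / tau)

cw_intensity = np.trapz(tpsf, t)             # CW intensity ~ relative concentration
t_max = t[np.argmax(tpsf)]                   # peak position ~ depth-sensitive readout

tail = t > 4 * t_max                         # late times: nearly pure exponential
slope = np.polyfit(t[tail], np.log(tpsf[tail]), 1)[0]
tau_est = -1.0 / slope                       # approximate lifetime from the tail
```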
Reviving common standards in point-count surveys for broad inference across studies
Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.
2014-01-01
We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and analysis by regional and national data centers.
Dinç, Erdal; Özdemir, Nurten; Üstündağ, Özgür; Tilkan, Müşerref Günseli
2013-01-01
Dissolution testing is of vital importance as a quality control test and for predicting the in vivo behavior of oral dosage formulations. This requires a powerful analytical method to obtain reliable, accurate and precise results for the dissolution experiments. In this context, new signal-processing approaches based on continuous wavelet transforms (CWTs) were developed for the simultaneous quantitative estimation and dissolution testing of lamivudine (LAM) and zidovudine (ZID) in a tablet dosage form. The CWT approaches are based on the application of continuous wavelet functions to the absorption spectra-data vectors of LAM and ZID in the wavelet domain. After applying many wavelet functions, the families consisting of the Mexican hat wavelet with scaling factor a=256, the Symlet wavelet of order 5 with scaling factor a=512, and the Daubechies wavelet of order 10 at scale factor a=450 were found to be suitable for the quantitative determination of the mentioned drugs. These wavelet applications were named the mexh-CWT, sym5-CWT and db10-CWT methods. Calibration graphs for LAM and ZID in the working ranges of 2.0-50.0 µg/mL and 2.0-60.0 µg/mL were obtained by measuring the mexh-CWT, sym5-CWT and db10-CWT amplitudes at the wavelength points corresponding to zero-crossing points. The validity and applicability of the mexh-CWT, sym5-CWT and db10-CWT approaches were confirmed by analysis of synthetic mixtures containing the analyzed drugs. Simultaneous determination of LAM and ZID in tablets was accomplished by the proposed CWT methods and their dissolution profiles were graphically explored.
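A minimal sketch of the zero-crossing CWT trick with PyWavelets: transform overlapping spectra with a Mexican-hat wavelet and read the mixture's amplitude where the interfering component's transform crosses zero. Spectra, scale, and band positions are made up; note that pywt's cwt supports continuous families such as 'mexh' and 'morl', so the sym5/db10 variants of the paper are not reproduced here:

```python
import numpy as np
import pywt

wl = np.linspace(200, 320, 601)                    # wavelength grid (nm)
spec_a = np.exp(-((wl - 255) / 12.0) ** 2)         # stand-in for drug A
spec_b = np.exp(-((wl - 275) / 15.0) ** 2)         # stand-in for drug B
mixture = 0.6 * spec_a + 0.4 * spec_b              # strongly overlapped spectra

scale = [32]
coef_b, _ = pywt.cwt(spec_b, scale, "mexh")        # transform of interferent B
coef_m, _ = pywt.cwt(mixture, scale, "mexh")       # transform of the mixture

# Zero crossing of B's transform: at that wavelength B contributes nothing,
# so the mixture amplitude there is proportional to A's concentration alone.
zc = np.where(np.diff(np.sign(coef_b[0])))[0][0]
print(wl[zc], coef_m[0][zc])                       # calibration reading for A
```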
Use of generalised additive models to categorise continuous variables in clinical prediction
2013-01-01
Background In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared to the continuous predictor as the best option, using the AIC and AUC evaluation parameters. The sample was divided into a derivation (60%) and validation (40%) samples. The first was used to obtain the cut points while the second was used to validate the proposed methodology. Results The three-category proposal for the respiratory rate was ≤ 20;(20,24];> 24, for which the following values were obtained: AIC=314.5 and AUC=0.638. The respective values for the continuous predictor were AIC=317.1 and AUC=0.634, with no statistically significant differences being found between the two AUCs (p =0.079). The four-category proposal for PCO2 was ≤ 43;(43,52];(52,65];> 65, for which the following values were obtained: AIC=258.1 and AUC=0.81. No statistically significant differences were found between the AUC of the four-category option and that of the continuous predictor, which yielded an AIC of 250.3 and an AUC of 0.825 (p =0.115). Conclusions Our proposed method provides clinicians with the number and location of cut points for categorising variables, and performs as successfully as the original continuous predictor when it comes to developing clinical prediction rules. PMID:23802742
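As a sketch of the categorisation workflow under stated assumptions (the pygam library stands in for the paper's P-spline GAM; the data are simulated), one can fit an additive logistic model and read candidate cut points from the fitted smooth:

```python
import numpy as np
from pygam import LogisticGAM, s

# Simulated stand-ins: respiratory rate and PCO2 -> poor evolution (0/1).
rng = np.random.default_rng(0)
X = np.c_[rng.normal(22, 4, 500), rng.normal(50, 10, 500)]
logit = 0.15 * (X[:, 0] - 22) + 0.05 * (X[:, 1] - 50)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Additive logistic model with penalized spline smoothers (pygam's B-splines
# play the role of the P-spline smoothers in the paper).
gam = LogisticGAM(s(0) + s(1)).fit(X, y)

# Candidate cut points: where the (centered) smooth for a predictor crosses
# zero, separating below-average- from above-average-risk ranges; further
# thresholds on the smooth give the low/average/high-risk categories.
grid = gam.generate_X_grid(term=0)
smooth = gam.partial_dependence(term=0, X=grid)
cuts = grid[np.where(np.diff(np.sign(smooth)))[0], 0]
```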
Study on launch scheme of space-net capturing system.
Gao, Qingyu; Zhang, Qingbin; Feng, Zhiwei; Tang, Qiangang
2017-01-01
With the continuous progress in active debris-removal technology, scientists are increasingly concerned about the concept of space-net capturing system. The space-net capturing system is a long-range-launch flexible capture system, which has great potential to capture non-cooperative targets such as inactive satellites and upper stages. In this work, the launch scheme is studied by experiment and simulation, including two-step ejection and multi-point-traction analyses. The numerical model of the tether/net is based on finite element method and is verified by full-scale ground experiment. The results of the ground experiment and numerical simulation show that the two-step ejection and six-point traction scheme of the space-net system is superior to the traditional one-step ejection and four-point traction launch scheme.
Xu, Jing; Wang, Zhongbin; Tan, Chao; Si, Lei; Liu, Xinhua
2015-01-01
In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the disadvantages of large size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is applied to the sound. End-point continuation based on previously stored data is performed first to overcome the end-point effect. Next the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and a PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method. PMID:26528985
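A rough sketch of the decomposition-and-features stage with the PyEMD package (synthetic sound; the end-point continuation step and the PNN classifier, essentially a Gaussian-kernel density classifier, are omitted):

```python
import numpy as np
from PyEMD import EEMD   # from the EMD-signal package

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
sound = (np.sin(2*np.pi*120*t) + 0.5*np.sin(2*np.pi*640*t)
         + 0.2*rng.normal(size=t.size))            # stand-in cutting sound

imfs = EEMD(trials=50).eemd(sound, t)              # ensemble decomposition

# IMF selection mirroring the abstract: keep IMFs whose correlation with the
# first IMF reaches the average correlation coefficient.
corr = np.abs([np.corrcoef(imfs[0], imf)[0, 1] for imf in imfs])
keep = imfs[corr >= corr.mean()]

# Energy and standard deviation of the remaining IMFs as the feature vector;
# these would then feed the PNN classifier.
features = np.r_[[np.sum(imf**2) for imf in keep], [imf.std() for imf in keep]]
```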
Animals as Mobile Biological Sensors for Forest Fire Detection.
Sahin, Yasar Guneri
2007-12-04
This paper proposes a mobile biological sensor system that can assist in early detection of forest fires, one of the most dreaded natural disasters on the earth. The main idea presented in this paper is to utilize animals with sensors as Mobile Biological Sensors (MBS). The devices used in this system are animals, which are native animals living in forests; sensors (thermo and radiation sensors with GPS features) that measure the temperature and transmit the location of the MBS; access points for wireless communication; and a central computer system which classifies animal actions. The system offers two different methods. Firstly, access points continuously receive data about animals' locations using GPS at certain time intervals, and the gathered data is then classified and checked to see if there is a sudden movement (panic) of the animal groups; this method is called animal behavior classification (ABC). The second method can be defined as thermal detection (TD): the access points get the temperature values from the MBS devices and send the data to a central computer to check for instant changes in the temperatures. This system may be used for many purposes other than fire detection, namely animal tracking, poaching prevention and detecting instantaneous animal death.
Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis
NASA Astrophysics Data System (ADS)
Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.
2014-04-01
A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump is included normal to the connecting line of each agent to the central point, which produces a vortex around the temporary central point. This random jump is also suitable to cope with premature convergence, which is a feature of swarm-based optimization methods. The most important advantages of this algorithm are as follows: First, this algorithm involves a stochastic type of search with a deterministic convergence. Second, as gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion in order to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters, and finally SBFO has a strong certainty of convergence which is rare in existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions has also been investigated in this article.
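A toy numpy sketch of the update rule described above, under stated assumptions (a 2-D quadratic cost; step sizes and jump amplitude are invented, and the authors' full algorithm with its convergence safeguards is not reproduced):

```python
import numpy as np

def sbfo_step(agents, grad, step=0.05, jump=0.1, rng=None):
    """One spiral-foraging update: gradient descent on the local cost, plus
    attraction to the swarm's temporary central point, plus a random jump
    normal to the agent-center line (which produces the vortex motion)."""
    rng = rng or np.random.default_rng()
    center = agents.mean(axis=0)                 # temporary central point
    for i, a in enumerate(agents):
        to_c = center - a
        normal = np.array([-to_c[1], to_c[0]])   # perpendicular, 2-D case
        normal /= np.linalg.norm(normal) + 1e-12
        agents[i] = a - step*grad(a) + step*to_c + jump*rng.normal()*normal
    return agents

grad = lambda x: 2.0 * x                         # gradient of f(x) = |x|^2
rng = np.random.default_rng(1)
agents = rng.uniform(-5, 5, (10, 2))
for _ in range(300):
    agents = sbfo_step(agents, grad, rng=rng)
print(agents.mean(axis=0))                       # swarm contracts near the optimum
```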
Multistate metadynamics for automatic exploration of conical intersections
NASA Astrophysics Data System (ADS)
Lindner, Joachim O.; Röhr, Merle I. S.; Mitrić, Roland
2018-05-01
We introduce multistate metadynamics for automatic exploration of conical intersection seams between adiabatic Born-Oppenheimer potential energy surfaces in molecular systems. By choosing the energy gap between the electronic states as a collective variable, the metadynamics drives the system from an arbitrary ground-state configuration toward the intersection seam. Upon reaching the seam, the multistate electronic Hamiltonian is extended by introducing biasing potentials into the off-diagonal elements, and the molecular dynamics is continued on a modified potential energy surface obtained by diagonalization of the latter. The off-diagonal bias serves to locally open the energy gap and push the system to the next intersection point. In this way, the conical intersection energy landscape can be explored, identifying minimum energy crossing points and the barriers separating them. We illustrate the method on the example of furan, a prototype organic molecule exhibiting rich photophysics. The multistate metadynamics reveals plateaus on the conical intersection energy landscape from which the minimum energy crossing points with characteristic geometries can be extracted. The method can be combined with the broad spectrum of electronic structure methods and represents a generally applicable tool for the exploration of photophysics and photochemistry in complex molecules and materials.
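The biased-Hamiltonian step can be pictured with a two-state toy model: Gaussian bias accumulated in the off-diagonal element locally opens the gap, and diagonalization gives the modified surfaces on which the dynamics continues (the potentials and bias parameters below are invented, not furan's):

```python
import numpy as np

def biased_energies(q, bias_centers, height=0.05, width=0.3):
    """Two-state model: Gaussian bias deposited in the off-diagonal element
    locally opens the gap; diagonalizing yields the modified surfaces."""
    e1 = 0.5 * q**2                       # model ground-state surface
    e2 = 0.5 * (q - 1.0)**2 + 0.2         # model excited-state surface
    v12 = sum(height * np.exp(-(q - c)**2 / (2 * width**2)) for c in bias_centers)
    h = np.array([[e1, v12], [v12, e2]])
    return np.linalg.eigvalsh(h)          # adiabatic energies with bias

# The unbiased gap closes at q = 0.7 in this model; deposited bias re-opens
# it there and pushes the dynamics onward to the next intersection point.
print(biased_energies(0.7, bias_centers=[]))           # bare, degenerate pair
print(biased_energies(0.7, bias_centers=[0.65, 0.7]))  # gap opened by the bias
```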
ERIC Educational Resources Information Center
Salthouse, Timothy A.
2011-01-01
The commentaries on my article contain a number of points with which I disagree but also several with which I agree. For example, I continue to believe that the existence of many cases in which between-person variability does not increase with age indicates that greater variance with increased age is not inevitable among healthy individuals up to…
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Lin, W. C. W.
1980-01-01
A decoupling and pole-placement technique has been developed for the Annular Suspension and Pointing System (ASPS) of the Space Shuttle which uses bandwidths as performance criteria. The dynamics of the continuous-data ASPS allows the three degrees of freedom to be totally decoupled by state feedback through constant gains, so that the bandwidth of each degree of freedom can be independently specified without interaction. Although it is found that the digital ASPS cannot be completely decoupled, the bandwidth requirements are satisfied by pole placement and a trial-and-error method based on approximate decoupling.
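Once an axis is decoupled, its bandwidth is set by pole placement through constant-gain state feedback. A minimal sketch for a single axis using scipy's pole-placement routine (the double-integrator model and the pole locations are illustrative, not the ASPS dynamics):

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical double-integrator model of one decoupled pointing axis.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Poles chosen for roughly a 4 rad/s bandwidth with 0.707 damping.
poles = np.array([-2.83 + 2.83j, -2.83 - 2.83j])
K = place_poles(A, B, poles).gain_matrix

print(np.linalg.eigvals(A - B @ K))   # closed loop has the requested poles
```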
A centroid molecular dynamics study of liquid para-hydrogen and ortho-deuterium.
Hone, Tyler D; Voth, Gregory A
2004-10-01
Centroid molecular dynamics (CMD) is applied to the study of collective and single-particle dynamics in liquid para-hydrogen at two state points and liquid ortho-deuterium at one state point. The CMD results are compared with the results of classical molecular dynamics, quantum mode coupling theory, a maximum entropy analytic continuation approach, pair-product forward-backward semiclassical dynamics, and available experimental results. The self-diffusion constants are in excellent agreement with the experimental measurements for all systems studied. Furthermore, it is shown that the method is able to adequately describe both the single-particle and collective dynamics of quantum liquids. (c) 2004 American Institute of Physics
Interactive CT-Video Registration for the Continuous Guidance of Bronchoscopy
Merritt, Scott A.; Khare, Rahul; Bascom, Rebecca
2014-01-01
Bronchoscopy is a major step in lung cancer staging. To perform bronchoscopy, the physician uses a procedure plan, derived from a patient's 3D computed-tomography (CT) chest scan, to navigate the bronchoscope through the lung airways. Unfortunately, physicians vary greatly in their ability to perform bronchoscopy. As a result, image-guided bronchoscopy systems, drawing upon the concept of CT-based virtual bronchoscopy (VB), have been proposed. These systems attempt to register the bronchoscope's live position within the chest to a CT-based virtual chest space. Recent methods, which register the bronchoscopic video to CT-based endoluminal airway renderings, show promise but do not enable continuous real-time guidance. We present a CT-video registration method inspired by computer-vision innovations in the fields of image alignment and image-based rendering. In particular, motivated by the Lucas-Kanade algorithm, we propose an inverse-compositional framework built around a gradient-based optimization procedure. We next propose an implementation of the framework suitable for image-guided bronchoscopy. Laboratory tests, involving both single frames and continuous video sequences, demonstrate the robustness and accuracy of the method. Benchmark timing tests indicate that the method can run continuously at 300 frames/s, well beyond the real-time bronchoscopic video rate of 30 frames/s. This compares extremely favorably to the ≥1 s/frame speeds of other methods and indicates the method's potential for real-time continuous registration. A human phantom study confirms the method's efficacy for real-time guidance in a controlled setting, and, hence, points the way toward the first interactive CT-video registration approach for image-guided bronchoscopy. Along this line, we demonstrate the method's efficacy in a complete guidance system by presenting a clinical study involving lung cancer patients. PMID:23508260
Systems and methods for processing irradiation targets through a nuclear reactor
Dayal, Yogeshwar; Saito, Earl F.; Berger, John F.; Brittingham, Martin W.; Morales, Stephen K.; Hare, Jeffrey M.
2016-05-03
Apparatuses and methods produce radioisotopes in instrumentation tubes of operating commercial nuclear reactors. Irradiation targets may be inserted and removed from instrumentation tubes during operation and converted to radioisotopes otherwise unavailable during operation of commercial nuclear reactors. Example apparatuses may continuously insert, remove, and store irradiation targets to be converted to useable radioisotopes or other desired materials at several different origin and termination points accessible outside an access barrier such as a containment building, drywell wall, or other access restriction preventing access to instrumentation tubes during operation of the nuclear plant.
NASA Astrophysics Data System (ADS)
Qiu, Mo; Yu, Simin; Wen, Yuqiong; Lü, Jinhu; He, Jianbin; Lin, Zhuosheng
In this paper, a novel design methodology and its FPGA hardware implementation for a universal chaotic signal generator is proposed via the Verilog HDL fixed-point algorithm and state machine control. According to continuous-time or discrete-time chaotic equations, a Verilog HDL fixed-point algorithm and its corresponding digital system are first designed. In the FPGA hardware platform, each operation step of Verilog HDL fixed-point algorithm is then controlled by a state machine. The generality of this method is that, for any given chaotic equation, it can be decomposed into four basic operation procedures, i.e. nonlinear function calculation, iterative sequence operation, iterative values right shifting and ceiling, and chaotic iterative sequences output, each of which corresponds to only a state via state machine control. Compared with the Verilog HDL floating-point algorithm, the Verilog HDL fixed-point algorithm can save the FPGA hardware resources and improve the operation efficiency. FPGA-based hardware experimental results validate the feasibility and reliability of the proposed approach.
A study of the response of nonlinear springs
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Knott, T. W.; Johnson, E. R.
1991-01-01
The various phases of developing a methodology for studying the response of a spring-reinforced arch subjected to a point load are discussed. The arch is simply supported at its ends, with both the spring and the point load assumed to be at midspan. The spring is present to offset the typical snap-through behavior normally associated with arches, and to provide a structure that responds with constant resistance over a finite displacement. The phases discussed consist of the following: (1) development of the closed-form solution for the shallow arch case; (2) development of a finite difference analysis to study (shallow) arches; and (3) development of a finite element analysis for studying more general shallow and nonshallow arches. The two numerical analyses rely on a continuation scheme to move the solution past limit points and onto bifurcated paths, both characteristics being common to the arch problem. An eigenvalue method is used for the continuation scheme. The finite difference analysis is based on a mixed formulation (force and displacement variables) of the governing equations. The governing equations for the mixed formulation are in first-order form, making the finite difference implementation convenient. However, the mixed formulation is not well suited to the eigenvalue continuation scheme, which provided the motivation for the displacement-based finite element analysis. Both the finite difference and the finite element analyses are compared with the closed-form shallow arch solution. Agreement is excellent, except for the potential problems with the finite difference analysis and the continuation scheme. Agreement between the finite element analysis and another investigator's numerical analysis for deep arches is also good.
Tsafrir, D; Tsafrir, I; Ein-Dor, L; Zuk, O; Notterman, D A; Domany, E
2005-05-15
We introduce a novel unsupervised approach for the organization and visualization of multidimensional data. At the heart of the method is a presentation of the full pairwise distance matrix of the data points, viewed in pseudocolor. The ordering of points is iteratively permuted in search of a linear ordering, which can be used to study embedded shapes. Several examples indicate how the shapes of certain structures in the data (elongated, circular and compact) manifest themselves visually in our permuted distance matrix. It is important to identify the elongated objects, since they are often associated with a set of hidden variables underlying continuous variation in the data. The problem of determining an optimal linear ordering is shown to be NP-Complete, and therefore an iterative search algorithm with O(n^3) step-complexity is suggested. By using Sorting Points Into Neighborhoods (SPIN) to analyze colon cancer expression data, we were able to address the serious problem of sample heterogeneity, which hinders identification of metastasis-related genes in our data. Our methodology brings to light the continuous variation of heterogeneity, starting with homogeneous tumor samples and gradually increasing the amount of another tissue. Ordering the samples according to their degree of contamination by unrelated tissue allows the separation of genes associated with irrelevant contamination from those related to cancer progression. A software package will be available for academic users upon request.
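A toy sketch of the ordering search on synthetic "elongated" data (a simple pairwise-swap local search stands in for the paper's algorithm; the weight matrix and iteration budget are invented):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, 60))                 # hidden ordering variable
X = np.c_[np.cos(3*t), np.sin(3*t)] + rng.normal(0, 0.02, (60, 2))
D = squareform(pdist(X))                           # full pairwise distance matrix

n = len(D)
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
W = np.exp(-((i - j) / n)**2)                      # weight mass near the diagonal

def cost(perm):
    """Low when small distances concentrate near the diagonal."""
    return np.sum(D[np.ix_(perm, perm)] * W)

perm = rng.permutation(n)
for _ in range(4000):                              # iterative pairwise-swap search
    a, b = rng.integers(n, size=2)
    cand = perm.copy()
    cand[a], cand[b] = cand[b], cand[a]
    if cost(cand) < cost(perm):
        perm = cand
# Viewing D[np.ix_(perm, perm)] in pseudocolor reveals the elongated structure.
```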
Duc, Myriam; Gaboriaud, Fabien; Thomas, Fabien
2005-09-01
The effects of experimental procedures on the acid-base consumption titration curves of montmorillonite suspensions were studied using continuous potentiometric titration. For that purpose, the hysteresis amplitudes between the acid and base branches were found useful for systematically evaluating the impacts of storage conditions (wet or dried), the atmosphere in the titration reactor, the solid-liquid ratio, the time interval between successive increments, and the ionic strength. In the case of storage conditions, the increase of the hysteresis was significantly higher for longer storage of clay in suspension and for drying procedures, compared to "fresh" clay suspension. Titration carried out under air demonstrated carbonate contamination that could only be eliminated by performing experiments under inert gas. Interestingly, increasing the time intervals between successive increments of titrant strongly emphasized the amplitude of hysteresis, which could be correlated with the slow kinetic process specifically observed for acid addition in acid media. Such kinetic behavior is probably associated with dissolution processes of the clay particles. However, the resulting curves recorded at different ionic strengths under optimized conditions did not show the common intersection point required to define a point of zero charge. Nevertheless, the ionic strength dependence of the point of zero net proton charge suggested that the point of zero charge of sodic montmorillonite could be estimated as lower than 5.
Zanderigo, Francesca; Sparacino, Giovanni; Kovatchev, Boris; Cobelli, Claudio
2007-09-01
The aim of this article was to use continuous glucose error-grid analysis (CG-EGA) to assess the accuracy of two time-series modeling methodologies recently developed to predict glucose levels ahead of time using continuous glucose monitoring (CGM) data. We considered subcutaneous time series of glucose concentration monitored every 3 minutes for 48 hours by the minimally invasive CGM sensor Glucoday® (Menarini Diagnostics, Florence, Italy) in 28 type 1 diabetic volunteers. Two prediction algorithms, based on first-order polynomial and autoregressive (AR) models, respectively, were considered with prediction horizons of 30 and 45 minutes and forgetting factors (ff) of 0.2, 0.5, and 0.8. CG-EGA was used on the predicted profiles to assess their point and dynamic accuracies using original CGM profiles as reference. Continuous glucose error-grid analysis showed that the accuracy of both prediction algorithms is overall very good and that their performance is similar from a clinical point of view. However, the AR model seems preferable for hypoglycemia prevention. CG-EGA also suggests that, irrespective of the time-series model, the use of ff = 0.8 yields the highest accurate readings in all glucose ranges. For the first time, CG-EGA is proposed as a tool to assess clinically relevant performance of a prediction method separately at hypoglycemia, euglycemia, and hyperglycemia. In particular, we have shown that CG-EGA can be helpful in comparing different prediction algorithms, as well as in optimizing their parameters.
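A rough sketch of the AR-with-forgetting predictor evaluated above: an AR(1) model fitted by exponentially weighted least squares on hypothetical CGM samples, then iterated ahead (the paper's exact models and the CG-EGA scoring are not reproduced):

```python
import numpy as np

def ar1_forecast(y, ff=0.8, steps=10):
    """AR(1) fitted by exponentially weighted least squares (the forgetting
    factor ff down-weights old samples), then iterated `steps` ahead."""
    sw = np.sqrt(ff ** np.arange(len(y) - 1)[::-1])   # weights, newest heaviest
    X = np.c_[y[:-1], np.ones(len(y) - 1)]
    a, b = np.linalg.lstsq(X * sw[:, None], y[1:] * sw, rcond=None)[0]
    pred = y[-1]
    for _ in range(steps):
        pred = a * pred + b
    return pred

cgm = 120 + 30 * np.sin(np.linspace(0, 3, 200))       # hypothetical 3-min samples
print(ar1_forecast(cgm, ff=0.8, steps=10))            # 10 x 3 min = 30 min ahead
```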
Abbasi Tarighat, Maryam; Nabavi, Masoume; Mohammadizadeh, Mohammad Reza
2015-06-15
A new multi-component analysis method based on zero-crossing point continuous wavelet transformation (CWT) was developed for the simultaneous spectrophotometric determination of Cu(2+) and Pb(2+) ions, based on complex formation with 2-benzyl espiro[isoindoline-1,5 oxasolidine]-2,3,4 trione (BSIIOT). The absorption spectra were evaluated with respect to synthetic ligand concentration, time of complexation and pH. According to the absorbance values, 0.015 mmol L(-1) BSIIOT, 10 min after mixing, and pH 8.0 were used as the optimum values. Complex formation between the BSIIOT ligand and the cations Cu(2+) and Pb(2+) was investigated by application of rank annihilation factor analysis (RAFA). Among the wavelet families, the Daubechies-4 (db4), discrete Meyer (dmey), Morlet (morl) and Symlet-8 (sym8) continuous wavelet transforms were found suitable for signal treatment. The new synthetic ligand and the selected mother wavelets were used for the simultaneous determination of strongly overlapped spectra of species without any pre-chemical treatment. CWT signals together with the zero-crossing technique were therefore applied directly to the overlapping absorption spectra of Cu(2+) and Pb(2+). The calibration graphs for estimation of Pb(2+) and Cu(2+) were obtained by measuring the CWT amplitudes at the zero-crossing points in the wavelet domain. The proposed method was validated by simultaneous determination of Cu(2+) and Pb(2+) ions in red beans, walnut, rice, tea and soil samples. The results obtained with the proposed method were compared with those predicted by partial least squares (PLS) and flame atomic absorption spectrophotometry (FAAS). Copyright © 2015 Elsevier B.V. All rights reserved.
Novel Analytic Methods Needed for Real-Time Continuous Core Body Temperature Data
Hertzberg, Vicki; Mac, Valerie; Elon, Lisa; Mutic, Nathan; Mutic, Abby; Peterman, Katherine; Tovar-Aguilar, J. Antonio; Economos, Jeannie; Flocks, Joan; McCauley, Linda
2017-01-01
Affordable measurement of core body temperature, Tc, in a continuous, real-time fashion is now possible. With this advance comes a new data analysis paradigm for occupational epidemiology. We characterize issues arising after obtaining Tc data over 188 workdays for 83 participating farmworkers, a population vulnerable to effects of rising temperatures due to climate change. We describe a novel approach to these data using smoothing and functional data analysis. This approach highlights different data aspects compared to describing Tc at a single time point or summaries of the time course into an indicator function (e.g., did Tc ever exceed 38°C, the threshold limit value for occupational heat exposure). Participants working in ferneries had significantly higher Tc at some point during the workday compared to those working in nurseries, despite a shorter workday for fernery participants. Our results typify the challenges and opportunities in analyzing big data streams from real-time physiologic monitoring. PMID:27756853
Interpolation of longitudinal shape and image data via optimal mass transport
NASA Astrophysics Data System (ADS)
Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen
2014-03-01
Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition from two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.
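In one dimension the OMT (displacement) interpolant has a closed form through quantile functions, which makes the mass-preserving idea easy to see. A numpy sketch on two synthetic densities standing in for two acquisition time points (the paper's full shape/image setting is far richer):

```python
import numpy as np

def displacement_interpolation(x, p0, p1, t, n_particles=2000):
    """McCann/OMT interpolant between two 1-D densities via their quantile
    functions; mass is preserved at every intermediate time t."""
    q = (np.arange(n_particles) + 0.5) / n_particles
    cdf0 = np.cumsum(p0) / p0.sum()
    cdf1 = np.cumsum(p1) / p1.sum()
    pos0 = np.interp(q, cdf0, x)                  # quantiles of each density
    pos1 = np.interp(q, cdf1, x)
    xt = (1 - t) * pos0 + t * pos1                # transported mass particles
    hist, _ = np.histogram(xt, bins=len(x), range=(x[0], x[-1]), density=True)
    return hist

x = np.linspace(0, 10, 200)
p0 = np.exp(-(x - 3)**2)                          # shape at the first time point
p1 = 0.5 * np.exp(-(x - 7)**2 / 4)                # shape at the second time point
mid = displacement_interpolation(x, p0, p1, 0.5)  # sensible in-between state
# t slightly outside [0, 1] gives the short extrapolation mentioned above.
```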
Shi, Jinjie; Yazdi, Shahrzad; Lin, Sz-Chin Steven; Ding, Xiaoyun; Chiang, I-Kao; Sharp, Kendra; Huang, Tony Jun
2011-07-21
Three-dimensional (3D) continuous microparticle focusing has been achieved in a single-layer polydimethylsiloxane (PDMS) microfluidic channel using a standing surface acoustic wave (SSAW). The SSAW was generated by the interference of two identical surface acoustic waves (SAWs) created by two parallel interdigital transducers (IDTs) on a piezoelectric substrate with a microchannel precisely bonded between them. To understand the working principle of the SSAW-based 3D focusing and investigate the position of the focal point, we computed longitudinal waves, generated by the SAWs and radiated into the fluid media from opposite sides of the microchannel, and the resultant pressure and velocity fields due to the interference and reflection of the longitudinal waves. Simulation results predict the existence of a focusing point which is in good agreement with our experimental observations. Compared with other 3D focusing techniques, this method is non-invasive, robust, energy-efficient, easy to implement, and applicable to nearly all types of microparticles.
Non-destructive scanning for applied stress by the continuous magnetic Barkhausen noise method
NASA Astrophysics Data System (ADS)
Franco Grijalba, Freddy A.; Padovese, L. R.
2018-01-01
This paper reports the use of a non-destructive continuous magnetic Barkhausen noise technique to detect applied stress on steel surfaces. The stress profile generated in a sample of 1070 steel subjected to a three-point bending test is analyzed. The influence of different parameters such as pickup coil type, scanner speed, applied magnetic field and frequency band analyzed on the effectiveness of the technique is investigated. A moving smoothing window based on a second-order statistical moment is used to analyze the time signal. The findings show that the technique can be used to detect applied stress profiles.
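The moving smoothing window based on a second-order statistical moment amounts to a running mean square of the Barkhausen signal. A numpy sketch on a synthetic signal whose noise amplitude follows a hypothetical bending-stress profile:

```python
import numpy as np

def moving_second_moment(sig, win=256):
    """Running second-order moment (mean square) over a sliding window."""
    kernel = np.ones(win) / win
    return np.convolve(sig**2, kernel, mode="same")

rng = np.random.default_rng(0)
n = 20000
stress = np.hanning(n)                      # hypothetical bending-stress profile
mbn = rng.normal(0.0, 1.0 + 2.0 * stress)   # Barkhausen amplitude grows with stress
envelope = moving_second_moment(mbn)        # peaks where the applied stress peaks
```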
Continuous Modeling of Calcium Transport Through Biological Membranes
NASA Astrophysics Data System (ADS)
Jasielec, J. J.; Filipek, R.; Szyszkiewicz, K.; Sokalski, T.; Lewenstam, A.
2016-08-01
In this work an approach to the modeling of biological membranes, in which the membrane is treated as a continuous medium, is presented. The Nernst-Planck-Poisson model, including the Poisson equation for the electric potential, is used to describe the transport of ions in the mitochondrial membrane, the interface which joins the mitochondrial matrix with the cellular cytosol. The transport of calcium ions is considered. The concentration of calcium inside the mitochondrion is not known accurately because different analytical methods give dramatically different results. We explain these differences mathematically, assuming a complexing reaction inside the mitochondrion and the existence of the calcium set-point (the concentration of calcium in the cytosol below which calcium stops entering the mitochondrion).
Some estimation formulae for continuous time-invariant linear systems
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Sidhu, G. S.
1975-01-01
In this brief paper we examine a Riccati equation decomposition due to Reid and Lainiotis and apply the result to the continuous time-invariant linear filtering problem. Exploitation of the time-invariant structure leads to integration-free covariance recursions which are of use in covariance analyses and in filter implementations. A super-linearly convergent iterative solution to the algebraic Riccati equation (ARE) is developed. The resulting algorithm, arranged in a square-root form, is thought to be numerically stable and competitive with other ARE solution methods. Certain covariance relations that are relevant to the fixed-point and fixed-lag smoothing problems are also discussed.
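As a sketch of the ARE step in such covariance analyses, under stated assumptions (a made-up two-state system, and scipy's CARE solver rather than the paper's square-root iteration), the steady-state filter covariance and gain follow by duality:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Made-up time-invariant system; the steady-state filter covariance solves
# A P + P A' - P H' R^{-1} H P + Q = 0, mapped onto scipy's CARE by duality.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
H = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)                        # process noise covariance
R = np.array([[0.05]])                     # measurement noise covariance

P = solve_continuous_are(A.T, H.T, Q, R)   # steady-state error covariance
K = P @ H.T @ np.linalg.inv(R)             # steady-state filter gain
```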
LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics
NASA Astrophysics Data System (ADS)
Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel
2017-10-01
Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of the training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
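A toy sketch of the LSH idea for binary training-image patterns: bucket each pattern by k random bit positions, then compute exact Hamming distances only within the colliding bucket (sizes and data are arbitrary; the RLE speedup and the full LSHSIM pipeline are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.integers(0, 2, size=(5000, 64))      # binary TI patterns

# Hash every pattern by k randomly chosen bit positions: similar patterns
# tend to collide, so only one bucket is scanned instead of all patterns.
k = 12
bits = rng.choice(64, size=k, replace=False)
weights = 1 << np.arange(k)
keys = patterns[:, bits] @ weights
buckets = {}
for idx, key in enumerate(keys):
    buckets.setdefault(int(key), []).append(idx)

target = patterns[123]
candidates = buckets[int(target[bits] @ weights)]
# Exact Hamming distance only within the colliding bucket.
best = min(candidates, key=lambda i: np.count_nonzero(patterns[i] != target))
```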
Trail resource impacts and an examination of alternative assessment techniques
Marion, J.L.; Leung, Y.-F.
2001-01-01
Trails are a primary recreation resource facility on which recreation activities are performed. They provide safe access to non-roaded areas, support recreational opportunities such as hiking, biking, and wildlife observation, and protect natural resources by concentrating visitor traffic on resistant treads. However, increasing recreational use, coupled with poorly designed and/or maintained trails, has led to a variety of resource impacts. Trail managers require objective information on trails and their conditions to monitor trends, direct trail maintenance efforts, and evaluate the need for visitor management and resource protection actions. This paper reviews trail impacts and different types of trail assessments, including inventory, maintenance, and condition assessment approaches. Two assessment methods, point sampling and problem assessment, are compared empirically from separate assessments of a 15-mile segment of the Appalachian Trail in Great Smoky Mountains National Park. Results indicate that point sampling and problem assessment methods yield distinctly different types of quantitative information. The point sampling method provides more accurate and precise measures of trail characteristics that are continuous or frequent (e.g., tread width or exposed soil). The problem assessment method is a preferred approach for monitoring trail characteristics that can be easily predefined or are infrequent (e.g., excessive width or secondary treads), particularly when information on the location of specific trail impact problems is needed. The advantages and limitations of these two assessment methods are examined in relation to various management and research information needs. The choice and utility of these assessment methods are also discussed.
An improved exploratory search technique for pure integer linear programming problems
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1990-01-01
The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
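A condensed sketch of the pipeline under stated assumptions: solve the continuous relaxation (scipy's linprog stands in for the simplex step), round to a feasible integer point, then walk the ±1 exploratory neighborhood while it improves. The two-variable problem is invented, and this simple local search is only illustrative of the heuristic's flavor:

```python
import numpy as np
from scipy.optimize import linprog

# Invented problem: max 5x + 4y  s.t.  6x + 4y <= 24, x + 2y <= 6, x, y >= 0.
c = np.array([-5.0, -4.0])                     # linprog minimizes, so negate
A_ub = np.array([[6.0, 4.0], [1.0, 2.0]])
b_ub = np.array([24.0, 6.0])

relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub).x   # optimal continuous solution
x = np.floor(relaxed)                          # feasible here since A, x >= 0

def feasible(p):
    return np.all(A_ub @ p <= b_ub + 1e-9) and np.all(p >= 0)

improved = True                                # +/-1 exploratory neighborhood
while improved:
    improved = False
    for step in np.vstack([np.eye(2), -np.eye(2)]):
        cand = x + step
        if feasible(cand) and c @ cand < c @ x:
            x, improved = cand, True
print(x, -(c @ x))                             # integer point and its objective
```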
Prolonged in vivo imaging of Xenopus laevis.
Hamilton, Paul W; Henry, Jonathan J
2014-08-01
While live imaging of embryonic development over long periods of time is a well established method for embryos of the frog Xenopus laevis, once development has progressed to the swimming stages, continuous live imaging becomes more challenging because the tadpoles must be immobilized. Current imaging techniques for these advanced stages generally require bringing the tadpoles in and out of anesthesia for short imaging sessions at selected time points, severely limiting the resolution of the data. Here we demonstrate that creating a constant flow of diluted tricaine methanesulfonate (MS-222) over a tadpole greatly improves their survival under anesthesia. Based on this result, we describe a new method for imaging stage 48 to 65 X. laevis, by circulating the anesthetic using a peristaltic pump. This supports the animal during continuous live imaging sessions for at least 48 hr. The addition of a stable optical window allows for high quality imaging through the anesthetic solution. This automated imaging system provides for the first time a method for continuous observations of developmental and regenerative processes in advanced stages of Xenopus over 2 days. Developmental Dynamics 243:1011-1019, 2014. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Gabellani, S.; Silvestro, F.; Rudari, R.; Boni, G.
2008-12-01
Flood forecasting undergoes constant evolution, becoming more and more demanding of the models used for hydrologic simulations. The advantages of developing distributed or semi-distributed models are now clear, and the importance of continuous distributed modeling is emerging. A proper schematization of the infiltration process is vital to these types of models. Many popular infiltration schemes, reliable and easy to implement, are too simplistic for the development of continuous hydrologic models. On the other hand, the unavailability of detailed and descriptive information on soil properties often limits the implementation of complete infiltration schemes. In this work, a combination of the Soil Conservation Service Curve Number method (SCS-CN) and a method derived from the Horton equation is proposed in order to overcome the inherent limits of the two schemes. The SCS-CN method is easily applicable over large areas, but has structural limitations. The Horton-like methods have parameters that, though measurable to a point, are difficult to estimate reliably at catchment scale. The objective of this work is to overcome these limits by proposing a calibration procedure which maintains the broad applicability of the SCS-CN method as well as the continuous description of the infiltration process given by a suitably modified Horton equation. The estimation of the parameters of the modified Horton method is carried out using a formal analogy with the SCS-CN method under specific conditions. Some applications at catchment scale, within a distributed model, are presented.
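A compact sketch of the calibration idea under stated assumptions (simple textbook forms of both schemes; the paper's modified Horton equation with recovery is not reproduced): choose the Horton decay rate so that, for a reference storm, the continuous scheme reproduces the SCS-CN event runoff.

```python
import numpy as np
from scipy.optimize import brentq

def scs_runoff(p_tot, cn):
    """Event runoff depth (mm) by the SCS-CN method."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s                                 # initial abstraction
    return (p_tot - ia)**2 / (p_tot - ia + s) if p_tot > ia else 0.0

def horton_runoff(rain, f0, fc, k, dt=1.0):
    """Continuous runoff with Horton infiltration f(t) = fc + (f0 - fc)e^{-kt}."""
    f = fc + (f0 - fc) * np.exp(-k * dt * np.arange(len(rain)))
    return np.maximum(rain - f * dt, 0.0).sum()

# Calibration by formal analogy: pick k so a reference storm reproduces the
# SCS-CN event runoff (f0, fc, CN and the storm itself are invented values).
rain = np.full(24, 2.0)                          # 24 h at 2 mm/h
target = scs_runoff(rain.sum(), cn=75)
k_cal = brentq(lambda k: horton_runoff(rain, 10.0, 1.0, k) - target, 1e-4, 5.0)
```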
Development, history, and future of automated cell counters.
Green, Ralph; Wachsmann-Hogiu, Sebastian
2015-03-01
Modern automated hematology instruments use either optical methods (light scatter), impedance-based methods based on the Coulter principle (changes in electrical current induced by blood cells flowing through an electrically charged opening), or a combination of both optical and impedance-based methods. Progressive improvement in these instruments has allowed the enumeration and evaluation of blood cells with great accuracy, precision, and speed at very low cost. Future directions of hematology instrumentation include the addition of new parameters and the development of point-of-care instrumentation. In the future, in-vivo analysis of blood cells may allow noninvasive and near-continuous measurements. Copyright © 2015 Elsevier Inc. All rights reserved.
Method for high specific bioproductivity of α,ω-alkanedicarboxylic acids
Mobley, David Paul; Shank, Gary Keith
2000-01-01
This invention provides a low-cost method of producing α,ω-alkanedicarboxylic acids. Particular bioconversion conditions result in highly efficient conversion of fatty acid, fatty acid ester, or alkane substrates to diacids. Candida tropicalis AR40 or similar yeast strains are grown in a medium containing a carbon source and a nitrogen source at a temperature of 31 °C to 38 °C, while additional carbon source is continuously added, until maximum cell growth is attained. Within 0-3 hours of this point, substrate is added to the culture to initiate conversion. An α,ω-alkanedicarboxylic acid made according to this method is also provided.
NASA Technical Reports Server (NTRS)
Avila, Edwin M. Martinez; Muniz, Ricardo; Szafran, Jamie; Dalton, Adam
2011-01-01
Lines of code (LOC) analysis is one of the methods used to measure programmer productivity and estimate schedules of programming projects. The Launch Control System (LCS) had previously used this method to estimate the amount of work and to plan development efforts. The disadvantage of using LOC as a measure of effort is that coding accounts for only 30% to 35% of the total effort of software projects [8]. In this application, function points are used instead of LOC for a better estimate of the hours needed to develop each piece of software. Because of these disadvantages, Jamie Szafran of the System Software Branch of Control And Data Systems (NE-C3) at Kennedy Space Center developed a web application called the Function Point Analysis (FPA) Depot. The objective of this web application is that the LCS software architecture team can use the data to more accurately estimate the effort required to implement customer requirements. This paper describes the evolution of the domain model used for function point analysis as project managers continually strive to generate more accurate estimates.
Solution of the equations for one-dimensional, two-phase, immiscible flow by geometric methods
NASA Astrophysics Data System (ADS)
Boronin, Ivan; Shevlyakov, Andrey
2018-03-01
Buckley-Leverett equations describe non-viscous, immiscible, two-phase filtration, which is often of interest in the modelling of oil production. For many parameters and initial conditions, the solutions of these equations exhibit non-smooth behaviour, namely discontinuities in the form of shock waves. In this paper we obtain a novel method for the solution of the Buckley-Leverett equations, which is based on the geometry of differential equations. This method is fast, accurate, stable, and describes non-smooth phenomena. The main idea of the method is that classic discontinuous solutions correspond to continuous surfaces in the space of jets - the so-called multi-valued solutions (Bocharov et al., Symmetries and conservation laws for differential equations of mathematical physics. American Mathematical Society, Providence, 1998). A mapping of multi-valued solutions from the jet space onto the plane of the independent variables is constructed. This mapping is not one-to-one, and its singular points form a curve on the plane of the independent variables, which is called the caustic. The real shock occurs at points close to the caustic and is determined by the Rankine-Hugoniot conditions.
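For orientation, the classical construction that the geometric method generalizes can be sketched numerically: the multi-valued profile obtained from the characteristics is cut by a shock placed by the Rankine-Hugoniot condition, here in its Welge-tangent form (the quadratic relative permeabilities and viscosity ratio are illustrative choices):

```python
import numpy as np

# Quadratic relative permeabilities; M is the oil/water viscosity ratio.
M = 2.0
s = np.linspace(1e-4, 1.0, 2001)                  # water saturation
fw = s**2 / (s**2 + (1 - s)**2 / M)               # fractional flow of water
dfw = np.gradient(fw, s)                          # characteristic speed f'(s)

# Welge tangent rule (equivalent to Rankine-Hugoniot with s = 0 ahead):
# the shock saturation s* satisfies f(s*)/s* = f'(s*).
g = dfw - fw / s
s_star = s[np.where(np.diff(np.sign(g)) < 0)[0][0]]
shock_speed = np.interp(s_star, s, fw) / s_star

# Self-similar profile s(x/t): rarefaction for s >= s*, jump to 0 at the front.
xi = np.linspace(0.0, 1.2 * shock_speed, 400)
rare = s >= s_star                                # f'(s) is decreasing here
profile = np.interp(xi, dfw[rare][::-1], s[rare][::-1])
profile[xi > shock_speed] = 0.0                   # the shock cuts the profile
```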
Method for adding nodes to a quantum key distribution system
Grice, Warren P
2015-02-24
An improved quantum key distribution (QKD) system and method are provided. The system and method introduce new clients at intermediate points along a quantum channel, where any two clients can establish a secret key without the need for a secret meeting between the clients. The new clients perform operations on photons as they pass through nodes in the quantum channel, and participate in a non-secret protocol that is amended to include the new clients. The system and method significantly increase the number of clients that can be supported by a conventional QKD system, with only a modest increase in cost. The system and method are compatible with a variety of QKD schemes, including polarization, time-bin, continuous variable and entanglement QKD.
["Science among the Ottoman Turks" from the critical point of view].
Ada, Turhan
According to Dr. Adivar, positive science in Ottoman Turkey was only a sometimes deficient and sometimes erroneous continuation of science in the Persian and Arabic languages, and until the XIXth century it differed no more, in content or method, than the transmission of the Greek miracle to the East; moreover, if there were any phases in which these sciences were innovated, they will be specifically noted.
NASA Astrophysics Data System (ADS)
Nishimaru, Momoko; Nakasa, Miku; Kudo, Shoji; Takiyama, Hiroshi
2017-07-01
Crystallization operations for cocrystal production carry a risk of depositing undesired crystals. At the same time, continuous manufacturing processes are attracting increasing attention. In this study, conditions for continuous cocrystallization that reduce the risk of undesired crystal deposition were investigated from the viewpoints of thermodynamics and kinetics. Anti-solvent cocrystallization was carried out in the four-component system of carbamazepine, saccharin, methanol, and water. From a preliminary batch experiment, the relationships among undesired crystal deposition, the solution composition determined by the mixing ratio of solutions, and the residence time of the crystals were considered, and the conditions of the continuous experiment were then decided. Under these conditions, the continuous experiment was carried out. The XRD patterns of the crystals obtained in the continuous experiment showed that the desired cocrystals were obtained without undesired crystals. This experimental result was evaluated using multi-component phase diagrams from the viewpoint of the movement of the operating point. From this evaluation, it was found that there is a certain operating condition under which the operating point remains fixed with time in a specific domain without the risk of depositing undesired single-component crystals. This confirms, by means of multi-component phase diagrams, the possibility of continuous production of cocrystals without the deposition risk of undesired crystals.
Gridless, pattern-driven point cloud completion and extension
NASA Astrophysics Data System (ADS)
Gravey, Mathieu; Mariethoz, Gregoire
2016-04-01
While satellites offer Earth observation with a wide coverage, other remote sensing techniques such as terrestrial LiDAR can acquire very high-resolution data on an area that is limited in extension and often discontinuous due to shadow effects. Here we propose a numerical approach to merge these two types of information, thereby reconstructing high-resolution data on a continuous large area. It is based on a pattern matching process that completes the areas where only low-resolution data is available, using bootstrapped high-resolution patterns. Currently, the most common approach to pattern matching is to interpolate the point data on a grid. While this approach is computationally efficient, it presents major drawbacks for point cloud processing because a significant part of the information is lost in the point-to-grid resampling, and a prohibitive amount of memory is needed to store large grids. To address these issues, we propose a gridless method that compares point cloud subsets without the need to use a grid. On-the-fly interpolation involves a heavy computational load, which is met by using a highly optimized GPU implementation and a hierarchical pattern searching strategy. The method is illustrated using data from the Val d'Arolla, Swiss Alps, where high-resolution terrestrial LiDAR data are fused with lower-resolution Landsat and WorldView-3 acquisitions, such that the density of points is homogenized (data completion) and the coverage is extended to a larger area (data extension).
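The following minimal sketch illustrates one gridless ingredient of such a scheme: comparing a query neighbourhood against candidate high-resolution patches directly on the points via a KD-tree, with a one-sided Chamfer distance as an illustrative (assumed) similarity measure; the paper's GPU and hierarchical-search machinery is omitted.

```python
# Minimal sketch of a gridless pattern comparison between two point-cloud
# patches. The Chamfer-style distance is an illustrative choice, not
# necessarily the authors' exact metric.
import numpy as np
from scipy.spatial import cKDTree

def patch_distance(query_pts, candidate_pts):
    """One-sided Chamfer distance: mean nearest-neighbour distance from
    each query point to the candidate patch, computed without any grid."""
    tree = cKDTree(candidate_pts)
    d, _ = tree.query(query_pts, k=1)
    return d.mean()

rng = np.random.default_rng(0)
query = rng.uniform(0, 1, size=(200, 3))            # low-resolution neighbourhood
candidates = [rng.uniform(0, 1, size=(2000, 3)) for _ in range(5)]  # HR patches

# Pick the high-resolution patch that best matches the query pattern.
best = min(range(len(candidates)),
           key=lambda i: patch_distance(query, candidates[i]))
print("best matching candidate patch:", best)
```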
Hey, Hwee Weng Dennis; Ng, Li Wen Nathaniel; Ng, Yau Hong; Sng, Wei Zhong Jonathan; Manohara, Ruben; Thambiah, Joseph Shanthakumar
2016-06-01
Proximal tibiofibular joint (PTFJ) injuries are not uncommon but are relatively understudied. This study evaluates the effectiveness of two radiographic methods in assessing the integrity of the PTFJ. This is a cross-sectional study of 2984 consecutive patients with knee X-rays performed in a single institution over a 4-month period. A total of 5968 knee X-rays were assessed using two methods: (1) the direction in which the fibula points in relation to the lateral femoral epicondyle on the anteroposterior (AP) view and to the Blumensaat line on the lateral view; (2) the degree of tibiofibular overlap as a percentage of the widest portion of the fibular head. The sensitivity and specificity of these methods in diagnosing a disrupted PTFJ were calculated. Variables including the quality of the X-rays, the weight-bearing status of AP views, and the degree of knee flexion on lateral views were also recorded. Univariate analysis was carried out to investigate the association between variables using the chi-square test for nominal data and Student's t-test for continuous data. The fibula points towards the lateral femoral epicondyle on the AP view in 94.4% of the patients and towards the posterior half of the Blumensaat line on the lateral view in 98.1% of the patients. Using this method, weight-bearing X-rays are significantly associated with the direction the fibula is pointing (p<0.01) on the AP view, and the degree of knee flexion is associated with the direction the fibula is pointing (p<0.01) on the lateral view. The AP tibiofibular overlap ranges from >0% to <75% in 94.1% of the patients, and the lateral tibiofibular overlap ranges from >0% to <75% in 84.5% of the patients. This method is associated with whether true orthogonal X-rays of the knees are taken (p=0.048). The direction in which the fibula is pointing and the percentage of tibiofibular overlap are highly specific radiographic methods useful in defining the PTFJ. The first method requires a weight-bearing view for AP assessment and >20 degrees of flexion for lateral assessment. True orthogonal AP and lateral views are required for the second method to be used. Copyright © 2016 Elsevier Ltd. All rights reserved.
Xu, Gongxian; Liu, Ying; Gao, Qunwang
2016-02-10
This paper deals with multi-objective optimization of the continuous bio-dissimilation process of glycerol to 1,3-propanediol. In order to maximize the production rate of 1,3-propanediol, maximize the conversion rate of glycerol to 1,3-propanediol, maximize the conversion rate of glycerol, and minimize the concentration of the by-product ethanol, we first propose six new multi-objective optimization models that can simultaneously optimize any two of the four objectives above. These multi-objective optimization problems are then solved using the weighted-sum and normal-boundary intersection methods, respectively. Both the Pareto filter algorithm and removal criteria are used to remove the non-Pareto optimal points obtained by the normal-boundary intersection method. The results show that the normal-boundary intersection method can successfully obtain the approximate Pareto optimal sets of all the proposed multi-objective optimization problems, while the weighted-sum approach cannot achieve the overall Pareto optimal solutions of some multi-objective problems. Copyright © 2015 Elsevier B.V. All rights reserved.
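A toy illustration of the weighted-sum scalarization discussed above is sketched below on a generic bi-objective problem (not the glycerol bioprocess model): each weight yields one candidate Pareto point.

```python
# Toy illustration of weighted-sum scalarization on a generic bi-objective
# problem; the objectives stand in for the bioprocess criteria.
import numpy as np
from scipy.optimize import minimize_scalar

# Two competing objectives of a single decision variable x in [0, 1].
f1 = lambda x: x**2               # e.g. "minimize by-product" surrogate
f2 = lambda x: (x - 1.0)**2       # e.g. "maximize production" cast as a cost

pareto = []
for w in np.linspace(0.0, 1.0, 11):
    res = minimize_scalar(lambda x: w * f1(x) + (1.0 - w) * f2(x),
                          bounds=(0.0, 1.0), method="bounded")
    pareto.append((f1(res.x), f2(res.x)))

for p in pareto:
    print(f"f1 = {p[0]:.3f}, f2 = {p[1]:.3f}")
```

On non-convex fronts this weight scan can miss whole regions of the Pareto set, which is consistent with the paper's finding that the normal-boundary intersection method covers the front more completely.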
Horn, W; Miksch, S; Egghart, G; Popow, C; Paky, F
1997-09-01
Real-time systems for monitoring and therapy planning, which receive their data from on-line monitoring equipment and computer-based patient records, require reliable data. Data validation has to utilize and combine a set of fast methods to detect, eliminate, and repair faulty data, which may lead to life-threatening conclusions. The strength of data validation results from the combination of numerical and knowledge-based methods applied to both continuously assessed high-frequency data and discontinuously assessed data. When dealing with high-frequency data, examining single measurements is not sufficient; it is essential to take into account the behavior of parameters over time. We present time-point-, time-interval-, and trend-based methods for validation and repair. These are complemented by time-independent methods for determining an overall reliability of measurements. The data validation benefits from the temporal data-abstraction process, which provides automatically derived qualitative values and patterns. The temporal abstraction is oriented on a context-sensitive and expectation-guided principle. Additional knowledge derived from domain experts forms an essential part of all of these methods. The methods are applied in the field of artificial ventilation of newborn infants. Examples from the real-time monitoring and therapy-planning system VIE-VENT illustrate the usefulness and effectiveness of the methods.
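A generic sketch of the three validation layers named above (time-point range checks, time-interval rate checks, and trend checks) follows; all thresholds are invented placeholders rather than VIE-VENT's expert-derived knowledge.

```python
# Generic sketch of time-point, time-interval, and trend-based validation.
# All limits are invented placeholders; a real system would take them from
# domain experts, as the paper describes.
import numpy as np

def validate(signal, t_step, lo=20.0, hi=90.0, max_rate=5.0, trend_limit=2.0):
    flags = {}
    # Time-point check: each single measurement must lie in a plausible range.
    flags["range"] = (signal < lo) | (signal > hi)
    # Time-interval check: change between consecutive samples must be physical.
    rate = np.abs(np.diff(signal, prepend=signal[0])) / t_step
    flags["rate"] = rate > max_rate
    # Trend check: slope of a linear fit over the whole window.
    t = np.arange(len(signal)) * t_step
    slope = np.polyfit(t, signal, 1)[0]
    flags["trend_alarm"] = abs(slope) > trend_limit
    return flags

x = np.array([45.0, 46.0, 46.5, 80.0, 47.0, 47.2])  # one spike at index 3
print(validate(x, t_step=1.0))
```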
Tipping points in the arctic: eyeballing or statistical significance?
Carstensen, Jacob; Weydmann, Agata
2012-02-01
Arctic ecosystems have experienced and are projected to experience continued large increases in temperature and declines in sea ice cover. It has been hypothesized that small changes in ecosystem drivers can fundamentally alter ecosystem functioning, and that this might be particularly pronounced for Arctic ecosystems. We present a suite of simple statistical analyses to identify changes in the statistical properties of data, emphasizing that changes in the standard error should be considered in addition to changes in mean properties. The methods are exemplified using sea ice extent, and suggest that the loss rate of sea ice accelerated by a factor of ~5 in 1996, as reported in other studies, but that increases in random fluctuations, an early warning signal, were already observed in 1990. We recommend employing the proposed methods more systematically for analyzing tipping points, to document the effects of climate change in the Arctic.
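The variance-based early-warning idea can be sketched as follows: detrend the series and track a rolling standard deviation. The data below are synthetic, not the sea-ice record analysed in the paper.

```python
# Sketch of the variance-based early-warning signal: detrend the series and
# track a rolling standard deviation. Synthetic data with a variance jump.
import numpy as np

rng = np.random.default_rng(1)
n = 60
trend = -0.05 * np.arange(n)
noise = rng.normal(0.0, np.where(np.arange(n) < 40, 0.2, 0.6))  # SD jumps
x = 10.0 + trend + noise

# Remove the linear trend, then compute a rolling standard deviation.
resid = x - np.polyval(np.polyfit(np.arange(n), x, 1), np.arange(n))
w = 10  # rolling window length
rolling_sd = np.array([resid[i - w:i].std() for i in range(w, n + 1)])
print("early-window SD:", rolling_sd[:5].round(3))
print("late-window SD: ", rolling_sd[-5:].round(3))
```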
Zhao, Y J; Liu, Y; Sun, Y C; Wang, Y
2017-08-18
To explore a three-dimensional (3D) data fusion method for integrating optically scanned tooth crowns with cone beam CT (CBCT) reconstructed tooth roots so that they transition naturally in the 3D profile. One case of mild dental crowding with full dentition was chosen from the orthodontics clinic. The CBCT data were acquired to reconstruct the dental model with tooth roots using Mimics 17.0 medical imaging software, and an optical impression was taken to obtain the dentition model with a high-precision physiological contour of the crowns using a Smart Optics dental scanner. The two models were registered in 3D based on the common part of the crowns' shape in Geomagic Studio 2012 reverse engineering software. The model coordinate system was established by defining the occlusal plane. The crown-gingiva boundary was extracted manually from the optical scanning model, and the crown-root boundary was then generated by offsetting and projecting the crown-gingiva boundary onto the root model. After trimming the crown and root models, the 3D fusion model with a physiologically contoured crown and natural root was finally formed by a curvature-continuity filling algorithm. In this study, 10 patients with mildly crowded dentition from the oral clinics were followed up with this method to obtain 3D crown and root fusion models, and 10 highly qualified doctors were invited to evaluate these fusion models subjectively. This study, based on a commercial software platform, preliminarily realized the 3D data fusion of optically scanned tooth crowns and CBCT tooth roots with a curvature-continuous shape transition. The 10 patients' 3D crown and root fusion models were constructed successfully by the method, and the average score of the doctors' subjective evaluation for these 10 models was 8.6 points (scale 0-10), which meant that all the fusion models could basically meet the needs of the oral clinics and showed that the method is feasible and efficient for orthodontic research and clinical practice. The method of this study for 3D crown and root data fusion can produce an integrated tooth or dental model closer to the natural shape. CBCT model calibration may improve the precision of the fusion model. The adaptation of this method to severe dental crowding and micromaxillary deformity needs further research.
Bayesian functional integral method for inferring continuous data from discrete measurements.
Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul
2012-02-08
Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". Inferring the value of an inaccessible continuous variable from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and, as a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
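A heavily simplified stand-in for this kind of inference is sketched below: a random-walk Metropolis sampler for a positive, smoothness-penalized latent curve observed at a few discrete points. It is a generic illustration only, not the paper's functional-integral prior or the C-peptide kinetics model.

```python
# Generic sketch: infer a positive, smooth latent curve from sparse noisy
# observations with a Gaussian smoothness prior, via random-walk Metropolis.
# This is a stand-in illustration, not the paper's exact construction.
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0.0, 10.0, 50)                 # latent curve support
obs_idx = np.arange(0, 50, 10)                    # sparse observation points
truth = 1.0 + np.sin(grid / 2.0) ** 2
y = truth[obs_idx] + rng.normal(0.0, 0.05, obs_idx.size)

sigma, lam = 0.05, 50.0                           # noise SD, smoothness weight

def log_post(f):
    if np.any(f <= 0.0):                          # positivity constraint
        return -np.inf
    like = -0.5 * np.sum((y - f[obs_idx]) ** 2) / sigma**2
    smooth = -0.5 * lam * np.sum(np.diff(f) ** 2) # smoothness prior
    return like + smooth

f = np.ones_like(grid)
lp = log_post(f)
for _ in range(20000):                            # random-walk Metropolis
    prop = f + rng.normal(0.0, 0.02, f.size)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        f, lp = prop, lp_prop

print("posterior curve at observation points:", f[obs_idx].round(3))
```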
Wave function continuity and the diagonal Born-Oppenheimer correction at conical intersections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meek, Garrett A.; Levine, Benjamin G., E-mail: levine@chemistry.msu.edu
2016-05-14
We demonstrate that though exact in principle, the expansion of the total molecular wave function as a sum over adiabatic Born-Oppenheimer (BO) vibronic states makes inclusion of the second-derivative nonadiabatic energy term near conical intersections practically problematic. In order to construct a well-behaved molecular wave function that has density at a conical intersection, the individual BO vibronic states in the summation must be discontinuous. When the second-derivative nonadiabatic terms are added to the Hamiltonian, singularities in the diagonal BO corrections (DBOCs) of the individual BO states arise from these discontinuities. In contrast to the well-known singularities in the first-derivative couplings at conical intersections, these singularities are non-integrable, resulting in undefined DBOC matrix elements. Though these singularities suggest that the exact molecular wave function may not have density at the conical intersection point, there is no physical basis for this constraint. Instead, the singularities are artifacts of the chosen basis of discontinuous functions. We also demonstrate that continuity of the total molecular wave function does not require continuity of the individual adiabatic nuclear wave functions. We classify nonadiabatic molecular dynamics methods according to the constraints placed on wave function continuity and analyze their formal properties. Based on our analysis, it is recommended that the DBOC be neglected when employing mixed quantum-classical methods and certain approximate quantum dynamical methods in the adiabatic representation.
Extremal Correlators in the AdS/CFT Correspondence
NASA Astrophysics Data System (ADS)
D'Hoker, Eric; Freedman, Daniel Z.; Mathur, Samir D.; Matusis, Alec; Rastelli, Leonardo
The non-renormalization of the 3-point functions
40 CFR 264.95 - Point of compliance.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 264.95 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE TREATMENT, STORAGE, AND DISPOSAL FACILITIES Releases From Solid Waste Management Units § 264.95 Point of compliance. (a) The Regional Administrator will...
40 CFR 264.95 - Point of compliance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 264.95 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE TREATMENT, STORAGE, AND DISPOSAL FACILITIES Releases From Solid Waste Management Units § 264.95 Point of compliance. (a) The Regional Administrator will...
Quality assurance of a gimbaled head swing verification using feature point tracking.
Miura, Hideharu; Ozawa, Shuichi; Enosaki, Tsubasa; Kawakubo, Atsushi; Hosono, Fumika; Yamada, Kiyoshi; Nagata, Yasushi
2017-01-01
To perform dynamic tumor tracking (DTT) for clinical applications safely and accurately, gimbaled head swing verification is important. We propose a quantitative gimbaled head swing verification method for daily quality assurance (QA), which uses feature point tracking and a web camera. The web camera was placed on a couch at the same position for every gimbaled head swing verification, and the head could move based on a determined input function (sinusoidal patterns; amplitude: ± 20 mm; cycle: 3 s) in the pan and tilt directions at the isocenter plane. Two consecutive images were then analyzed for each feature point using the pyramidal Lucas-Kanade (LK) method, which is an optical flow estimation algorithm. We used a tapped hole as a feature point of the gimbaled head. The period and amplitude were analyzed to acquire a quantitative gimbaled head swing value for daily QA. The mean ± SD of the period was 3.00 ± 0.03 (range: 3.00-3.07) s and 3.00 ± 0.02 (range: 3.00-3.07) s in the pan and tilt directions, respectively. The mean ± SD of the relative displacement was 19.7 ± 0.08 (range: 19.6-19.8) mm and 18.9 ± 0.2 (range: 18.4-19.5) mm in the pan and tilt directions, respectively. The gimbaled head swing was reliable for DTT. We propose a quantitative gimbaled head swing verification method for daily QA using the feature point tracking method and a web camera. Our method can quantitatively assess the gimbaled head swing for daily QA against baseline values measured at the time of acceptance and commissioning. © 2016 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
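A minimal sketch of the tracking step is given below, using OpenCV's pyramidal Lucas-Kanade routine on two consecutive frames; the camera index and the initial feature coordinates are placeholders, and the window size and pyramid depth are generic defaults rather than the authors' settings.

```python
# Sketch of tracking a single feature point (e.g. the tapped hole) between
# two consecutive frames with OpenCV's pyramidal Lucas-Kanade optical flow.
# The camera index and initial point coordinates are placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                 # assumed web-camera index
ok1, frame1 = cap.read()
ok2, frame2 = cap.read()
cap.release()

g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

p0 = np.array([[[320.0, 240.0]]], dtype=np.float32)  # placeholder feature point
p1, status, err = cv2.calcOpticalFlowPyrLK(
    g1, g2, p0, None, winSize=(21, 21), maxLevel=3)

if status[0][0] == 1:
    dx, dy = (p1 - p0)[0, 0]
    print(f"feature displacement: dx = {dx:.2f} px, dy = {dy:.2f} px")
```

Repeating this over the whole image sequence and fitting a sinusoid to the tracked positions would yield the period and amplitude statistics reported above.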
Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.
2006-01-01
We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media, Inc.
Ground-based measurements of ionospheric dynamics
NASA Astrophysics Data System (ADS)
Kouba, Daniel; Chum, Jaroslav
2018-05-01
Different methods are used to research and monitor the ionospheric dynamics using ground measurements: Digisonde Drift Measurements (DDM) and Continuous Doppler Sounding (CDS). For the first time, we present comparison between both methods on specific examples. Both methods provide information about the vertical drift velocity component. The DDM provides more information about the drift velocity vector and detected reflection points. However, the method is limited by the relatively low time resolution. In contrast, the strength of CDS is its high time resolution. The discussed methods can be used for real-time monitoring of medium scale travelling ionospheric disturbances. We conclude that it is advantageous to use both methods simultaneously if possible. The CDS is then applied for the disturbance detection and analysis, and the DDM is applied for the reflection height control.
ACCESS 3. Approximation concepts code for efficient structural synthesis: User's guide
NASA Technical Reports Server (NTRS)
Fleury, C.; Schmit, L. A., Jr.
1980-01-01
A user's guide is presented for ACCESS-3, a research-oriented program which combines dual methods and a collection of approximation concepts to achieve excellent efficiency in structural synthesis. The finite element method is used for structural analysis, and dual algorithms of mathematical programming are applied in the design optimization procedure. This program retains all of the ACCESS-2 capabilities, and the data preparation formats are fully compatible. Four distinct optimizer options were added: an interior point penalty function method (NEWSUMT); a second-order primal projection method (PRIMAL2); a second-order Newton-type dual method (DUAL2); and a first-order gradient projection-type dual method (DUAL1). A pure discrete and mixed continuous-discrete design variable capability, and zero-order approximation of the stress constraints, are also included.
Kirchhoff and Ohm in action: solving electric currents in continuous extended media
NASA Astrophysics Data System (ADS)
Dolinko, A. E.
2018-03-01
In this paper we show a simple and versatile computational simulation method for determining electric currents and electric potential in 2D and 3D media with an arbitrary distribution of resistivity. One of the highlights of the proposed method is that the simulation space containing the distribution of resistivity and the points of externally applied voltage are introduced by means of digital images or bitmaps, which easily allows simulating any phenomena involving distributions of resistivity. The simulation is based on Kirchhoff's laws of electric currents and is solved by means of an iterative procedure. The method is also generalised to account for media with distributions of reactive impedance. At the end of this work, we show an example application of the simulation, consisting of reproducing the response obtained with the geophysical method of electric resistivity tomography in the presence of soil cracks. This paper is aimed at undergraduate or graduate students interested in computational physics and electricity, and also at researchers involved in the area of continuous electric media, who may find it a simple and powerful tool for investigation.
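A minimal sketch of the iterative idea follows: relax node potentials on a 2D conductivity grid until Kirchhoff's current law is satisfied at every node. The conductivity map and electrode placement are placeholders (in the paper they are read from bitmaps), and the simple Jacobi relaxation stands in for the paper's procedure.

```python
# Minimal sketch: relax node potentials on a 2D grid so that Kirchhoff's
# current law holds at every interior node. Conductivity map and electrode
# positions are placeholders (the paper reads them from bitmap images).
import numpy as np

ny, nx = 40, 40
sigma = np.ones((ny, nx))          # uniform conductivity; replace with an image
sigma[15:25, 10:30] = 0.01         # a poorly conducting inclusion ("crack")

V = np.zeros((ny, nx))
fixed = np.zeros((ny, nx), dtype=bool)
V[0, :], fixed[0, :] = 1.0, True    # top electrode at 1 V
V[-1, :], fixed[-1, :] = 0.0, True  # bottom electrode at 0 V

for _ in range(5000):               # Jacobi relaxation
    g_n = 0.5 * (sigma[1:-1, 1:-1] + sigma[:-2, 1:-1])   # link conductances
    g_s = 0.5 * (sigma[1:-1, 1:-1] + sigma[2:, 1:-1])
    g_w = 0.5 * (sigma[1:-1, 1:-1] + sigma[1:-1, :-2])
    g_e = 0.5 * (sigma[1:-1, 1:-1] + sigma[1:-1, 2:])
    num = (g_n * V[:-2, 1:-1] + g_s * V[2:, 1:-1] +
           g_w * V[1:-1, :-2] + g_e * V[1:-1, 2:])
    V_new = V.copy()
    V_new[1:-1, 1:-1] = num / (g_n + g_s + g_w + g_e)    # current balance
    V_new[fixed] = V[fixed]
    V = V_new

print("potential at grid centre:", V[ny // 2, nx // 2].round(3))
```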
Thomas, Felicity; Signal, Matthew; Chase, J. Geoffrey
2015-01-01
Patients admitted to critical care often experience dysglycemia and high levels of insulin resistance, and various intensive insulin therapy protocols and methods have attempted to safely normalize blood glucose (BG) levels. Continuous glucose monitoring (CGM) devices allow glycemic dynamics to be captured much more frequently (every 2-5 minutes) than traditional measures of blood glucose and have begun to be used in critical care patients and neonates to help monitor dysglycemia. In an attempt to obtain better insight into the relationship between biomedical signals and patient status, some researchers have turned toward advanced time series analysis methods. In particular, Detrended Fluctuation Analysis (DFA) has been the topic of many recent studies into glycemic dynamics. DFA investigates the "complexity" of a signal, i.e., how one point in time changes relative to its neighboring points, and DFA has been applied to signals such as the inter-beat interval of the human heartbeat to differentiate healthy and pathological conditions. Analyzing the glucose metabolic system with such signal processing tools as DFA has been enabled by the emergence of high-quality CGM devices. However, there are several inconsistencies within the published work applying DFA to CGM signals. Therefore, this article presents a review and a "how-to" tutorial of DFA, and in particular its application to CGM signals, to ensure that the methods used to determine complexity are applied correctly and that any relationship between complexity and patient outcome is robust. PMID:26134835
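For reference, a compact first-order DFA implementation is sketched below and checked on signals with known exponents (white noise, alpha near 0.5; a random walk, alpha near 1.5); box sizes and the test signals are illustrative, not prescriptions for CGM data.

```python
# Compact first-order DFA, checked on signals with known scaling exponents.
# Box sizes and test signals are illustrative choices.
import numpy as np

def dfa(signal, box_sizes):
    """Return the DFA scaling exponent alpha (slope of log F vs log n)."""
    y = np.cumsum(signal - np.mean(signal))        # integrated profile
    fluct = []
    for n in box_sizes:
        n_boxes = len(y) // n
        F2 = 0.0
        for b in range(n_boxes):
            seg = y[b * n:(b + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local detrending
            F2 += np.mean((seg - trend) ** 2)
        fluct.append(np.sqrt(F2 / n_boxes))
    alpha = np.polyfit(np.log(box_sizes), np.log(fluct), 1)[0]
    return alpha

rng = np.random.default_rng(3)
white = rng.normal(size=4096)
print("alpha (white noise) ~0.5:", round(dfa(white, [8, 16, 32, 64, 128]), 2))
print("alpha (random walk) ~1.5:",
      round(dfa(np.cumsum(white), [8, 16, 32, 64, 128]), 2))
```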
Expectation values of twist fields and universal entanglement saturation of the free massive boson
NASA Astrophysics Data System (ADS)
Blondeau-Fournier, Olivier; Doyon, Benjamin
2017-07-01
The evaluation of vacuum expectation values (VEVs) in massive integrable quantum field theory (QFT) is a nontrivial renormalization-group ‘connection problem’—relating large and short distance asymptotics—and is in general unsolved. This is particularly relevant in the context of entanglement entropy, where VEVs of branch-point twist fields give universal saturation predictions. We propose a new method to compute VEVs of twist fields associated to continuous symmetries in QFT. The method is based on a differential equation in the continuous symmetry parameter, and gives VEVs as infinite form-factor series which truncate at two-particle level in free QFT. We verify the method by studying U(1) twist fields in free models, which are simply related to the branch-point twist fields. We provide the first exact formulae for the VEVs of such fields in the massive uncompactified free boson model, checking against an independent calculation based on angular quantization. We show that logarithmic terms, overlooked in the original work of Callan and Wilczek (1994 Phys. Lett. B 333 55-61), appear both in the massless and in the massive situations. This implies that, in agreement with numerical form-factor observations by Bianchini and Castro-Alvaredo (2016 Nucl. Phys. B 913 879-911), the standard power-law short-distance behavior is corrected by a logarithmic factor. We discuss how this gives universal formulae for the saturation of entanglement entropy of a single interval in near-critical harmonic chains, including loglog corrections.
Characterization of extended channel bioreactors for continuous-flow protein production
Timm, Andrea C.; Shankles, Peter G.; Foster, Carmen M.; ...
2015-10-02
Protein-based therapeutics are an important class of drugs, used to treat a variety of medical conditions including cancer and autoimmune diseases. Because they require continuous cold storage and have a limited shelf life, the ability to produce such therapeutics at the point-of-care would open up new opportunities in distributing medicines and treating patients in more remote locations. Here, the authors describe the first steps in the development of a microfluidic platform that can be used for point-of-care protein synthesis. While biologic medicines, including therapeutic proteins, are commonly produced using recombinant deoxyribonucleic acid (DNA) technology in large batch cell cultures, the system developed here utilizes cell-free protein synthesis (CFPS) technology. CFPS is a scalable technology that uses cell extracts containing the biological machinery required for transcription and translation and combines those extracts with DNA, encoding a specific gene, and the additional metabolites required to produce proteins in vitro. While CFPS reactions are typically performed in batch or fed-batch reactions, a well-engineered reaction scheme may improve both the rate of protein production and the economic efficiency of protein synthesis reactions, as well as enable a more streamlined method for subsequent purification of the protein product, all necessary requirements for point-of-care protein synthesis. In this work, the authors describe a new bioreactor design capable of continuous production of protein using cell-free protein synthesis. The bioreactors were designed with three inlets to separate reactive components prior to on-chip mixing, which lead into a long, narrow, serpentine channel. These multiscale, serpentine channel bioreactors were designed to take advantage of microscale diffusion distances across narrow channels in reactors containing enough volume to produce a therapeutic dose of protein, and open the possibility of performing these reactions continuously and in line with downstream purification modules. Here, the authors demonstrate the capability to produce protein over time with continuous-flow reactions and examine basic design features and operation specifications fundamental to continuous microfluidic protein synthesis.
Low-power laser use in the treatment of alopecia and crural ulcers
NASA Astrophysics Data System (ADS)
Ciuchita, Tavi; Usurelu, Mircea; Antipa, Ciprian; Vlaiculescu, Mihaela; Ionescu, Elena
1998-07-01
The authors sought to verify the efficacy of low-power laser (LPL) therapy in scalp alopecia and crural ulcers of different causes. The laser used was a red diode (continuous emission, 8 mW power, wavelength 670 nm, spot size about 5 mm diameter), applied on selected points for 1-2 minutes per point. Classical therapy was used as a control. Before, during, and after treatment, histological samples were taken for alopecia. For the laser groups (alopecia and ulcers), the results were superior and were obtained in a time two to three times shorter than in the control group. We conclude that LPL therapy is a very useful complementary method for the treatment of scalp alopecia and crural ulcers.
NASA Technical Reports Server (NTRS)
Wingrove, Rodney C.; Coate, Robert E.
1961-01-01
The guidance system studied for maneuvering vehicles within a planetary atmosphere uses the concept of fast continuous prediction of the maximum maneuver capability from existing conditions rather than a stored-trajectory technique. In the method of display and control used, desired touchdown points are compared with the maximum range capability and heating or acceleration limits, so that a proper decision and choice of control inputs can be made by the pilot. A piloted fixed-base simulator was used to demonstrate the feasibility of the concept and to study its application to the control of lunar mission reentries and recoveries from aborts.
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
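The per-voxel combination described above can be sketched as follows: atlas priors multiplied by an intensity likelihood give posterior tissue probabilities, which then weight the class attenuation coefficients. All numeric values are placeholders, not the paper's trained atlas or likelihood model.

```python
# Sketch of the per-voxel combination: atlas priors times a Gaussian
# intensity likelihood give posterior tissue probabilities, which weight the
# class attenuation coefficients. All numbers are placeholders.
import numpy as np

classes = ["air", "soft", "bone"]
mu_class = np.array([0.0, 0.0975, 0.151])     # assumed LACs at 511 keV (1/cm)
means = np.array([50.0, 400.0, 900.0])        # assumed MR intensity means
sds = np.array([30.0, 120.0, 250.0])          # assumed MR intensity SDs

def pac_map(intensity, prior):
    """intensity: (V,) MR values; prior: (V, 3) atlas class probabilities."""
    like = np.exp(-0.5 * ((intensity[:, None] - means) / sds) ** 2) / sds
    post = prior * like
    post /= post.sum(axis=1, keepdims=True)   # normalize per voxel
    return post @ mu_class                    # continuous-valued mu per voxel

intensity = np.array([60.0, 420.0, 850.0])
prior = np.array([[0.80, 0.15, 0.05],
                  [0.10, 0.80, 0.10],
                  [0.05, 0.25, 0.70]])
print("voxel-wise mu (1/cm):", pac_map(intensity, prior).round(4))
```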
Optimization of bump and blowing to control the flow through a transonic compressor blade cascade
NASA Astrophysics Data System (ADS)
Mazaheri, K.; Khatibirad, S.
2018-03-01
Shock control bump (SCB) and blowing are two flow control methods, used here to improve the aerodynamic performance of transonic compressors. Both methods are applied to a NASA rotor 67 blade section and are optimized to minimize the total pressure loss. A continuous adjoint algorithm is used for multi-point optimization of an SCB to improve the aerodynamic performance of the rotor blade section, for a range of operational conditions around its design point. A multi-point and two single-point optimizations are performed in the design and off-design conditions. It is shown that the single-point optimized shapes have the best performance for their respective operating conditions, but the multi-point one has an overall better performance over the whole operating range. An analysis is given regarding how similarly both single- and multi-point optimized SCBs change the wave structure between blade sections, resulting in a more favorable flow pattern. Interactions of the SCB with the boundary layer and the wave structure, and its effects on the separation regions, are also studied. We have also introduced the concept of blowing for control of the shock-wave/boundary-layer interaction. A geometrical model is introduced, and the geometrical and physical parameters of blowing are optimized at the design point. The performance improvements of blowing are compared with the SCB. The physical interactions of SCB with the boundary layer and the shock wave are analyzed. The effects of SCB on the wave structure in the flow domain outside the boundary-layer region are investigated. It is shown that the effects of the blowing mechanism are very similar to the SCB.
Kish, G.R.; Stringer, C.E.; Stewart, M.T.; Rains, M.C.; Torres, A.E.
2010-01-01
Geochemical mass-balance (GMB) and conductivity mass-balance (CMB) methods for hydrograph separation were used to determine the contribution of base flow to total stormflow at two sites in the upper Hillsborough River watershed in west-central Florida from 2003-2005 and at one site in 2009. The chemical and isotopic composition of streamflow and precipitation was measured during selected local and frontal low- and high-intensity storm events and compared to the geochemical and isotopic composition of groundwater. Input for the GMB method included cation, anion, and stable isotope concentrations of surface water and groundwater, whereas input for the CMB method included continuous or point-sample measurement of specific conductance. The surface water is a calcium-bicarbonate type water, which closely resembles groundwater geochemically, indicating that much of the surface water in the upper Hillsborough River basin is derived from local groundwater discharge. This discharge into the Hillsborough River at State Road 39 and at Hillsborough River State Park becomes diluted by precipitation and runoff during the wet season, but retains the calcium-bicarbonate characteristics of Upper Floridan aquifer water. Field conditions limited the application of the GMB method to low-intensity storms but the CMB method was applied to both low-intensity and high-intensity storms. The average contribution of base flow to total discharge for all storms ranged from 31 to 100 percent, whereas the contribution of base flow to total discharge during peak discharge periods ranged from less than 10 percent to 100 percent. Although calcium, magnesium, and silica were consistent markers of Upper Floridan aquifer chemistry, their use in calculating base flow by the GMB method was limited because the frequency of point data collected in this study was not sufficient to capture the complete hydrograph from pre-event base-flow to post-event base-flow concentrations. In this study, pre-event water represented somewhat diluted groundwater. Streamflow conductivity integrates the concentrations of the major ions, and the logistics of acquiring specific conductance at frequent time intervals are less complicated than data collection, sample processing, shipment, and analysis of water samples in a laboratory. The acquisition of continuous specific conductance data reduces uncertainty associated with less-frequently collected geochemical point data.
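The CMB separation reduces to a two-component mixing equation; a small worked example is sketched below with placeholder end-member conductances (not values from the Hillsborough River study).

```python
# The standard two-component conductivity mass-balance (CMB) separation:
# Q_base = Q_total * (SC_stream - SC_runoff) / (SC_baseflow - SC_runoff).
# End-member conductances below are placeholders, not study values.
import numpy as np

sc_bf, sc_ro = 350.0, 60.0                   # assumed end-members (uS/cm)
q = np.array([2.0, 8.0, 15.0, 6.0])          # total discharge (m^3/s)
sc = np.array([340.0, 180.0, 120.0, 250.0])  # stream specific conductance

base_flow = q * (sc - sc_ro) / (sc_bf - sc_ro)
base_flow = np.clip(base_flow, 0.0, q)       # keep the fraction physical
print("base-flow fraction:", (base_flow / q).round(2))
```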
Surface fitting three-dimensional bodies
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1974-01-01
The geometry of general three-dimensional bodies is generated from coordinates of points in several cross sections. Since these points may not be smooth, they are divided into segments and general conic sections are curve fit in a least-squares sense to each segment of a cross section. The conic sections are then blended in the longitudinal direction by fitting parametric cubic-spline curves through coordinate points which define the conic sections in the cross-sectional planes. Both the cross-sectional and longitudinal curves may be modified by specifying particular segments as straight lines and slopes at selected points. Slopes may be continuous or discontinuous and finite or infinite. After a satisfactory surface fit has been obtained, cards may be punched with the data necessary to form a geometry subroutine package for use in other computer programs. At any position on the body, coordinates, slopes and second partial derivatives are calculated. The method is applied to a blunted 70 deg delta wing, and it was found to generate the geometry very well.
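A minimal sketch of the longitudinal blending step follows: a parametric cubic spline through the points that one cross-sectional conic fit contributes at successive stations, from which coordinates, slopes, and second derivatives can be queried; the sample coordinates are invented.

```python
# Minimal sketch of the longitudinal blending step: a parametric cubic spline
# through corresponding points of successive cross sections. Coordinates,
# slopes, and second derivatives can then be queried at any station, as the
# abstract describes. The sample coordinates are invented.
import numpy as np
from scipy.interpolate import CubicSpline

x_stations = np.array([0.0, 1.0, 2.5, 4.0, 6.0])   # longitudinal coordinates
y_pts = np.array([0.00, 0.35, 0.80, 1.05, 1.20])   # y of one conic-fit point
z_pts = np.array([0.00, 0.10, 0.22, 0.28, 0.30])   # z of the same point

spline_y = CubicSpline(x_stations, y_pts)
spline_z = CubicSpline(x_stations, z_pts)

x = 3.2  # query an intermediate station
print(f"y = {spline_y(x):.4f}, dy/dx = {spline_y(x, 1):.4f}, "
      f"d2y/dx2 = {spline_y(x, 2):.4f}, z = {spline_z(x):.4f}")
```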
Railway Tunnel Clearance Inspection Method Based on 3D Point Cloud from Mobile Laser Scanning
Zhou, Yuhui; Wang, Shaohua; Mei, Xi; Yin, Wangling; Lin, Chunfeng; Mao, Qingzhou
2017-01-01
Railway tunnel clearance is directly related to the safe operation of trains and the upgrading of freight capacity. As more and more railways are put into operation and train speeds continue to increase, railway tunnel clearance inspection must become more precise and efficient. In view of the problems of traditional tunnel clearance inspection methods, such as low density, slow speed, and heavy manual operation, this paper proposes a tunnel clearance inspection approach based on 3D point clouds obtained by a mobile laser scanning system (MLS). First, a dynamic coordinate system for railway tunnel clearance inspection is proposed. A rail line extraction algorithm based on 3D linear fitting is applied to the segmented point cloud to establish a dynamic clearance coordinate system. Second, a method is proposed to seamlessly connect all rail segments based on the railway clearance restrictions, forming a seamless rail alignment sequentially from the middle tunnel section to both ends. Finally, based on the rail alignment and the track clearance coordinate system, different types of clearance frames are introduced for intersection operations with the tunnel section to realize the tunnel clearance inspection. Taking the Shuanghekou Tunnel of the Chengdu–Kunming Railway as an example, when the clearance inspection is carried out by the method described herein, its precision can reach 0.03 m, and different types of clearances can be effectively calculated. This method has wide application prospects. PMID:28880232
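The 3D linear fitting used for rail extraction can be sketched as a principal-axis (SVD) line fit to a cluster of rail points, as below; the points are synthetic.

```python
# Sketch of a 3D linear fit for rail extraction: principal-axis (SVD) fit of
# a line to a cluster of rail points. The points here are synthetic.
import numpy as np

rng = np.random.default_rng(4)
t = rng.uniform(0.0, 10.0, 500)
direction_true = np.array([0.995, 0.0, 0.1])
pts = np.outer(t, direction_true) + rng.normal(0.0, 0.005, (500, 3))

centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)   # principal axes of the cluster
direction = vt[0]                          # first right-singular vector

print("fitted direction:", np.round(direction, 3))
print("line point (centroid):", np.round(centroid, 3))
```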
Image smoothing and enhancement via min/max curvature flow
NASA Astrophysics Data System (ADS)
Malladi, Ravikanth; Sethian, James A.
1996-03-01
We present a class of PDE-based algorithms suitable for a wide range of image processing applications. The techniques are applicable to both salt-and-pepper gray-scale noise and full-image continuous noise present in black and white images, gray-scale images, texture images and color images. At the core, the techniques rely on a level set formulation of evolving curves and surfaces and the viscosity in profile evolution. Essentially, the method consists of moving the isointensity contours in an image under curvature dependent speed laws to achieve enhancement. Compared to existing techniques, our approach has several distinct advantages. First, it contains only one enhancement parameter, which in most cases is automatically chosen. Second, the scheme automatically stops smoothing at some optimal point; continued application of the scheme produces no further change. Third, the method is one of the fastest possible schemes based on a curvature-controlled approach.
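The core update of such a scheme is the level-set curvature flow I_t = κ|∇I|; a plain (non-switched) version is sketched below on a synthetic noisy disc. The full method adds the min/max switching that provides the automatic stopping described above.

```python
# A plain level-set curvature-flow update, I_t = kappa * |grad I|, as the
# core ingredient of such schemes (the full method adds a min/max switch).
# The test image is synthetic salt-and-pepper noise on a disc.
import numpy as np

def curvature_flow(I, steps=50, dt=0.1, eps=1e-8):
    for _ in range(steps):
        Iy, Ix = np.gradient(I)
        Iyy, _ = np.gradient(Iy)
        Ixy, Ixx = np.gradient(Ix)
        # kappa * |grad I| = (Ixx*Iy^2 - 2*Ix*Iy*Ixy + Iyy*Ix^2) / |grad I|^2
        num = Ixx * Iy**2 - 2.0 * Ix * Iy * Ixy + Iyy * Ix**2
        I = I + dt * num / (Ix**2 + Iy**2 + eps)
    return I

yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32)**2 + (yy - 32)**2 < 15**2).astype(float)
rng = np.random.default_rng(5)
noisy = np.clip(img + (rng.random(img.shape) < 0.05), 0.0, 1.0)
smoothed = curvature_flow(noisy)
print("noise mass outside disc before/after:",
      round(float(noisy[img == 0].sum()), 1),
      round(float(smoothed[img == 0].sum()), 1))
```

Isolated noise specks have high curvature and shrink quickly under this flow, while large smooth contours move little, which is the intuition behind the enhancement behaviour described in the abstract.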
Rogers, Thomas A.
2011-01-01
The thermal deactivation of TEM-1 β-lactamase was examined using two experimental techniques: a series of isothermal batch assays and a single, continuous, non-isothermal assay in an enzyme membrane reactor (EMR). The isothermal batch-mode technique was coupled with the three-state “Equilibrium Model” of enzyme deactivation, while the results of the EMR experiment were fitted to a four-state “molten globule model”. The two methods both led to the conclusions that the thermal deactivation of TEM-1 β-lactamase does not follow the Lumry-Eyring model and that the Teq of the enzyme (the point at which active and inactive states are present in equal amounts due to thermodynamic equilibrium) is at least 10 °C from the Tm (melting temperature), contrary to the idea that the true temperature optimum of a biocatalyst is necessarily close to the melting temperature. PMID:22039393
NASA Astrophysics Data System (ADS)
Vattré, A.; Devincre, B.; Feyel, F.; Gatti, R.; Groh, S.; Jamond, O.; Roos, A.
2014-02-01
A unified model coupling 3D dislocation dynamics (DD) simulations with the finite element (FE) method is revisited. The so-called Discrete-Continuous Model (DCM) aims to predict plastic flow at the (sub-)micron length scale of materials with complex boundary conditions. The evolution of the dislocation microstructure and the short-range dislocation-dislocation interactions are calculated with a DD code. The long-range mechanical fields due to the dislocations are calculated by a FE code, taking into account the boundary conditions. The coupling procedure is based on eigenstrain theory, and the precise manner in which the plastic slip, i.e. the dislocation glide as calculated by the DD code, is transferred to the integration points of the FE mesh is described in full detail. Several test cases are presented, and the DCM is applied to plastic flow in a single-crystal Nickel-based superalloy.
Second ventilatory threshold from heart-rate variability: valid when the upper body is involved?
Mourot, Laurent; Fabre, Nicolas; Savoldelli, Aldo; Schena, Federico
2014-07-01
To determine the most accurate method based on spectral analysis of heart-rate variability (SA-HRV) during an incremental and continuous maximal test involving the upper body, the authors tested 4 different methods to obtain the heart rate (HR) at the second ventilatory threshold (VT2). Sixteen ski mountaineers (mean ± SD; age 25 ± 3 y, height 177 ± 8 cm, mass 69 ± 10 kg) performed a roller-ski test on a treadmill. Respiratory variables and HR were continuously recorded, and the 4 SA-HRV methods were compared with the gas-exchange method through Bland and Altman analyses. The best method was the one based on a time-varying spectral analysis with high frequency ranging from 0.15 Hz to a cutoff point relative to the individual's respiratory sinus arrhythmia. The HR values were significantly correlated (r² = 0.903), with a mean HR difference with the respiratory method of 0.1 ± 3.0 beats/min and low limits of agreement (around -6/+6 beats/min). The 3 other methods led to larger errors and lower agreement (up to 5 beats/min and around -23/+20 beats/min). It is possible to accurately determine VT2 with an HR monitor during an incremental test involving the upper body if the appropriate HRV method is used.
34 CFR 647.21 - What selection criteria does the Secretary use?
Code of Federal Regulations, 2014 CFR
2014-07-01
... POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION RONALD E. MCNAIR POSTBACCALAUREATE ACHIEVEMENT PROGRAM How Does... from completing baccalaureate programs and continuing to postbaccalaureate programs; and demonstrates... program. (3) (2 points) Continued enrollment in graduate study. (4) (2 points) Doctoral degree attainment...
34 CFR 647.21 - What selection criteria does the Secretary use?
Code of Federal Regulations, 2013 CFR
2013-07-01
... POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION RONALD E. MCNAIR POSTBACCALAUREATE ACHIEVEMENT PROGRAM How Does... from completing baccalaureate programs and continuing to postbaccalaureate programs; and demonstrates... program. (3) (2 points) Continued enrollment in graduate study. (4) (2 points) Doctoral degree attainment...
32 CFR 636.20 - Point system procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 4 2010-07-01 2010-07-01 true Point system procedures. 636.20 Section 636.20 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) LAW ENFORCEMENT AND CRIMINAL INVESTIGATIONS MOTOR VEHICLE TRAFFIC SUPERVISION (SPECIFIC INSTALLATIONS) Fort Stewart, Georgia...
An application of the Braunbeck method to the Maggi-Rubinowicz field representation
NASA Technical Reports Server (NTRS)
Meneghini, R.
1982-01-01
The Braunbek method is applied to the generalized vector potential associated with the Maggi-Rubinowicz representation. Under certain approximations, an asymptotic evaluation of the vector potential is obtained. For observation points away from caustics or shadow boundaries, the field derived from this quantity is the same as that determined from the geometrical theory of diffraction on a singly diffracted edge ray. An evaluation of the field for the simple case of a plane wave normally incident on a circular aperture is presented, showing that the field predicted by the Maggi-Rubinowicz theory is continuous across the shadow boundary.
Computer numeric control generation of toric surfaces
NASA Astrophysics Data System (ADS)
Bradley, Norman D.; Ball, Gary A.; Keller, John R.
1994-05-01
Until recently, the manufacture of toric ophthalmic lenses relied largely upon expensive, manual techniques for generation and polishing. Recent gains in computer numeric control (CNC) technology and tooling enable lens designers to employ single-point diamond fly-cutting methods in the production of torics. Fly-cutting methods continue to improve, significantly expanding lens design possibilities while lowering production costs. Advantages of CNC fly cutting include precise control of surface geometry, rapid production with high throughput, and high-quality lens surface finishes requiring minimal polishing. As accessibility and affordability increase within the ophthalmic market, torics promise to dramatically expand lens design choices available to consumers.
Some New Approaches to Multivariate Probability Distributions.
1986-12-01
NASA Astrophysics Data System (ADS)
Curtright, Thomas
2011-04-01
Continuous interpolates are described for classical dynamical systems defined by discrete time-steps. Functional conjugation methods play a central role in obtaining the interpolations. The interpolates correspond to particle motion in an underlying potential, V. Typically, V has no lower bound and can exhibit switchbacks wherein V changes form when turning points are encountered by the particle. The Beverton-Holt and Skellam models of population dynamics, and particular cases of the logistic map are used to illustrate these features.
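A concrete check of this idea for the Beverton-Holt model is sketched below, under an assumed common parameterization of the map: its closed-form flow reproduces the discrete iterates at integer times and interpolates continuously in between.

```python
# Checking the continuous interpolate of the Beverton-Holt map. With
# n_{t+1} = R*n / (1 + n/M) (a common parameterization, assumed here), the
# closed-form flow n(t) = K*n0 / (n0 + (K - n0) * R**(-t)), K = (R - 1)*M,
# reproduces the discrete iterates at integer t and interpolates in between.
import numpy as np

R, M, n0 = 2.0, 100.0, 5.0
K = (R - 1.0) * M

def step(n):
    """One discrete time-step of the Beverton-Holt map."""
    return R * n / (1.0 + n / M)

def flow(t):
    """Continuous interpolate evaluated at (possibly non-integer) time t."""
    return K * n0 / (n0 + (K - n0) * R**(-t))

n = n0
for t in range(1, 6):
    n = step(n)
    print(f"t = {t}: iterate = {n:.6f}, interpolate = {flow(t):.6f}")
print(f"t = 2.5 (between steps): {flow(2.5):.6f}")
```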
Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)
NASA Astrophysics Data System (ADS)
Li, X. R.; Wang, X.
2016-03-01
When the genetic algorithm is used to solve the problem of too-short-arc (TSA) orbit determination, the methods for outlier editing are no longer applicable, because the computing process of the genetic algorithm differs from that of the classical method. In the genetic algorithm, robust estimation is achieved by using different loss functions in the fitness function, which solves the outlier problem of TSAs. Compared with the classical method, the application of loss functions in the genetic algorithm is greatly simplified. A comparison of the results of different loss functions makes clear that the least median of squares and least trimmed squares methods can greatly improve the robustness of TSA determination, and have a high breakdown point.
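A toy sketch of swapping a robust loss into a GA fitness function follows: a generic GA fits a line to data with gross outliers by minimizing the median of squared residuals; this is an illustration, not the orbit-determination code.

```python
# Toy sketch of a robust loss inside a GA fitness function: fit a line to
# data containing outliers by minimizing the *median* of squared residuals
# (least median of squares). Generic GA, not the paper's orbit code.
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0.0, 10.0, 60)
y = 1.5 * x + 2.0 + rng.normal(0.0, 0.2, x.size)
y[::10] += rng.uniform(10.0, 30.0, 6)               # gross outliers

def fitness(params):                                 # least median of squares
    a, b = params
    return np.median((y - (a * x + b)) ** 2)

pop = rng.uniform(-5.0, 5.0, (100, 2))               # initial population
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    order = np.argsort(scores)
    parents = pop[order[:20]]                        # truncation selection
    children = parents[rng.integers(0, 20, (80,))] + rng.normal(0.0, 0.1, (80, 2))
    pop = np.vstack([parents, children])             # elitism plus mutation

best = pop[np.argmin([fitness(p) for p in pop])]
print("robust fit slope/intercept:", best.round(3))
```

Because the median ignores the largest residuals, up to half the points can be corrupted before the fit breaks down, which is the high breakdown point the abstract refers to.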
47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations. Private...
Continuous Firefly Algorithm for Optimal Tuning of PID Controller in AVR System
NASA Astrophysics Data System (ADS)
Bendjeghaba, Omar
2014-01-01
This paper presents a tuning approach based on the continuous firefly algorithm (CFA) to obtain the proportional-integral-derivative (PID) controller parameters of an automatic voltage regulator (AVR) system. In the tuning process, the CFA is iterated toward optimal or near-optimal PID controller parameters, with the main goal of improving the AVR step-response characteristics. The conducted simulations show the effectiveness and efficiency of the proposed approach, which also improves the dynamics of the AVR system. Compared with particle swarm optimization (PSO), the new CFA tuning method yields better control-system performance in terms of time-domain specifications and set-point tracking.
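The core CFA update can be sketched as below. The objective here is a placeholder quadratic standing in for a simulated AVR step-response criterion (e.g. overshoot or ITAE), and the algorithm parameters (beta0, gamma, alpha) and the search box are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(gains):
    """Placeholder objective; in the paper's setting this would be a step
    response criterion (e.g. ITAE/overshoot) of the simulated AVR loop."""
    kp, ki, kd = gains
    return (kp - 0.65) ** 2 + (ki - 0.5) ** 2 + (kd - 0.25) ** 2

n, dim, iters = 20, 3, 100
beta0, gamma, alpha = 1.0, 1.0, 0.05
lo, hi = 0.0, 2.0                      # search box for (Kp, Ki, Kd)
x = rng.uniform(lo, hi, (n, dim))

for _ in range(iters):
    f = np.array([cost(xi) for xi in x])
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:            # firefly i moves toward brighter j
                r2 = np.sum((x[i] - x[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                x[i] += beta * (x[j] - x[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                x[i] = np.clip(x[i], lo, hi)

best = min(x, key=cost)
print("tuned (Kp, Ki, Kd):", np.round(best, 3))
```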
NASA Astrophysics Data System (ADS)
Marcozzi, Michael D.
2008-12-01
We consider theoretical and approximation aspects of the stochastic optimal control of ultradiffusion processes in the context of a prototype model for the selling price of a European call option. Within a continuous-time framework, the dynamic management of a portfolio of assets is effected through continuous or point control, activation costs, and phase delay. The performance index is derived from the unique weak variational solution to the ultraparabolic Hamilton-Jacobi equation; the value function is the optimal realization of the performance index relative to all feasible portfolios. An approximation procedure based upon a temporal box scheme/finite element method is analyzed; numerical examples are presented in order to demonstrate the viability of the approach.
40 CFR 141.22 - Turbidity sampling and analytical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... requirements. 141.22 Section 141.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements... suppliers of water for both community and non-community water systems at a representative entry point(s) to...
Steele, James; Raubold, Kristin; Kemmler, Wolfgang; Fisher, James; Gentil, Paulo; Giessing, Jürgen
2017-01-01
The present study examined the progressive implementation of a high effort resistance training (RT) approach in older adults over 6 months and through a 6-month follow-up on strength, body composition, function, and wellbeing of older adults. Twenty-three older adults (aged 61 to 80 years) completed a 6-month supervised RT intervention applying progressive introduction of higher effort set end points. After completion of the intervention participants could choose to continue performing RT unsupervised until 6-month follow-up. Strength, body composition, function, and wellbeing all significantly improved over the intervention. Over the follow-up, body composition changes reverted to baseline values, strength was reduced though it remained significantly higher than baseline, and wellbeing outcomes were mostly maintained. Comparisons over the follow-up between those who did and those who did not continue with RT revealed no significant differences for changes in any outcome measure. Supervised RT employing progressive application of high effort set end points is well tolerated and effective in improving strength, body composition, function, and wellbeing in older adults. However, whether participants continued, or did not, with RT unsupervised at follow-up had no effect on outcomes perhaps due to reduced effort employed during unsupervised RT.
NASA Astrophysics Data System (ADS)
Li, Weiyao; Huang, Guanhua; Xiong, Yunwu
2016-04-01
The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and the physical and chemical interactions between groundwater and the porous medium make solute transport in the medium more complicated still. An appropriate method for describing this complexity is essential when studying solute transport and transformation in porous media. Information entropy can measure uncertainty and disorder; we therefore used information entropy theory to investigate the complexity of solute transport in heterogeneous porous media and to explore the connection between entropy and that complexity. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated by transition probability. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source, and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased with the complexity of the solute transport process. For the point sources, the one-dimensional entropy of solute concentration first increased and then decreased along the X and Y directions. As time increased, the entropy peak value remained essentially unchanged, while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of the solute concentration increased, which increased the second-order spatial moment and the two-dimensional entropy. The information entropy of the line sources was higher than that of the point sources, and the entropy obtained from continuous input was higher than that from instantaneous input. As the average lithofacies length increased, the continuity of the medium increased, the complexity of flow and solute transport weakened, and the corresponding information entropy decreased. The longitudinal macrodispersivity declined slightly at early times and then rose. The spatial and temporal distribution of the solute had a significant impact on the information entropy, which could reflect changes in the solute distribution. Information entropy thus appears to be a tool for characterizing the spatial and temporal complexity of solute migration and provides a reference for future research.
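The entropy measure used here is, in essence, the Shannon entropy of the normalized concentration field. The following minimal sketch (synthetic 1D Gaussian plumes with illustrative parameters, not the paper's simulated fields) shows the computation and the expected trend that a more dispersed, more complex plume scores higher.

```python
import numpy as np

def solute_entropy(conc):
    """Shannon entropy of a (1D or 2D) solute concentration field.

    The field is normalized to a probability distribution p_i = c_i / sum(c);
    H = -sum p_i ln p_i, so a spread-out (more complex) plume scores higher.
    """
    c = np.asarray(conc, dtype=float).ravel()
    p = c[c > 0] / c.sum()
    return -(p * np.log(p)).sum()

x = np.linspace(-5, 5, 201)
early = np.exp(-x**2 / 0.5)   # compact plume shortly after injection
late = np.exp(-x**2 / 5.0)    # dispersed plume at a later time
print(solute_entropy(early) < solute_entropy(late))  # True
```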
Adaptive enhanced sampling by force-biasing using neural networks
NASA Astrophysics Data System (ADS)
Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.
2018-04-01
A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.
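As an illustration of the idea of replacing discrete force estimates with a smooth regressor, the sketch below fits a small feed-forward network to noisy generalized-force samples on a 1D collective variable and queries it at arbitrary points. It uses scikit-learn's MLPRegressor with ordinary L2 regularization as a stand-in; the paper's self-regularizing network, its inputs, and its sampled force data are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# noisy generalized-force estimates accumulated on a sparsely sampled CV grid
cv = rng.uniform(-np.pi, np.pi, 80)[:, None]
force = -np.sin(cv).ravel() + rng.normal(0, 0.1, 80)  # stand-in free-energy gradient

# a regularized network yields a smooth, continuous force estimate, usable
# even between (and slightly beyond) visited grid points
net = MLPRegressor(hidden_layer_sizes=(32, 32), alpha=1e-3,
                   max_iter=5000, random_state=0).fit(cv, force)

dense = np.linspace(-np.pi, np.pi, 7)[:, None]
print(np.round(net.predict(dense), 2))   # bias force queried at arbitrary CV values
```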
NASA Astrophysics Data System (ADS)
Singh, Arvind; Singh, Upendra Kumar
2017-02-01
This paper deals with the application of continuous wavelet transform (CWT) and Euler deconvolution methods to estimate source depths from magnetic anomalies. These methods are utilized mainly to address the fundamental issues of mapping the major coal seam and locating tectonic lineaments. The main aim of the study is to locate and characterize the source of the magnetic field by transferring the data into an auxiliary space via the CWT. The method has been tested on several synthetic source anomalies and finally applied to magnetic field data from the Jharia coalfield, India. Using the magnetic field data, the mean depths of the causative sources indicate differing lithospheric depths over the study region. It is also inferred that there are two faults, namely the northern boundary fault and the southern boundary fault, oriented in the northeastern and southeastern directions, respectively. Moreover, the central part of the region is more faulted and folded than the other parts and has a sediment thickness of about 2.4 km. The methods give the mean depth of the causative sources without any a priori information, which can be used as an initial model in any inversion algorithm.
Rigge, Matthew B.; Gass, Leila; Homer, Collin G.; Xian, George Z.
2017-10-26
The National Land Cover Database (NLCD) provides thematic land cover and land cover change data at 30-meter spatial resolution for the United States. Although the NLCD is considered to be the leading thematic land cover/land use product and overall classification accuracy across the NLCD is high, performance and consistency in the vast shrub and grasslands of the Western United States is lower than desired. To address these issues and fulfill the needs of stakeholders requiring more accurate rangeland data, the USGS has developed a method to quantify these areas in terms of the continuous cover of several cover components. These components include the cover of shrub, sagebrush (Artemisia spp), big sagebrush (Artemisia tridentata spp.), herbaceous, annual herbaceous, litter, and bare ground, and shrub and sagebrush height. To produce maps of component cover, we collected field data that were then associated with spectral values in WorldView-2 and Landsat imagery using regression tree models. The current report outlines the procedures and results of converting these continuous cover components to three thematic NLCD classes: barren, shrubland, and grassland. To accomplish this, we developed a series of indices and conditional models using continuous cover of shrub, bare ground, herbaceous, and litter as inputs. The continuous cover data are currently available for two large regions in the Western United States. Accuracy of the “cross-walked” product was assessed relative to that of NLCD 2011 at independent validation points (n=787) across these two regions. Overall thematic accuracy of the “cross-walked” product was 0.70, compared to 0.63 for NLCD 2011. The kappa value was considerably higher for the “cross-walked” product at 0.41 compared to 0.28 for NLCD 2011. Accuracy was also evaluated relative to the values of training points (n=75,000) used in the development of the continuous cover components. Again, the “cross-walked” product outperformed NLCD 2011, with an overall accuracy of 0.81, compared to 0.66 for NLCD 2011. These results demonstrated that our continuous cover predictions and models were successful in increasing thematic classification accuracy in Western United States shrublands. We plan to directly use the “cross-walked” product, where available, in the NLCD 2016 product.
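The conditional-model step, mapping the continuous cover components to the three thematic classes, might look like the toy rule set below; the thresholds are invented for illustration and are not the USGS rules or indices.

```python
def crosswalk(shrub, bare, herb, litter):
    """Toy conditional model mapping continuous cover (%) to a thematic class.

    The thresholds are illustrative placeholders, not the USGS rule set.
    """
    vegetated = shrub + herb + litter
    if bare >= 80 and vegetated < 20:
        return "barren"
    if shrub >= herb:
        return "shrubland"
    return "grassland"

print(crosswalk(shrub=25, bare=30, herb=10, litter=5))   # shrubland
print(crosswalk(shrub=5, bare=20, herb=45, litter=10))   # grassland
print(crosswalk(shrub=2, bare=90, herb=3, litter=2))     # barren
```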
NASA Astrophysics Data System (ADS)
Pan, Hao; Qu, Xinghua; Shi, Chunzhao; Zhang, Fumin; Li, Yating
2018-06-01
The non-uniform interval resampling method has been widely used in frequency modulated continuous wave (FMCW) laser ranging. In large-bandwidth, long-distance measurements, the range peak is deteriorated by the fiber dispersion mismatch. In this study, we analyze the frequency-sampling error caused by the mismatch and measure it using the spectroscopy of a molecular frequency reference line. By replacing adjacent points and applying a spline interpolation technique, the sampling errors can be eliminated. The results demonstrate that the proposed method is suitable for resolution-enhanced, high-precision measurement. Moreover, using the proposed method, we achieved an absolute distance precision better than 45 μm over 8 m.
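The correction step, re-interpolating the beat signal onto a uniform optical-frequency grid with a spline before the FFT, can be sketched as follows (assuming SciPy). The signal, the exaggerated dispersion-mismatch error, and all parameters are synthetic placeholders, not the paper's measured data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# simulated FMCW beat signal sampled on a slightly non-uniform optical
# frequency axis (the dispersion-mismatch error is exaggerated for clarity)
f_ideal = np.linspace(0.0, 1.0, 2000)                 # uniform frequency grid
f_err = f_ideal + 2e-4 * np.sin(8 * np.pi * f_ideal)  # mismatched sampling
beat = np.cos(2 * np.pi * 300 * f_err)                # tone encodes the distance

# resample onto the uniform grid with a cubic spline before the FFT,
# which restores a sharp range peak
beat_fixed = CubicSpline(f_err, beat)(f_ideal)
peak = np.argmax(np.abs(np.fft.rfft(beat_fixed * np.hanning(beat.size))))
print("range-peak bin:", peak)   # ~300, as encoded above
```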
A simple analytical aerodynamic model of Langley Winged-Cone Aerospace Plane concept
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.
1994-01-01
A simple three-DOF analytical aerodynamic model of the Langley Winged-Cone Aerospace Plane concept is presented in a form suitable for simulation, trajectory optimization, and guidance and control studies. The analytical model is especially suitable for methods based on variational calculus. Analytical expressions are presented for lift, drag, and pitching moment coefficients from subsonic to hypersonic Mach numbers and angles of attack up to +/- 20 deg. This analytical model has break points at Mach numbers of 1.0, 1.4, 4.0, and 6.0. Across these Mach number break points, the lift, drag, and pitching moment coefficients are made continuous but their derivatives are not. There are no break points in angle of attack. The effect of control surface deflection is not considered. The present analytical model compares well with the APAS calculations and wind tunnel test data for most angles of attack and Mach numbers.
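A minimal sketch of the C0-continuity property described above: values interpolated linearly in Mach are continuous at the break points while their derivatives jump there. The coefficient values below are placeholders, not the Winged-Cone model's data.

```python
import numpy as np

# illustrative lift-coefficient table at a fixed angle of attack; the
# numbers are made up, only the Mach break points follow the text
mach_breaks = [0.3, 1.0, 1.4, 4.0, 6.0, 10.0]
cl_table = [0.42, 0.55, 0.50, 0.30, 0.24, 0.20]

def cl(mach):
    """C0-continuous lift coefficient: linear pieces meet at the break
    points, so CL is continuous but dCL/dM jumps there."""
    return np.interp(mach, mach_breaks, cl_table)

for m in (0.9, 1.0, 1.1, 4.0, 5.0):
    print(m, round(float(cl(m)), 3))
```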
Rigorous high-precision enclosures of fixed points and their invariant manifolds
NASA Astrophysics Data System (ADS)
Wittig, Alexander N.
The well-established concept of Taylor Models is introduced, which offers highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval data type are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near-standard Hénon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these enclosures are the largest verified enclosures of manifolds in the Lorenz system in existence.
Fourier transform methods in local gravity modeling
NASA Technical Reports Server (NTRS)
Harrison, J. C.; Dickinson, M.
1989-01-01
New algorithms were derived for computing terrain corrections, all components of the attraction of the topography at the topographic surface, and the gradients of these attractions. These algorithms utilize fast Fourier transforms but, in contrast to methods currently in use, all divergences of the integrals are removed during the analysis. Sequential methods employing a smooth intermediate reference surface were developed to avoid the very large transforms necessary when making computations at high resolution over a wide area. A new method for the numerical solution of Molodensky's problem was developed to mitigate the convergence difficulties that occur at short wavelengths with methods based on a Taylor series expansion. A trial field on a level surface is continued analytically to the topographic surface and compared with that predicted from gravity observations. The difference is used to compute a correction to the trial field and the process is iterated. Special techniques are employed to speed convergence and prevent oscillations. Three different spectral methods for fitting a point-mass set to a gravity field given on a regular grid at constant elevation are described. Two of the methods differ in the way that the spectrum of the point-mass set, which extends to infinite wave number, is matched to that of the gravity field, which is band-limited. The third method is essentially a space-domain technique in which Fourier methods are used to solve a set of simultaneous equations.
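The analytic-continuation step lends itself to a short FFT sketch. The snippet below continues a gridded anomaly upward by multiplying its spectrum by exp(-|k| dz); this is the textbook wavenumber-domain operator, not the authors' full iterative Molodensky scheme, and the grid values are synthetic.

```python
import numpy as np

def continue_field(g, dx, dz):
    """Continue a gridded gravity anomaly upward by dz (> 0) with the FFT.

    Multiplies the spectrum by exp(-|k| dz); dz < 0 (downward continuation,
    as in the iterative scheme described above) amplifies short wavelengths
    and in practice needs the kind of stabilization the text mentions.
    """
    ny, nx = g.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))
    return np.real(np.fft.ifft2(np.fft.fft2(g) * np.exp(-k * dz)))

g = np.zeros((64, 64)); g[32, 32] = 1.0       # impulse "anomaly"
print(continue_field(g, dx=100.0, dz=200.0)[32, 30:35].round(4))
```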
Spacecraft formation keeping near the libration points of the Sun-Earth/Moon system
NASA Astrophysics Data System (ADS)
Marchand, Belinda G.
Multi-spacecraft formations, evolving near the vicinity of the libration points of the Sun-Earth/Moon system, have drawn increased interest for a variety of applications. This is particularly true for space-based interferometry missions such as the Terrestrial Planet Finder (TPF) and the Micro Arcsecond X-Ray Imaging Mission (MAXIM). Recent studies in formation flight have focused primarily on the control of formations that evolve in the immediate vicinity of the Earth. However, the unique dynamical structure near the libration points requires that the effectiveness and feasibility of these methods be re-examined. The present study is divided into two main topics. First, a dynamical systems approach is employed to develop a better understanding of the natural uncontrolled formation dynamics in this region of space. The focus is on formations that evolve near halo orbits and Lissajous trajectories, near the L1 and L2 libration points of the Sun-Earth/Moon system. This leads to the development of a Floquet controller designed to simplify the process of identifying naturally existing formations as well as the associated stable manifolds for deployment. The initial analysis is presented in the Circular Restricted Three-Body Problem, but the results are later transitioned into the more complete Ephemeris model. The next subject of interest in this investigation is non-natural formations, that is, formations that are not consistent with the natural dynamical flow near the libration points. Mathematically, precise formation keeping of a given nominal configuration requires continuous control. Hence, a detailed analysis is presented to contrast the effectiveness and issues associated with linear optimal control and feedback linearization methods. Of course, continuous operation of the thrusters may not represent a feasible option for a particular mission. If discrete formation keeping is implemented, however, the formation keeping goal will be subject to increased tracking errors relative to the nominal path. With this in mind, the final phase of the analysis presented here is centered on discrete formation keeping. The initial analysis is devoted to both linear state and radial targeters. The results from these two methodologies are later employed as a starting solution for an optimal impulsive control algorithm.
C-5M Super Galaxy Utilization with Joint Precision Airdrop System
2012-03-22
System notes fragments: FireFly (900-2,200): steerable parafoil; Screamer (500-2,200): steerable parafoil with additional chutes to slow touchdown. ...This initial feasible solution provides the Nonlinear Program (NLP) algorithm a starting point within the feasible region to continue its calculations.
Methods for accurate cold-chain temperature monitoring using digital data-logger thermometers
NASA Astrophysics Data System (ADS)
Chojnacky, M. J.; Miller, W. M.; Strouse, G. F.
2013-09-01
Complete and accurate records of vaccine temperature history are vital to preserving drug potency and patient safety. However, previously published vaccine storage and handling guidelines have failed to indicate a need for continuous temperature monitoring in vaccine storage refrigerators. We evaluated the performance of seven digital data logger models as candidates for continuous temperature monitoring of refrigerated vaccines, based on the following criteria: out-of-box performance and compliance with manufacturer accuracy specifications over the range of use; measurement stability over extended, continuous use; proper setup in a vaccine storage refrigerator so that measurements reflect liquid vaccine temperatures; and practical methods for end-user validation and establishing metrological traceability. Data loggers were tested using ice melting point checks and by comparison to calibrated thermocouples to characterize performance over 0 °C to 10 °C. We also monitored logger performance in a study designed to replicate the range of vaccine storage and environmental conditions encountered at provider offices. Based on the results of this study, the Centers for Disease Control released new guidelines on proper methods for storage, handling, and temperature monitoring of vaccines for participants in its federally-funded Vaccines for Children Program. Improved temperature monitoring practices will ultimately decrease waste from damaged vaccines, improve consumer confidence, and increase effective inoculation rates.
40 CFR 141.100 - Criteria and procedures for public water systems using point-of-entry devices.
Code of Federal Regulations, 2010 CFR
2010-07-01
... water systems using point-of-entry devices. 141.100 Section 141.100 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER... meet all national primary drinking water regulations and would be of acceptable quality similar to...
40 CFR 279.32 - Used oil aggregation points owned by the generator.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 28 2013-07-01 2013-07-01 false Used oil aggregation points owned by the generator. 279.32 Section 279.32 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR THE MANAGEMENT OF USED OIL Standards for Used Oil...
32 CFR 242.6 - Central point of contact.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 2 2013-07-01 2013-07-01 false Central point of contact. 242.6 Section 242.6 National Defense Department of Defense (Continued) OFFICE OF THE SECRETARY OF DEFENSE (CONTINUED) MISCELLANEOUS ADMISSION POLICIES AND PROCEDURES FOR THE SCHOOL OF MEDICINE, UNIFORMED SERVICES UNIVERSITY OF THE...
The Selection of Computed Tomography Scanning Schemes for Lengthy Symmetric Objects
NASA Astrophysics Data System (ADS)
Trinh, V. B.; Zhong, Y.; Osipov, S. P.
2017-04-01
The article describes the basic computed tomography scanning schemes for lengthy symmetric objects: continuous (discrete) rotation with discrete linear movement; continuous (discrete) rotation with discrete linear movement to acquire a 2D projection; continuous (discrete) linear movement with discrete rotation to acquire a one-dimensional projection; and continuous (discrete) rotation to acquire a 2D projection. A general method to calculate the scanning time is discussed in detail. A comparison principle must be established to select a scanning scheme, because the input data are the same for all scanning schemes: the maximum energy of the X-ray radiation; the power of the X-ray radiation source; the angle of the X-ray cone beam; the transverse dimension of a single detector; the specified resolution; and the maximum time needed to form one point of the original image, which determines the number of registered photons. The possibilities of the proposed method for comparing scanning schemes are demonstrated. The scanned object was a cylindrical object with a mass thickness of 4 g/cm2, an effective atomic number of 15, and a length of 1300 mm. The scanning-time data are analyzed and conclusions are drawn about the efficiency of the scanning schemes; the productivity of all schemes is examined and the most effective one is selected.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorenberg, Eric J., E-mail: eric.dorenberg@rikshospitalet.no; Jakobsen, Jarl A.; Brabrand, Knut
Purpose. To evaluate the feasibility of using contrast-enhanced ultrasound (CEUS) during uterine artery embolization (UAE) in order to define the correct end-point of embolization with complete devascularization of all fibroids. Methods. In this prospective study of 10 consecutive women undergoing UAE, CEUS was performed in the angiographic suite during embolization. When the angiographic end-point, defined as the 'pruned-tree' appearance of the uterine arteries, was reached, CEUS was performed while the angiographic catheters to both uterine arteries were kept in place. The decision whether or not to continue the embolization was based on the findings at CEUS. The results of CEUS were compared with those of contrast-enhanced magnetic resonance imaging (MRI) 1 day as well as 3 months following UAE. Results. CEUS was successfully performed in all women. In 4 cases injection of particles was continued based on the findings at CEUS despite angiographically complete embolization. CEUS imaging at completion of UAE correlated well with the findings at MRI. Conclusion. The use of CEUS during UAE is feasible and may increase the quality of UAE.
End-point detection in potentiometric titration by continuous wavelet transform.
Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W
2009-10-15
The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for precise detection of the end-point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or type of the analyte and/or the shape of the titration curve. Signal imperfection, as well as random noise or spikes, has no influence on the operation of the procedure. The optimization of the new algorithm was done using simulated curves, and then experimental data were considered. In the case of well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms. In the case of noisy or badly shaped curves, however, the presented approach works well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. The proposed algorithm may therefore be useful in the interpretation of experimental data and in the automation of typical titration analysis, especially when random noise interferes with the analytical signal.
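A minimal sketch of wavelet-based end-point detection, using a first-derivative-of-Gaussian kernel as a stand-in mother wavelet (the paper constructs its own dedicated wavelet): the magnitude of the single-scale CWT response peaks at the inflection of a synthetic, noisy titration curve.

```python
import numpy as np

def gauss_deriv(points, a):
    """First derivative of a Gaussian, used as a stand-in mother wavelet."""
    x = np.arange(points) - (points - 1) / 2.0
    return -x * np.exp(-x**2 / (2 * a**2))

# noisy sigmoid-shaped titration curve: potential vs. titrant volume
v = np.linspace(0.0, 20.0, 400)
emf = (300.0 / (1.0 + np.exp(-(v - 12.3) * 4.0))
       + np.random.default_rng(2).normal(0, 2.0, v.size))

# CWT response at one smoothing scale; edge-padding keeps the (odd,
# zero-mean) kernel quiet on the flat ends of the curve
scale = 15
w = gauss_deriv(8 * scale, scale)
half = w.size // 2
padded = np.pad(emf, half, mode="edge")
resp = np.convolve(padded, w, mode="same")[half:-half]

# the response magnitude peaks at the inflection, i.e. the end-point
print("end-point near v =", round(v[np.argmax(np.abs(resp))], 2), "mL")
```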
Four-State Continuous-Variable Quantum Key Distribution with Photon Subtraction
NASA Astrophysics Data System (ADS)
Li, Fei; Wang, Yijun; Liao, Qin; Guo, Ying
2018-06-01
Four-state continuous-variable quantum key distribution (CVQKD) is a discretely modulated CVQKD protocol which generates four nonorthogonal coherent states and exploits the sign of the measured quadrature of each state to encode information, rather than using the quadrature \hat{x} or \hat{p} itself. It has been proven that four-state CVQKD is more suitable than Gaussian-modulated CVQKD in terms of transmission distance. In this paper, we propose an improved four-state CVQKD using a non-Gaussian operation, photon subtraction. A suitable photon-subtraction operation can be exploited to improve the maximal transmission distance of CVQKD in point-to-point quantum communication, since it provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only lengthens the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that the proposed scheme can lengthen the maximum transmission distance. Furthermore, by taking the finite-size effect into account we obtain a tighter bound on the secure distance, which is more practical than that obtained in the asymptotic limit.
Schrangl, Patrick; Reiterer, Florian; Heinemann, Lutz; Freckmann, Guido; Del Re, Luigi
2018-05-18
Systems for continuous glucose monitoring (CGM) are evolving quickly, and the data obtained are expected to become the basis for clinical decisions for many patients with diabetes in the near future. However, this requires that their analytical accuracy is sufficient. This accuracy is usually determined with clinical studies by comparing the data obtained by the given CGM system with blood glucose (BG) point measurements made with a so-called reference method. The latter is assumed to indicate the correct value of the target quantity. Unfortunately, due to the nature of the clinical trials and the approach used, such a comparison is subject to several effects which may lead to misleading results. While some reasons for the differences between the values obtained with CGM and BG point measurements are relatively well-known (e.g., measurement in different body compartments), others related to the clinical study protocols are less visible, but also quite important. In this review, we present a general picture of the topic as well as tools which allow to correct or at least to estimate the uncertainty of measures of CGM system performance.
[Pharmacovigilance of major pharmaceutical innovation].
Xiang, Yongyang; Xie, Yanming; Yi, Danhui
2011-10-01
With the continuous improvement of international pharmacovigilance technology and methods, pharmacovigilance has become a key part of post-marketing evaluation. This work is based on that background and aims to find a Chinese medicine safety monitoring approach consistent with practical reality. A common problem is that those who choose a career in pharmacovigilance find the complex data presented to them a source of both fascination and frustration. Data mining technology first appeared in international pharmacovigilance in the 1970s; we try to establish a new signal detection method to contribute to the post-marketing evaluation of Chinese medicine and the establishment of registration. Building national adverse reaction reporting databases is widespread in Western countries. The nature of the problem is that pharmacovigilance issues can be turned into statistical problems through many different assumptions, and different assumptions call for different statistical tests, traditionally built on disproportionality in a fourfold (2x2) table; few countries use more than a handful of these in practice. This work does not weigh the evidence for each method but introduces their principles. The methods include the reporting odds ratio (ROR) of the Netherlands, the proportional reporting ratio (PRR) of the UK, the WHO's information component (IC), and the U.S. Food and Drug Administration's empirical Bayes method (EBS). Because there is no international gold standard for signal detection, we first compare these four data mining methods by simulation. From the points of view of specificity and sample-size requirements, this work weighs the advantages, disadvantages, and application conditions of the four methods, and from a technical point of view it attempts to propose a new signal detection method, for example a hierarchical Bayesian one.
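For concreteness, the two simplest disproportionality measures named above can be computed from a 2x2 report table in a few lines; the counts below are invented, and the IC and empirical Bayes methods require shrinkage machinery not shown here.

```python
import numpy as np

def ror_prr(a, b, c, d):
    """Disproportionality measures from a drug/event 2x2 report table.

    a: reports with the drug and the event      b: drug, other events
    c: other drugs with the event               d: other drugs, other events
    """
    ror = (a / b) / (c / d)              # reporting odds ratio (Netherlands)
    prr = (a / (a + b)) / (c / (c + d))  # proportional reporting ratio (UK)
    # 95% CI of ROR via the standard error of a log odds ratio
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)
    ci = (np.exp(np.log(ror) - 1.96 * se), np.exp(np.log(ror) + 1.96 * se))
    return ror, prr, ci

ror, prr, ci = ror_prr(a=30, b=970, c=120, d=98880)
print(f"ROR={ror:.1f} (95% CI {ci[0]:.1f}-{ci[1]:.1f}), PRR={prr:.1f}")
```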
Robust fiber clustering of cerebral fiber bundles in white matter
NASA Astrophysics Data System (ADS)
Yao, Xufeng; Wang, Yongxiong; Zhuang, Songlin
2014-11-01
Diffusion tensor imaging fiber tracking (DTI-FT) has been widely accepted in the diagnosis and treatment of brain diseases. During the rendering pipeline of specific fiber tracts, image noise and the low resolution of DTI lead to false propagations. In this paper, we propose a robust fiber clustering (FC) approach to remove false fibers from a fiber tract. Our algorithm consists of three steps. First, optimized fiber assignment continuous tracking (FACT) is implemented to reconstruct a fiber tract; then each curved fiber in the tract is mapped to a point by kernel principal component analysis (KPCA); finally, the point clouds of the fiber tract are clustered by hierarchical clustering, which distinguishes false fibers from true fibers in the tract. In our experiment, the corticospinal tract (CST) from one in vivo human dataset was used to validate our method, which showed reliable capability in decreasing the number of false fibers in a tract. In conclusion, our method can effectively optimize the visualization of fiber bundles and should be helpful in the field of fiber evaluation.
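A toy version of the map-fibers-to-points-then-cluster pipeline, assuming scikit-learn and SciPy, using a generic RBF-kernel KPCA on resampled, flattened streamlines; the paper's fiber-specific kernel and real tractography data are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)

def make_fiber(offset, n=20):
    """Toy streamline: n 3D points along a curve, flattened to one vector."""
    t = np.linspace(0, 1, n)
    pts = np.stack([t, np.sin(3 * t) + offset, 0.1 * t], axis=1)
    return (pts + rng.normal(0, 0.02, pts.shape)).ravel()

# a bundle of true fibers plus a few false propagations far off the bundle
fibers = np.array([make_fiber(0.0) for _ in range(30)] +
                  [make_fiber(2.5) for _ in range(5)])

# map each (resampled, flattened) fiber to a low-dimensional point via KPCA,
# then cut the hierarchical-clustering tree into two groups
emb = KernelPCA(n_components=3, kernel="rbf", gamma=0.1).fit_transform(fibers)
labels = fcluster(linkage(emb, method="ward"), t=2, criterion="maxclust")
print(np.bincount(labels)[1:])   # expected split: 30 true vs 5 false fibers
```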
The structure of mode-locking regions of piecewise-linear continuous maps: II. Skew sawtooth maps
NASA Astrophysics Data System (ADS)
Simpson, D. J. W.
2018-05-01
In two-parameter bifurcation diagrams of piecewise-linear continuous maps on ℝ^N, mode-locking regions typically have points of zero width known as shrinking points. Near any shrinking point, but outside the associated mode-locking region, a significant proportion of parameter space can be usefully partitioned into a two-dimensional array of annular sectors. The purpose of this paper is to show that in these sectors the dynamics is well approximated by a three-parameter family of skew sawtooth circle maps, where the relationship between the skew sawtooth maps and the N-dimensional map is fixed within each sector. The skew sawtooth maps are continuous, degree-one, and piecewise-linear, with two different slopes. They approximate the stable dynamics of the N-dimensional map with an error that goes to zero with the distance from the shrinking point. The results explain the complicated radial pattern of periodic, quasi-periodic, and chaotic dynamics that occurs near shrinking points.
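A sketch of the family named in the title may help fix ideas: a continuous, degree-one circle map built from two linear pieces. The slopes and break point below are illustrative only, chosen so that the lift is continuous; sweeping the offset shows the rotation number locking onto plateaus.

```python
import numpy as np

def skew_sawtooth(x, omega, a=0.6, b=1.8, c=2/3):
    """One step of a continuous, degree-one, piecewise-linear circle map:
    slope a on [0, c), slope b on [c, 1); a*c + b*(1-c) = 1 makes the
    lift continuous. Parameter values here are illustrative only."""
    frac = x % 1.0
    g = a * frac if frac < c else a * c + b * (frac - c)
    return x - frac + omega + g

def rotation_number(omega, n=4000):
    x = 0.0
    for _ in range(n):
        x = skew_sawtooth(x, omega)
    return x / n

# rotation number locks onto rational plateaus as omega sweeps
for w in (0.20, 0.30, 0.40, 0.50):
    print(w, round(rotation_number(w), 3))
```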
MapEdit: solution to continuous raster map creation
NASA Astrophysics Data System (ADS)
Rančić, Dejan; Djordjević-Kajan, Slobodanka
2003-03-01
The paper describes MapEdit, MS Windows software for georeferencing and rectification of scanned paper maps. The software produces continuous raster maps which can be used as backgrounds in geographical information systems. The process of continuous raster map creation using MapEdit's "mosaicking" function is described, as are the georeferencing and rectification algorithms used in MapEdit. Our approach of georeferencing and rectification using four control points and two linear transformations for each scanned map part, together with a nearest-neighbour resampling method, represents a low-cost, high-speed solution that produces continuous raster maps of satisfactory quality for many purposes (±1 pixel). A quality assessment of several continuous raster maps at different scales created with our software and methodology was undertaken, and the results are presented in the paper. For quality control of the produced raster maps we referred to three widely adopted standards: the US Standard for Digital Cartographic Data, the National Standard for Spatial Data Accuracy, and the US National Map Accuracy Standard. The results obtained during the quality assessment, given in the paper, show that our maps meet all three standards.
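As a sketch of georeferencing from ground control points, the snippet below fits a single affine pixel-to-map transform to four control points by least squares. MapEdit's actual scheme uses two linear transformations per scanned map part with nearest-neighbour resampling, so this is a simplified stand-in, and the coordinates are made up.

```python
import numpy as np

# four control points: pixel (col, row) -> map (E, N); made-up values
px = np.array([[0, 0], [1999, 0], [0, 1499], [1999, 1499]], float)
geo = np.array([[500000, 4650000], [510000, 4650000],
                [500000, 4642500], [510000, 4642500]], float)

# least-squares fit of an affine transform: geo = [col, row, 1] @ A
X = np.column_stack([px, np.ones(len(px))])
A, *_ = np.linalg.lstsq(X, geo, rcond=None)

def pixel_to_map(col, row):
    return np.array([col, row, 1.0]) @ A

print(pixel_to_map(1000, 750))   # ~ centre of the sheet
```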
A Survey of Mathematical Programming in the Soviet Union (Bibliography),
1982-01-01
Solar array maximum power tracking with closed-loop control of a 30-centimeter ion thruster
NASA Technical Reports Server (NTRS)
Gruber, R. P.
1977-01-01
A new solar array/ion thruster system control concept has been developed and demonstrated. An ion thruster beam load is used to automatically and continuously operate an unregulated solar array at its maximum power point independent of variations in solar array voltage and current. Preliminary tests were run which verified that this method of control can be implemented with a few, physically small, signal level components dissipating less than two watts.
A Concise Analysis of Argentina’s Post-Junta Reform of Its Major Security Services
2006-12-01
Carlos Menem (1989–99), respectively a radical and a Peronist, implemented civil-military reform of the security forces. Initially, the reforms...purging the ranks, while Menem used indirect methods, pointing to the ongoing economic crisis to legitimize continuation of his reforms. In all...civilian government, whereas Menem took advantage of the general
Continuous infusion of antibiotics in critically ill patients.
Smuszkiewicz, Piotr; Szałek, Edyta; Tomczak, Hanna; Grześkowiak, Edmund
2013-02-01
Antibiotics are the most commonly used drugs in intensive care unit patients, and their dosing should be based on pharmacokinetic/pharmacodynamic principles. The changes that occur in critically ill septic patients may be responsible for subtherapeutic antibiotic concentrations leading to poorer clinical outcomes. The disturbed pathophysiology of severe sepsis, which evolves over time (high cardiac output, glomerular hyperfiltration), and therapeutic interventions (e.g. haemodynamically active drugs, mechanical ventilation, renal replacement therapy) alter antibiotic pharmacokinetics, mainly through an increase in the volume of distribution and altered drug clearance. The lack of new and efficacious drugs and increased bacterial resistance are current problems of contemporary antibiotic therapy. Although intermittent administration is standard clinical practice, alternative methods of antibiotic administration are being sought which may potentiate effects, reduce toxicity and contribute to the inhibition of bacterial resistance. A wide range of studies show that the application of continuous infusion of time-dependent antibiotics (beta-lactams, glycopeptides) is more rational than standard intermittent administration. However, there are also studies which do not confirm the advantage of one method over the other. In spite of the controversy, continuous administration of this group of antibiotics is common practice, because the results of both kinds of studies point to the higher efficacy of this method in critically ill patients. The authors reviewed the literature to determine whether any clinical benefits exist for the administration of time-dependent antibiotics by continuous infusion. Definitive specification of the clinical advantage of administration this way over standard dosing requires a large-scale multi-centre randomised controlled trial.
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity, and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
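The flavour of optimizing a continuous objective over hyperboloid intersections, rather than raw arrival residuals, can be conveyed by a small 2D sketch. The bounded residual kernel below is an illustrative stand-in for the paper's virtual field construction; the geometry, wave speed, and the injected large picking error are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
sensors = rng.uniform(0, 100, (8, 2))       # 2D array of sensors, for brevity
src, v = np.array([42.0, 57.0]), 5.0        # true source and wave speed
t = np.linalg.norm(sensors - src, axis=1) / v
t[3] += 2.0                                  # one arrival with a large picking error

def objective(p):
    """Sum over sensor pairs of bounded hyperboloid residuals; the bounded
    kernel keeps one large picking error from dominating the search."""
    d = np.linalg.norm(sensors - p, axis=1)
    total = 0.0
    for i in range(len(sensors)):
        for j in range(i + 1, len(sensors)):
            r = (d[i] - d[j]) - v * (t[i] - t[j])
            total += r**2 / (r**2 + 25.0)    # saturates for large |r|
    return total

# coarse grid search for the common intersection of the hyperboloids
xs = ys = np.linspace(0, 100, 101)
grid = [(objective((x, y)), x, y) for x in xs for y in ys]
print("located at", min(grid)[1:], "true", tuple(src))
```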
Field test of classical symmetric encryption with continuous variables quantum key distribution.
Jouguet, Paul; Kunz-Jacques, Sébastien; Debuisschert, Thierry; Fossier, Simon; Diamanti, Eleni; Alléaume, Romain; Tualle-Brouri, Rosa; Grangier, Philippe; Leverrier, Anthony; Pache, Philippe; Painchault, Philippe
2012-06-18
We report on the design and performance of a point-to-point classical symmetric encryption link with fast key renewal provided by a Continuous Variable Quantum Key Distribution (CVQKD) system. Our system was operational and able to encrypt point-to-point communications during more than six months, from the end of July 2010 until the beginning of February 2011. This field test was the first demonstration of the reliability of a CVQKD system over a long period of time in a server room environment. This strengthens the potential of CVQKD for information technology security infrastructure deployments.
Fuzzy logic control of stand-alone photovoltaic system with battery storage
NASA Astrophysics Data System (ADS)
Lalouni, S.; Rekioua, D.; Rekioua, T.; Matagne, E.
Photovoltaic energy nowadays has increased importance in electrical power applications, since it is considered an essentially inexhaustible and broadly available energy resource. However, the output power provided by the photovoltaic conversion process depends on solar irradiation and temperature. Therefore, to maximize the efficiency of a photovoltaic energy system, it is necessary to track the maximum power point of the PV array. The present paper proposes a maximum power point tracking (MPPT) method, based on a fuzzy logic controller (FLC), applied to a stand-alone photovoltaic system. It uses sampled measurements of the PV array power and voltage, then determines the optimal increment required to reach the optimal operating voltage that permits maximum power tracking. This method achieves high accuracy around the optimum point when compared to the conventional one. The stand-alone photovoltaic system used in this paper includes two bi-directional DC/DC converters and a lead-acid battery bank to overcome periods of scarce sunlight. One converter works as an MPP tracker, while the other regulates the batteries' state of charge and compensates the power deficit to provide continuous delivery of energy to the load. The obtained simulation results show the effectiveness of the proposed fuzzy logic controller.
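The sample-power-and-voltage, then-step logic can be sketched as a plain hill-climbing update; a crude fixed-step stand-in for the paper's fuzzy rules, which instead adapt the increment size near the maximum power point. The toy P-V curve and all numbers are placeholders.

```python
def mppt_step(p, v, p_prev, v_prev, step=0.5):
    """One hill-climbing update of the PV operating voltage from sampled
    array power and voltage (fixed step; the FLC would adapt `step`)."""
    dp, dv = p - p_prev, v - v_prev
    if dp == 0:
        return v
    # move uphill on the P-V curve: same direction if power rose, else reverse
    return v + step if (dp > 0) == (dv > 0) else v - step

def pv_power(v):
    """Toy P-V curve with a maximum near 17.5 V."""
    return max(0.0, -0.8 * (v - 17.5) ** 2 + 120.0)

v_prev, v = 12.0, 12.5
p_prev = pv_power(v_prev)
for _ in range(40):
    p = pv_power(v)
    v_new = mppt_step(p, v, p_prev, v_prev)
    p_prev, v_prev, v = p, v, v_new
print(round(v, 1))   # oscillates near the 17.5 V maximum power point
```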
Noise correction on LANDSAT images using a spline-like algorithm
NASA Technical Reports Server (NTRS)
Vijaykumar, N. L. (Principal Investigator); Dias, L. A. V.
1985-01-01
Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. In order to minimize this interference or noise, many solutions are used, for instance, using the average (simple or weighted) values of the neighboring vertical points. In the case of high interference (more than one adjacent line lost), the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, is fast, uses only four points in each interval, and eliminates the necessity of solving a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.
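A sketch of the column-wise interpolation repair of lost scan lines follows. It uses an ordinary cubic spline from SciPy rather than the paper's weighted splines with imposed first derivatives, so treat it as a simplified stand-in on a synthetic image.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def repair_lines(img, bad_rows):
    """Fill noisy/lost scan lines by interpolating each column vertically
    with a cubic spline through the remaining good rows."""
    out = img.astype(float).copy()
    good = np.array([r for r in range(img.shape[0]) if r not in set(bad_rows)])
    for col in range(img.shape[1]):
        spline = CubicSpline(good, out[good, col])
        out[bad_rows, col] = spline(bad_rows)
    return out

img = np.fromfunction(lambda r, c: np.sin(r / 9.0) * 120 + c, (60, 40))
bad = [20, 21, 22]            # three adjacent lost lines, a hard case for
img[bad, :] = 0               # simple vertical averaging

# max reconstruction error on the middle lost line stays small
truth = np.sin(21 / 9.0) * 120 + np.arange(40)
print(np.abs(repair_lines(img, bad)[21] - truth).max().round(3))
```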
Blicharska, I; Brzek, A; Durmala, J
2012-01-01
The aim was to assess the influence of physiotherapy (DoboMed) on chest mobility, ribcage morphology, and posture during short-term intensive physiotherapy in the Department of Rehabilitation. Forty-five girls with AIS (mean age 14.9 years; Cobb angle range 11-40 degrees) were examined. The physiotherapy was continued for 3 weeks. The angle of trunk rotation (ATR) (Bunnell scoliometer), posture morphology (Kasperczyk's scale), and the chest mobility index were assessed twice, before and after therapy. After therapy, the ATR decreased by 2°, the chest mobility index increased by 1.3, and the total score on the Kasperczyk's scale decreased by 1.9 points, indicating improved body posture. All differences were statistically significant. Correlations were also found in the first examination between the Cobb angle and both the ATR and the total Kasperczyk's scale score. Use of this physiotherapeutic method in the treatment of AIS leads to functional improvement of chest mobility, the angle of trunk rotation, and posture within a short time. The measurement tools used were practical for physiotherapists in everyday work.
NASA Astrophysics Data System (ADS)
Tsutsumi, Yasumasa; Nomoto, Takuya; Ikeda, Hiroaki; Machida, Kazushige
2016-12-01
We propose a spectroscopic method to identify the nodal gap structure in unconventional superconductors. This method is best suited for locating the horizontal line node and for pinpointing the isolated point nodes by measuring polar angle (θ ) resolved zero-energy density of states N (θ ) . This is measured by specific heat or thermal conductivity at low temperatures under a magnetic field. We examine a variety of uniaxially symmetric nodal structures, including point and/or line nodes with linear and quadratic dispersions, by solving the Eilenberger equation in vortex states. It is found that (a) the maxima of N (θ ) continuously shift from the antinodal to the nodal direction (θn) as a field increases accompanying the oscillation pattern reversal at low and high fields. Furthermore, (b) local minima emerge next to θn on both sides, except for the case of the linear point node. These features are robust and detectable experimentally. Experimental results of N (θ ) performed on several superconductors, UPd2Al3,URu2Si2,CuxBi2Se3 , and UPt3, are examined and commented on in light of the present theory.
Grid generation by elliptic partial differential equations for a tri-element Augmentor-Wing airfoil
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1982-01-01
Two efforts to numerically simulate the flow about the Augmentor-Wing airfoil in the cruise configuration using the GRAPE elliptic partial differential equation grid generator algorithm are discussed. The Augmentor-Wing consists of a main airfoil with a slotted trailing edge for blowing and two smaller airfoils shrouding the blowing jet. The airfoil and the algorithm are described, and the application of GRAPE to an unsteady viscous flow simulation and a transonic full-potential approach is considered. The procedure involves dividing a complicated flow region into an arbitrary number of zones and ensuring continuity of grid lines, their slopes, and their point distributions across the zonal boundaries. The method for distributing the body-surface grid points is discussed.
NASA Astrophysics Data System (ADS)
Katzav, Eytan
2013-04-01
In this paper, a mode of using the Dynamic Renormalization Group (DRG) method is suggested in order to cope with inconsistent results obtained when applying it to a continuous family of one-dimensional nonlocal models. The key observation is that the correct fixed-point dynamical system has to be identified during the analysis in order to account for all the relevant terms that are generated under renormalization. This is well established for static problems, however poorly implemented in dynamical ones. An application of this approach to a nonlocal extension of the Kardar-Parisi-Zhang equation resolves certain problems in one-dimension. Namely, obviously problematic predictions are eliminated and the existing exact analytic results are recovered.
2014-01-01
Background Objective physical assessment of patients with lumbar spondylosis involves plain film radiograph (PFR) viewing and interpretation by radiologists. Physiotherapists also routinely assess PFR within the scope of their practice. However, studies appraising the level of agreement of physiotherapists' PFR interpretation with radiologists are not common in Ghana. Method Forty-one (41) physiotherapists took part in the cross-sectional survey. An assessment guide was developed from the findings of a radiologist's interpretation of three PFR of patients with lumbar spondylosis. The three PFR were selected from a pool of different radiographs based on clarity, common visible pathological features, coverage of body segments and short post-production period. Physiotherapists were required to view the same PFR, after which they were assessed with the assessment guide according to the number of features identified correctly or incorrectly. The score range on the assessment form was 0-24, interpreted as follows: 0-8 points (low), 9-16 points (moderate) and 17-24 points (high) levels of agreement. Data were analyzed using the one-sample t-test and Fisher's exact test at α = 0.05. Results The mean interpretation score of the physiotherapists was 12.7 ± 2.6 points compared to the radiologist's interpretation of 24 points (assessment guide). The physiotherapists' levels of agreement were found to be significantly associated with their academic qualification (p = 0.006) and sex (p = 0.001). However, their levels of agreement were not significantly associated with their age group (p = 0.098), work settings (p = 0.171), experience (p = 0.666), preferred PFR view (p = 0.088) or continuing education (p = 0.069). Conclusions The physiotherapists' skills fall short of expectation for interpreting PFR of patients with lumbar spondylosis. The levels of agreement with the radiologist's interpretation have no link with years of clinical practice, age, work settings or continuing education. Thus, routine PFR viewing techniques should be made a priority in physiotherapists' continuing professional education. PMID:24678695
Grid-based Continual Analysis of Molecular Interior for Drug Discovery, QSAR and QSPR.
Potemkin, Andrey V; Grishina, Maria A; Potemkin, Vladimir A
2017-01-01
In 1979, R.D. Cramer and M. Milne made the first attempt at 3D comparison of molecules by aligning them in space and mapping their molecular fields to a 3D grid. This approach was later developed as DYLOMMS (Dynamic Lattice-Oriented Molecular Modelling System). In 1984, H. Wold and S. Wold proposed the use of partial least squares (PLS) analysis, instead of principal component analysis, to correlate the field values with biological activities. Then, in 1988, the method called CoMFA (Comparative Molecular Field Analysis) was introduced and the corresponding software became commercially available. Since 1988, many 3D QSAR methods, algorithms and modifications have been introduced for solving virtual drug discovery problems (e.g., CoMSIA, CoMMA, HINT, HASL, GOLPE, GRID, PARM, Raptor, BiS, CiS, ConGO). All the methods can be divided into two groups (classes): (1) methods studying the exterior of molecules, and (2) methods studying the interior of molecules. A series of grid-based computational technologies for Continual Molecular Interior analysis (CoMIn) is presented in the current paper. The grid-based analysis is performed by means of a lattice construction, analogously to many other grid-based methods. The subsequent continual elucidation of molecular structure is performed in two ways: (i) in terms of intermolecular interaction potentials, represented as a superposition of Coulomb and van der Waals interactions and hydrogen bonds, all of which are well-known continual functions whose values can be determined at every lattice point for a molecule; and (ii) in terms of quantum functions such as the electron density distribution, the Laplacian and Hamiltonian of the electron density distribution, the potential energy distribution, the distributions of the highest occupied and lowest unoccupied molecular orbitals, and their superposition. To reduce calculation time relative to first-principles quantum methods, an original quantum free-orbital approach, AlteQ, is proposed. All these functions can be calculated at a sufficient level of theory and their values determined at every lattice point for a molecule. The molecules of a dataset can then be superimposed in the lattice for maximal coincidence (or minimal deviation) of the potentials (i) or the quantum functions (ii); the methods and criteria of superimposition are discussed. After that, a functional relationship between biological activity or property and the characteristics of the potentials (i) or functions (ii) is constructed; the methods of constructing this quantitative relationship are discussed. New approaches for rational virtual drug design based on the intermolecular potentials and quantum functions are introduced, and all of them are implemented at the www.chemosophia.com web page. The result is a set of 3D QSAR approaches for continual molecular interior study that offers many opportunities for virtual drug discovery, virtual screening and ligand-based drug design.
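To make the lattice construction in (i) concrete, below is a minimal sketch that evaluates the Coulomb term of the molecular field at every lattice point; the atom coordinates, partial charges, grid spacing and padding are hypothetical, and the full CoMIn potentials also include van der Waals and hydrogen-bond terms.

```python
import numpy as np

# Hypothetical molecule: (x, y, z) coordinates in Angstrom and partial charges in e.
atoms = np.array([[0.0, 0.0, 0.0],
                  [1.2, 0.0, 0.0]])
charges = np.array([-0.4, 0.4])

# Cubic lattice enclosing the molecule, 0.5 A spacing with 3 A padding.
lo, hi = atoms.min(0) - 3.0, atoms.max(0) + 3.0
axes = [np.arange(a, b, 0.5) for a, b in zip(lo, hi)]
X, Y, Z = np.meshgrid(*axes, indexing="ij")
grid = np.stack([X, Y, Z], axis=-1)          # shape (nx, ny, nz, 3)

# Coulomb potential (in e/Angstrom units) summed over atoms at every lattice point.
r = np.linalg.norm(grid[..., None, :] - atoms, axis=-1)   # distances to each atom
phi = (charges / np.maximum(r, 1e-6)).sum(axis=-1)

print(phi.shape, phi.max())
```

The same lattice loop would be repeated for each additional potential or quantum function before superimposing the molecules of a dataset.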
An analysis of hypercritical states in elastic and inelastic systems
NASA Astrophysics Data System (ADS)
Kowalczk, Maciej
The author raises a wide range of problems whose common characteristic is an analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts. The first part primarily discusses problems of modelling hypercritical states, while the second analyzes the numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches to modelling hypercritical states found in this article include the combination of plasticity theory with an energy condition for cracking, an account of the variable and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory with a simplified description of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. He then discusses the analytical basics of continuation methods, analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and provides a new method for selecting an equilibrium path originating from a bifurcation point. A general outline of continuation methods based on an analysis of the rank of the matrix of a corrective system of equations is given. The theoretical solutions are supplemented with numerical solutions of non-linear problems for rod systems and for the plastic disintegration of a notched rectangular plastic plate.
Leelarathna, Lalantha; English, Shane W.; Thabit, Hood; Caldwell, Karen; Allen, Janet M.; Kumareswaran, Kavita; Wilinska, Malgorzata E.; Nodale, Marianna; Haidar, Ahmad; Evans, Mark L.; Burnstein, Rowan
2014-01-01
Abstract Objective: Accurate real-time continuous glucose measurements may improve glucose control in the critical care unit. We evaluated the accuracy of the FreeStyle® Navigator® (Abbott Diabetes Care, Alameda, CA) subcutaneous continuous glucose monitoring (CGM) device in critically ill adults using two methods of calibration. Subjects and Methods: In a randomized trial, paired CGM and reference glucose (hourly arterial blood glucose [ABG]) measurements were collected over a 48-h period from 24 adults with critical illness (mean±SD age, 60±14 years; mean±SD body mass index, 29.6±9.3 kg/m2; mean±SD Acute Physiology and Chronic Health Evaluation score, 12±4 [range, 6–19]) and hyperglycemia. In 12 subjects, the CGM device was calibrated at variable intervals of 1–6 h using ABG. In the other 12 subjects, the sensor was calibrated according to the manufacturer's instructions (at 1, 2, 10, and 24 h) using arterial blood and the built-in point-of-care glucometer. Results: In total, 1,060 CGM–ABG pairs were analyzed over the glucose range from 4.3 to 18.8 mmol/L. With enhanced calibration performed at a median (interquartile range) interval of 169 (122–213) min, the absolute relative deviation was lower (7.0% [3.5, 13.0] vs. 12.8% [6.3, 21.8], P<0.001), and the percentage of points in Clarke error grid Zone A was higher (87.8% vs. 70.2%). Conclusions: Accuracy of the Navigator CGM device during critical illness was comparable to that observed in non–critical care settings. Further significant improvements in accuracy may be obtained by frequent calibration with ABG measurements. PMID:24180327
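For concreteness, the headline accuracy metric can be computed from paired readings as in the following sketch; the arrays are hypothetical stand-ins for the 1,060 CGM-ABG pairs.

```python
import numpy as np

# Hypothetical paired readings in mmol/L (stand-ins for the CGM-ABG pairs).
abg = np.array([5.1, 7.9, 10.2, 12.5, 15.0])
cgm = np.array([5.5, 7.2, 10.9, 12.1, 16.1])

# Absolute relative deviation (ARD) of each pair, as a percent of the reference.
ard = 100.0 * np.abs(cgm - abg) / abg
q1, med, q3 = np.percentile(ard, [25, 50, 75])
print(f"ARD median {med:.1f}% (IQR {q1:.1f}, {q3:.1f})")
```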
Relationship between resolution and accuracy of four intraoral scanners in complete-arch impressions
Pascual-Moscardó, Agustín; Camps, Isabel
2018-01-01
Background The scanner does not measure the dental surface continuously. Instead, it generates a point cloud, and these points are then joined to form the scanned object. This approximation depends on the number of points generated (resolution), which can lead to low accuracy (trueness and precision) when fewer points are obtained. The purpose of this study is to determine the resolution of four intraoral digital imaging systems and to examine the relationship between accuracy and resolution of the intraoral scanner in impressions of a complete dental arch. Material and Methods A master cast of the complete maxillary arch was prepared with different dental preparations. Using four digital impression systems, the cast was scanned inside a black methacrylate box, obtaining a total of 40 digital impressions from each scanner. The resolution was obtained by dividing the number of points of each digital impression by the total surface area of the cast. Accuracy was evaluated using three-dimensional measurement software and the “best alignment” method of the casts against a highly faithful reference model obtained from an industrial scanner. Pearson correlation was used for statistical analysis of the data. Results Of the intraoral scanners, Omnicam is the system with the best resolution, with 79.82 points per mm2, followed by True Definition with 54.68 points per mm2, Trios with 41.21 points per mm2, and iTero with 34.20 points per mm2. However, the study found no relationship between resolution and accuracy of the studied digital impression systems (P > 0.05), except between the resolution of Omnicam and its precision. Conclusions The resolution of the digital impression systems has no relationship with the accuracy they achieve in the impression of a complete dental arch. The study found that the Omnicam scanner is the system with the best resolution, and that as its resolution increases, its precision increases. Key words: Trueness, precision, accuracy, resolution, intraoral scanner, digital impression. PMID:29750097
Human Population Decline in North America during the Younger Dryas
NASA Astrophysics Data System (ADS)
Anderson, D. G.; Goodyear, A. C.; Stafford, T. W., Jr.; Kennett, J.; West, A.
2009-12-01
There is ongoing debate about a possible human population decline or contraction at the onset of the Younger Dryas (YD) at 12.9 ka. We used two methods to test whether the YD affected human population levels: (1) frequency analyses of Paleoindian projectile points, and (2) summed probability analyses of radiocarbon (14C) dates. The results suggest that a significant decline or reorganization of human populations occurred at 12.9 ka, continued through the initial centuries of the YD chronozone, then rebounded by the end of the YD. FREQUENCY ANALYSES: This method employed projectile point data from the Paleoindian Database of the Americas (PIDBA, http://pidba.utk.edu). We tallied diagnostic projectile points and obtained larger totals for Clovis points than for immediately post-Clovis points, which share an instrument-assisted fluting technique, typically using pressure or indirect percussion. Gainey, Vail, Debert, Redstone, and Cumberland point-styles utilized this method and are comparable to the Folsom style. For the SE U.S., the ratio of Clovis points (n=1993) to post-Clovis points (n=947) reveals a point decline of 52%. For the Great Plains, a comparison of Clovis and fluted points (n=4020) to Folsom points (n=2527) shows a point decline of 37%, which may translate into a population contraction of similar magnitude. In addition, eight major Clovis lithic quarry sites in the SE U.S. exhibit little to no evidence for immediate post-Clovis occupations, implying a major population decline. SUMMED PROBABILITIES: This method involved calibrating relevant 14C dates and combining the probabilities, after which major peaks and troughs in the trends are assumed to reflect changes in human demographics. Using 14C dates from Buchanan et al. (2008), we analyzed multiple regions, including the Southeast and Great Plains. Contrary to Buchanan et al., we found an abrupt, statistically significant decline at 12.9 ka, followed 200 to 900 years later by a rebound in the number of dates. The decline at the YD onset was more than 50%, similar in magnitude to the decline in Clovis-Folsom point ratios. While calibration and sampling factors may affect the trends, this abrupt decline is large and requires explanation. SUMMARY: Even though correlation does not equate with causation, the coeval YD decline in both points and 14C dates appears linked to significant changes in climate and biota, as represented by the megafaunal extinction. While the causes of the YD remain controversial, a human population decline appears to have occurred, at least across parts of North America. Furthermore, the YD onset is associated with the abrupt replacement of Clovis by regional or subregional scale cultural traditions, potentially reflecting decreased range mobility and increased population isolation. Projectile point distributions and summed probability analyses, we argue, are potentially useful approaches for exploring demographic changes at regional scales.
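The summed-probability step can be illustrated with a minimal sketch; note that it approximates each date as a Gaussian in calendar years, whereas the actual analysis calibrates the 14C dates against a calibration curve before summing, and the dates below are hypothetical.

```python
import numpy as np

# Hypothetical calibrated ages (years BP) and their 1-sigma errors.
ages = np.array([13050, 12980, 12890, 12400, 12150])
errs = np.array([60, 80, 70, 90, 60])

t = np.arange(11500, 13500)                      # calendar-age axis, years BP
# Sum the normal densities of all dates; peaks and troughs in the summed
# curve are then read as proxies for demographic change.
spd = sum(np.exp(-0.5 * ((t - a) / s) ** 2) / (s * np.sqrt(2 * np.pi))
          for a, s in zip(ages, errs))

print("summed probability peaks at", t[spd.argmax()], "BP")
```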
Code of Federal Regulations, 2014 CFR
2014-07-01
... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) GUM AND WOOD CHEMICALS MANUFACTURING POINT SOURCE CATEGORY Wood Rosin, Turpentine and Pine Oil Subcategory § 454.32 Effluent limitations and... manufacture of wood rosin, turpentine and pine oil by a point source subject to the provisions of this...
Code of Federal Regulations, 2012 CFR
2012-07-01
... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) GUM AND WOOD CHEMICALS MANUFACTURING POINT SOURCE CATEGORY Wood Rosin, Turpentine and Pine Oil Subcategory § 454.32 Effluent limitations and... manufacture of wood rosin, turpentine and pine oil by a point source subject to the provisions of this...
Code of Federal Regulations, 2013 CFR
2013-07-01
... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) GUM AND WOOD CHEMICALS MANUFACTURING POINT SOURCE CATEGORY Wood Rosin, Turpentine and Pine Oil Subcategory § 454.32 Effluent limitations and... manufacture of wood rosin, turpentine and pine oil by a point source subject to the provisions of this...
Yochum, Noëlle; Kochzius, Marc; Ampe, Bart; Tuyttens, Frank A. M.
2017-01-01
Scoring reflex responsiveness and injury of aquatic organisms has gained popularity as a predictor of discard survival. Because this method relies upon individual interpretation of scoring criteria, we evaluated its robustness by testing whether multiple protocol-instructed raters with diverse backgrounds (research scientist, technician, and student) produce similar or identical reflex and injury scores for the same flatfish (European plaice, Pleuronectes platessa) after exposure to commercial fishing stressors. Inter-rater reliability for three raters was assessed using a 3-point categorical scale (‘absent’, ‘weak’, ‘strong’) and a tagged visual analogue continuous scale (tVAS: a 10 cm bar with 0 denoting ‘absent’, split into three labelled sections, ‘weak’, ‘moderate’, and ‘strong’) for six reflex responses, and a 4-point scale for four injury types. Plaice (n = 304) were sampled from 17 research beam-trawl deployments during four trips. Fleiss kappa (categorical scores) and intra-class correlation coefficients (ICC, continuous scores) indicated variable inter-rater agreement by reflex type (ranging between 0.55 and 0.88, and 67% and 91%, for Fleiss kappa and ICC respectively), with the least agreement among raters on extent of injury (Fleiss kappa between 0.08 and 0.27). Despite differences among raters, which did not significantly influence the relationship between impairment and predicted survival, combining categorical reflex and injury scores always produced a close relationship between such vitality indices and observed delayed mortality. The use of the continuous scale did not improve the fit of these models compared with the reflex impairment index based on categorical scores. Given these findings, we recommend using a 3-point categorical scale over a continuous one. We also determined that training, rather than experience, of raters minimised inter-rater differences. Our results suggest that cost-efficient reflex impairment and injury scoring may be considered a robust technique for evaluating lethal stress and damage of this flatfish species on board commercial beam-trawl vessels. PMID:28704390
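For reference, a minimal sketch of the Fleiss kappa statistic used for the categorical scores; the rating table here is hypothetical, not the study's data.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (n_subjects x n_categories) table of rating counts,
    with the same number of raters scoring every subject."""
    counts = np.asarray(counts, dtype=float)
    n, _ = counts.shape
    r = counts[0].sum()                       # raters per subject
    p_j = counts.sum(axis=0) / (n * r)        # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - r) / (r * (r - 1))  # per-subject agreement
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical counts: 5 fish, each scored by 3 raters as absent/weak/strong.
table = [[3, 0, 0], [2, 1, 0], [0, 3, 0], [0, 1, 2], [0, 0, 3]]
print(round(fleiss_kappa(table), 2))
```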
Algorithms for the computation of solutions of the Ornstein-Zernike equation.
Peplow, A T; Beardmore, R E; Bresme, F
2006-10-01
We introduce a robust and efficient methodology to solve the Ornstein-Zernike integral equation using the pseudoarc length (PAL) continuation method that reformulates the integral equation in an equivalent but nonstandard form. This enables the computation of solutions in regions where the compressibility experiences large changes or where the existence of multiple solutions and so-called branch points prevents Newton's method from converging. We illustrate the use of the algorithm with a difficult problem that arises in the numerical solution of integral equations, namely the evaluation of the so-called no-solution line of the Ornstein-Zernike hypernetted chain (HNC) integral equation for the Lennard-Jones potential. We are able to use the PAL algorithm to solve the integral equation along this line and to connect physical and nonphysical solution branches (both isotherms and isochores) where appropriate. We also show that PAL continuation can compute solutions within the no-solution region that cannot be computed when Newton and Picard methods are applied directly to the integral equation. While many solutions that we find are new, some correspond to states with negative compressibility and consequently are not physical.
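The pseudoarc length idea can be illustrated on a scalar toy problem f(u, λ) = u² + λ = 0, whose fold at the origin defeats naive continuation in λ; the OZ application replaces f with the discretized integral equation. A minimal sketch:

```python
import numpy as np

def f(u, lam):            # toy problem with a fold (limit point) at (0, 0)
    return u * u + lam

def f_u(u, lam): return 2 * u
def f_lam(u, lam): return 1.0

# Start on the lower branch and take arclength steps through the fold.
u, lam = -1.0, -1.0
du, dlam = 1.0, 0.0                          # initial unit tangent
ds = 0.1
for _ in range(40):
    u1, lam1 = u + ds * du, lam + ds * dlam  # predictor along the tangent
    for _ in range(20):                      # Newton corrector on augmented system
        F = np.array([f(u1, lam1),
                      du * (u1 - u) + dlam * (lam1 - lam) - ds])
        J = np.array([[f_u(u1, lam1), f_lam(u1, lam1)],
                      [du, dlam]])
        step = np.linalg.solve(J, -F)
        u1, lam1 = u1 + step[0], lam1 + step[1]
        if np.abs(F).max() < 1e-12:
            break
    t = np.array([u1 - u, lam1 - lam]); t /= np.linalg.norm(t)
    du, dlam = t                             # secant update of the tangent
    u, lam = u1, lam1
print(f"after the fold: u = {u:.3f}, lambda = {lam:.3f}")
```

Because the arclength constraint replaces λ as the continuation parameter, the augmented Jacobian stays nonsingular at the fold, which is exactly why PAL can pass branch points where plain Newton continuation fails.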
NASA Astrophysics Data System (ADS)
Abdolkader, Tarek M.; Shaker, Ahmed; Alahmadi, A. N. M.
2018-07-01
With the continuous miniaturization of electronic devices, quantum-mechanical effects such as tunneling become more significant in many device applications. In this paper, a numerical simulation tool is developed in a MATLAB environment to calculate the tunneling probability and current through an arbitrary potential barrier, comparing three different numerical techniques: the finite difference method, the transfer matrix method, and the transmission line method. For benchmarking, the tool is applied to several case studies, such as the rectangular single barrier, the rectangular double barrier, and a continuous bell-shaped potential barrier, each compared to analytical solutions and giving the dependence of the error on the number of mesh points. In addition, a thorough study of the J–V characteristics of MIM and MIIM diodes, used as rectifiers for rectenna solar cells, is presented, and simulations are compared to experimental results showing satisfactory agreement. On the undergraduate level, the tool provides deeper insight for students comparing numerical techniques used to solve various tunneling problems and helps them choose a suitable technique for a given application.
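As a sketch of one of the compared techniques, the following implements the transfer matrix method for piecewise-constant potentials and checks it against the textbook transmission formula for a single rectangular barrier; the barrier height, width and energy are hypothetical, and this is not the paper's MATLAB tool.

```python
import numpy as np

hbar, m = 1.054571817e-34, 9.1093837015e-31   # SI units
eV, nm = 1.602176634e-19, 1e-9

def transmission(E, V, widths):
    """Transfer-matrix transmission through piecewise-constant potential slices.
    E: energy (J); V: slice potentials (J); widths: slice widths (m)."""
    k0 = np.sqrt(2 * m * E + 0j) / hbar           # wavevector in the leads (V = 0)
    M = np.eye(2, dtype=complex)
    k_prev = k0
    for Vj, d in zip(V, widths):
        kj = np.sqrt(2 * m * (E - Vj) + 0j) / hbar  # imaginary inside the barrier
        # interface matching (continuity of psi and psi') into slice j
        D = 0.5 * np.array([[1 + k_prev / kj, 1 - k_prev / kj],
                            [1 - k_prev / kj, 1 + k_prev / kj]])
        # free propagation across the slice
        P = np.array([[np.exp(1j * kj * d), 0], [0, np.exp(-1j * kj * d)]])
        M = P @ D @ M
        k_prev = kj
    D_out = 0.5 * np.array([[1 + k_prev / k0, 1 - k_prev / k0],
                            [1 - k_prev / k0, 1 + k_prev / k0]])
    M = D_out @ M
    r = -M[1, 0] / M[1, 1]          # no incoming wave from the right
    t = M[0, 0] + M[0, 1] * r
    return abs(t) ** 2

# Hypothetical rectangular barrier: 0.5 eV high, 1 nm wide, at E = 0.25 eV.
T = transmission(0.25 * eV, [0.5 * eV], [1 * nm])
kappa = np.sqrt(2 * m * 0.25 * eV) / hbar        # sqrt(2m(V0-E))/hbar here
T_exact = 1 / (1 + 0.5**2 * np.sinh(kappa * 1 * nm)**2 / (4 * 0.25 * 0.25))
print(T, T_exact)                                # the two values agree
```

For a single rectangular barrier the transfer matrix is exact, which is what makes it a convenient benchmark before moving to double and bell-shaped barriers.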
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1989-01-01
To study the problems of encoding visual images for use with a Sparse Distributed Memory (SDM), I consider a specific class of images: those that consist of several pieces, each of which is a line segment or an arc of a circle. This class includes line drawings of characters such as letters of the alphabet. I give a method of representing a segment or an arc by five numbers in a continuous way; that is, similar arcs have similar representations. I also give methods for encoding these numbers as bit strings in an approximately continuous way. The set of possible segments and arcs may be viewed as a five-dimensional manifold M, whose structure is like a Möbius strip. An image, considered to be an unordered set of segments and arcs, is therefore represented by a set of points in M, one for each piece. I then discuss the problem of constructing a preprocessor to find the segments and arcs in these images, although a preprocessor has not been developed. I also describe a possible extension of the representation.
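The paper's exact five-number parameterization is not given in this abstract, so the sketch below is only an assumption with the same flavour: a piece is encoded by its two endpoints plus a signed midpoint offset ("bulge"), so a straight segment is simply the bulge-zero case and similar pieces receive similar tuples.

```python
import numpy as np

def encode_piece(p1, p2, bulge):
    """Encode a segment/arc as 5 numbers: endpoints plus a signed midpoint
    offset. bulge = 0 gives a straight segment; small bulges, shallow arcs."""
    return (p1[0], p1[1], p2[0], p2[1], bulge)

def sample_piece(code, n=50):
    """Reconstruct n points on the encoded piece. A quadratic blend is used
    as a parabolic stand-in for a shallow circular arc."""
    x1, y1, x2, y2, b = code
    p1, p2 = np.array([x1, y1]), np.array([x2, y2])
    chord = p2 - p1
    normal = np.array([-chord[1], chord[0]]) / np.linalg.norm(chord)
    t = np.linspace(0, 1, n)
    # 4t(1-t) equals 1 at the midpoint, so b is the sagitta of the piece.
    return (np.outer(1 - t, p1) + np.outer(t, p2)
            + np.outer(4 * t * (1 - t), b * normal))

pts = sample_piece(encode_piece((0, 0), (1, 0), 0.2))
print(pts[0], pts[len(pts) // 2], pts[-1])
```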
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
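A minimal sketch of the underlying spline construction, using a truncated power basis whose pieces join with continuity and smoothness at the knots; the knot, data, and fixed-effects-only fit below are hypothetical simplifications of the paper's mixed-model setting.

```python
import numpy as np

def spline_design(x, knots, degree=2):
    """Truncated power basis for a degree-d regression spline with fixed knots.
    Columns: 1, x, ..., x^d, (x-k1)_+^d, ...; the (x-k)_+^d terms keep the fit
    continuous with d-1 continuous derivatives at each knot."""
    cols = [x ** p for p in range(degree + 1)]
    cols += [np.clip(x - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.piecewise(x, [x < 4, x >= 4],
                 [lambda u: u, lambda u: 4 + 0.1 * (u - 4) ** 2])
y += rng.normal(0, 0.2, x.size)

X = spline_design(x, knots=[4.0])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # fixed effects only, no random effects
print(np.round(beta, 2))
```

The paper's contribution is the reparameterization that turns this explicitly constrained model into an implicitly constrained one inside the mixed model, not the basis itself.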
Hybrid least squares multivariate spectral analysis methods
Haaland, David M.
2004-03-23
A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
Hybrid least squares multivariate spectral analysis methods
Haaland, David M.
2002-01-01
A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
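A minimal sketch of the augmentation idea on synthetic spectra: a spectral shape absent from the calibration is appended to the matrix of pure-component spectra before estimating concentrations. This is a simplified classical-least-squares-only illustration, not the patented hybrid procedure, which follows the CLS calibration with an inverse multivariate prediction step.

```python
import numpy as np

wav = np.linspace(0, 100, 400)
band = lambda c, w: np.exp(-0.5 * ((wav - c) / w) ** 2)   # synthetic band shape

K = np.vstack([band(30, 5), band(60, 7)])   # calibrated pure-component spectra
drift = band(85, 15)                        # shape of a non-calibrated effect

# Unknown mixture: 0.7 and 0.3 of the components plus drift and noise.
y = (0.7 * K[0] + 0.3 * K[1] + 0.4 * drift
     + np.random.default_rng(1).normal(0, 0.01, wav.size))

# Least-squares prediction without and with the added spectral shape.
c_plain, *_ = np.linalg.lstsq(K.T, y, rcond=None)
K_aug = np.vstack([K, drift])
c_hybrid, *_ = np.linalg.lstsq(K_aug.T, y, rcond=None)
print("plain:", np.round(c_plain, 3), " augmented:", np.round(c_hybrid[:2], 3))
```

The augmented estimate recovers the calibrated concentrations because the drift shape absorbs the variation the original calibration never saw.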
Homogeneity study of fixed-point continuous marine environmental and meteorological data: a review
NASA Astrophysics Data System (ADS)
Yang, Jinkun; Yang, Yang; Miao, Qingsheng; Dong, Mingmei; Wan, Fangfang
2018-02-01
The principle of inhomogeneity and the classification of homogeneity test methods are briefly described, and several common homogeneity test methods and their relative merits are discussed in detail. Then, based on applications of the different homogeneity methods to ground meteorological data and marine environmental data, the present status and progress of the field are reviewed. At present, homogeneity research on radiosonde and ground meteorological data is mature at home and abroad, and research and application to marine environmental data should also be given full attention. Carrying out a variety of test and correction methods, combined with a multi-method testing system, will make the results more reasonable and scientific, and can provide accurate first-hand information for coastal climate change research.
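As one example from the reviewed families of tests, a minimal sketch of the standard normal homogeneity test (SNHT) statistic for a single breakpoint; the series is synthetic, with an artificial shift standing in for, e.g., a station relocation or sensor change.

```python
import numpy as np

def snht(y):
    """SNHT statistic T(k) for every candidate break position k in a
    standardized series; a large max(T) suggests an inhomogeneity there."""
    z = (y - y.mean()) / y.std()
    n = len(z)
    return np.array([k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
                     for k in range(1, n)])

rng = np.random.default_rng(5)
series = rng.normal(0, 1, 200)
series[120:] += 1.5                     # artificial mean shift at index 120
T = snht(series)
print("break suggested at index", T.argmax() + 1, "T =", round(T.max(), 1))
```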
Continuous Problem of Function Continuity
ERIC Educational Resources Information Center
Jayakody, Gaya; Zazkis, Rina
2015-01-01
We examine different definitions presented in textbooks and other mathematical sources for "continuity of a function at a point" and "continuous function" in the context of introductory level Calculus. We then identify problematic issues related to definitions of continuity and discontinuity: inconsistency and absence of…
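For reference, the standard pointwise definition at issue: a function f with domain D is continuous at a point a of D when

\[
\forall \varepsilon > 0\ \exists \delta > 0\ \forall x \in D:\ |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon,
\]

and a continuous function is one that is continuous at every point of its domain.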
Dew-point hygrometry system for measurement of evaporative water loss in infants.
Ariagno, R L; Glotzbach, S F; Baldwin, R B; Rector, D M; Bowley, S M; Moffat, R J
1997-03-01
Evaporation of water from the skin is an important mechanism in thermal homeostasis. Resistance hygrometry, in which the water vapor pressure gradient above the skin surface is calculated, has been the measurement method of choice in the majority of pediatric investigations. However, resistance hygrometry is influenced by changes in ambient conditions such as relative humidity, surface temperature, and convection currents. We have developed a ventilated capsule method that minimizes these potential sources of measurement error and that allows second-by-second, long-term, continuous measurements of evaporative water loss in sleeping infants. Air with a controlled reference humidity (dew-point temperature = 0 degrees C) is delivered to a small, lightweight skin capsule and mixed with the vapor on the surface of the skin. The dew point of the resulting mixture is measured by using a chilled mirror dew-point hygrometer. The system indicates leaks, is mobile, and is accurate within 2%, as determined by gravimetric calibration. Examples from a recording of a 13-wk-old full-term infant obtained by using the system give evaporative water loss rates of approximately 0.02 mg H2O·cm-2·min-1 for normothermic baseline conditions and values up to 0.4 mg H2O·cm-2·min-1 when the subject was being warmed. The system is effective for clinical investigations that require dynamic measurements of water loss.
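The arithmetic behind such a capsule measurement can be sketched as follows, under assumed operating values: the inlet and outlet dew points give water-vapor densities via the Magnus saturation formula and the ideal gas law, and the ventilation flow converts the density difference into an evaporation rate per unit skin area.

```python
import numpy as np

def vapor_density(dew_c, air_c):
    """Water-vapor density (g/m^3) at a given dew point and air temperature."""
    e = 611.2 * np.exp(17.62 * dew_c / (243.12 + dew_c))  # Magnus: pressure, Pa
    return 1000 * e / (461.5 * (air_c + 273.15))          # ideal gas, Rv = 461.5

# Assumed operating point: 0 C dew point in, 5 C dew point out, 25 C air,
# 0.5 L/min capsule flow, 2 cm^2 capsule area.
rho_in, rho_out = vapor_density(0.0, 25.0), vapor_density(5.0, 25.0)
flow = 0.5e-3 / 60                  # m^3/s
area = 2.0                          # cm^2
rate = (rho_out - rho_in) * 1000 * flow / area * 60   # mg H2O / (cm^2 min)
print(round(rate, 3), "mg/cm^2/min")
```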
NASA Astrophysics Data System (ADS)
Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.
2016-06-01
High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is needed that can acquire information about all kinds of road signs automatically and efficiently. Owing to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to efficiently acquire large numbers of images and 3D point clouds with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, the candidate regions and camera information, and 3) road sign recognition using a template matching method after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show very high success in detection and recognition of road signs, even under challenging conditions such as discoloration, deformation and partial occlusion.
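Stage 3 can be sketched with OpenCV as below; the file names and the acceptance threshold are hypothetical, and the paper's own matcher may differ in detail.

```python
import cv2

# Hypothetical inputs: a shape-normalized candidate and a reference sign template.
candidate = cv2.imread("candidate_sign.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template_stop.png", cv2.IMREAD_GRAYSCALE)
candidate = cv2.resize(candidate, template.shape[::-1])   # match template size

# Normalized cross-correlation; a score near 1 means a confident match.
score = cv2.matchTemplate(candidate, template, cv2.TM_CCOEFF_NORMED).max()
print("match" if score > 0.7 else "no match", round(float(score), 2))
```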
Optically controllable nanobreaking of metallic nanowires
NASA Astrophysics Data System (ADS)
Zhou, Lina; Lu, Jinsheng; Yang, Hangbo; Luo, Si; Wang, Wei; Lv, Jun; Qiu, Min; Li, Qiang
2017-02-01
Nanobreaking of nanowires, like nanojoining, is necessary for manufacturing integrated nanodevices. In this letter, we develop a method for breaking gold pentagonal nanowires by taking advantage of the photothermal effect with a 532 nm continuous-wave (CW) laser. The critical power required for nanobreaking is much lower for perpendicular polarization than for parallel polarization. By controlling the polarization and the power of the irradiation light for nanobreaking, the nanowires can be cut into segments with gap widths ranging from dozens of nanometers to several micrometers. This CW light-induced single-point nanobreaking of metallic nanowires provides a highly useful and promising method for constructing nanosystems.
Quantitative Tomography for Continuous Variable Quantum Systems
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
2018-03-01
We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
The numerical calculation of laminar boundary-layer separation
NASA Technical Reports Server (NTRS)
Klineberg, J. M.; Steger, J. L.
1974-01-01
Iterative finite-difference techniques are developed for integrating the boundary-layer equations, without approximation, through a region of reversed flow. The numerical procedures are used to calculate incompressible laminar separated flows and to investigate the conditions for regular behavior at the point of separation. Regular flows are shown to be characterized by an integrable saddle-type singularity that makes it difficult to obtain numerical solutions which pass continuously into the separated region. The singularity is removed and continuous solutions ensured by specifying the wall shear distribution and computing the pressure gradient as part of the solution. Calculated results are presented for several separated flows and the accuracy of the method is verified. A computer program listing and complete solution case are included.
Method for controlling boiling point distribution of coal liquefaction oil product
Anderson, R.P.; Schmalzer, D.K.; Wright, C.H.
1982-12-21
The relative ratio of heavy distillate to light distillate produced in a coal liquefaction process is continuously controlled by automatically and continuously controlling the ratio of heavy distillate to light distillate in a liquid solvent used to form the feed slurry to the coal liquefaction zone, and varying the weight ratio of heavy distillate to light distillate in the liquid solvent inversely with respect to the desired weight ratio of heavy distillate to light distillate in the distillate fuel oil product. The concentration of light distillate and heavy distillate in the liquid solvent is controlled by recycling predetermined amounts of light distillate and heavy distillate for admixture with feed coal to the process in accordance with the foregoing relationships. 3 figs.
Method for controlling boiling point distribution of coal liquefaction oil product
Anderson, Raymond P.; Schmalzer, David K.; Wright, Charles H.
1982-12-21
The relative ratio of heavy distillate to light distillate produced in a coal liquefaction process is continuously controlled by automatically and continuously controlling the ratio of heavy distillate to light distillate in a liquid solvent used to form the feed slurry to the coal liquefaction zone, and varying the weight ratio of heavy distillate to light distillate in the liquid solvent inversely with respect to the desired weight ratio of heavy distillate to light distillate in the distillate fuel oil product. The concentration of light distillate and heavy distillate in the liquid solvent is controlled by recycling predetermined amounts of light distillate and heavy distillate for admixture with feed coal to the process in accordance with the foregoing relationships.
NASA Astrophysics Data System (ADS)
Zhang, Yu-Feng; Zhang, Xiang-Zhi; Dong, Huan-He
2017-12-01
Two new shift operators are introduced for which a few differential-difference equations are generated by applying the R-matrix method. These equations can be reduced to the standard Toda lattice equation and (1+1)-dimensional and (2+1)-dimensional Toda-type equations which have important applications in hydrodynamics, plasma physics, and so on. Based on these consequences, we deduce the Hamiltonian structures of two discrete systems. Finally, we obtain some new infinite conservation laws of two discrete equations and employ Lie-point transformation group to obtain some continuous symmetries and part of invariant solutions for the (1+1) and (2+1)-dimensional Toda-type equations. Supported by the Fundamental Research Funds for the Central University under Grant No. 2017XKZD11
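For reference, the standard Toda lattice to which the generated differential-difference equations reduce can be written for the displacements q_n as

\[
\ddot q_n = e^{\,q_{n-1} - q_n} - e^{\,q_n - q_{n+1}},
\]

whose (1+1)- and (2+1)-dimensional Toda-type generalizations are the ones treated in the paper.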
NASA Astrophysics Data System (ADS)
Diodato, A.; Cafarelli, A.; Schiappacasse, A.; Tognarelli, S.; Ciuti, G.; Menciassi, A.
2018-02-01
High intensity focused ultrasound (HIFU) is an emerging therapeutic solution that enables non-invasive treatment of several pathologies, mainly in oncology. On the other hand, accurate targeting of moving abdominal organs (e.g. liver, kidney, pancreas) is still an open challenge. This paper proposes a novel method to compensate the physiological respiratory motion of organs during HIFU procedures, by exploiting a robotic platform for ultrasound-guided HIFU surgery provided with a therapeutic annular phased array transducer. The proposed method enables us to keep the same contact point between the transducer and the patient’s skin during the whole procedure, thus minimizing the modification of the acoustic window during the breathing phases. The motion of the target point is compensated through the rotation of the transducer around a virtual pivot point, while the focal depth is continuously adjusted thanks to the axial electronically steering capabilities of the HIFU transducer. The feasibility of the angular motion compensation strategy has been demonstrated in a simulated respiratory-induced organ motion environment. Based on the experimental results, the proposed method appears to be significantly accurate (i.e. the maximum compensation error is always under 1 mm), thus paving the way for the potential use of this technique for in vivo treatment of moving organs, and therefore enabling a wide use of HIFU in clinics.
A sub-target approach to the kinodynamic motion control of a wheeled mobile robot
NASA Astrophysics Data System (ADS)
Motonaka, Kimiko; Watanabe, Keigo; Maeyama, Shoichi
2018-02-01
A mobile robot with two independently driven wheels is popular, but it is difficult to stabilize with a continuous, constant-gain controller because of its nonholonomic property. It is guaranteed that a nonholonomic controlled object can always be converged to an arbitrary point using a switching control method or a quasi-continuous control method based on an invariant manifold in a chained form. Building on this, the authors previously proposed a kinodynamic controller that converges the states of such a two-wheeled mobile robot to an arbitrary target position while avoiding obstacles, by combining control based on the invariant manifold with a harmonic potential field (HPF). On the other hand, previous research confirmed that in some cases the robot cannot avoid an obstacle because there is not enough space to converge from the current state to the target state. In this paper, we propose a method that divides the path to the final target position into several sub-target positions and moves the robot step by step; simulation confirms that the robot can converge to the target position while avoiding obstacles using the proposed method.
Common arc method for diffraction pattern orientation.
Bortel, Gábor; Tegze, Miklós
2011-11-01
Very short pulses of X-ray free-electron lasers opened the way to obtaining diffraction signal from single particles beyond the radiation dose limit. For three-dimensional structure reconstruction many patterns are recorded in the object's unknown orientation. A method is described for the orientation of continuous diffraction patterns of non-periodic objects, utilizing intensity correlations in the curved intersections of the corresponding Ewald spheres, and hence named the common arc orientation method. The present implementation of the algorithm optionally takes into account Friedel's law, handles missing data and is capable of determining the point group of symmetric objects. Its performance is demonstrated on simulated diffraction data sets and verification of the results indicates a high orientation accuracy even at low signal levels. The common arc method fills a gap in the wide palette of orientation methods. © 2011 International Union of Crystallography
Novel method of realizing metal freezing points by induced solidification
NASA Astrophysics Data System (ADS)
Ma, C. K.
1997-07-01
The freezing point of a pure metal, tf, is the temperature at which the solid and liquid phases are in equilibrium. The purest metal available is actually a dilute alloy. Normally, the liquidus point of a sample, tl, at which the amount of the solid phase in equilibrium with the liquid phase is minute, provides the closest approximation to tf. Thus the experimental realization of tf is a matter of realizing tl. The common method is to cool a molten sample continuously so that it supercools and recalesces. The highest temperature after recalescence is normally the best experimental value of tl. In the realization, supercooling of the sample at the sample container and the thermometer well is desirable for the formation of dual solid-liquid interfaces to thermally isolate the sample and the thermometer. However, the subsequent recalescence of the supercooled sample requires the formation of a certain amount of solid, which is not minute. Obviously, the plateau temperature is not the liquidus point. In this article we describe a method that minimizes supercooling. The condition that provides tl is closely approached so that the latter may be measured. As the temperature of the molten sample approaches the anticipated value of tl, a small solid of the same alloy is introduced into the sample to induce solidification. In general, solidification does not occur as long as the temperature is above or at tl, and occurs as soon as the sample supercools minutely. Thus tl can be obtained, in principle, by observing the temperature at which induced solidification begins. In case the solid is introduced after the sample has supercooled slightly, a slight recalescence results and the subsequent maximum temperature is a close approximation to tl. We demonstrate that the principle of induced solidification is indeed applicable to freezing point measurements by applying it to the design of a copper-freezing-point cell for industrial applications, in which a supercooled sample is reheated and then induced to solidify by the solidification of an auxiliary sample. Further experimental studies are necessary to assess the practical advantages and disadvantages of the induction method.
Visual analytics of large multidimensional data using variable binned scatter plots
NASA Astrophysics Data System (ADS)
Hao, Ming C.; Dayal, Umeshwar; Sharma, Ratnesh K.; Keim, Daniel A.; Janetzko, Halldór
2010-01-01
The scatter plot is a well-known method of visualizing pairs of continuous variables in two dimensions. Multidimensional data can be depicted in a scatter plot matrix. Scatter plots are intuitive and easy to use, but often have a high degree of overlap which may occlude a significant portion of the data. In this paper, we propose variable binned scatter plots to allow the visualization of large amounts of data without overlap. The basic idea is to use a non-uniform (variable) binning of the x and y dimensions and to plot all the data points that fall within each bin into corresponding squares. Further, we map a third attribute to color for visualizing clusters. Analysts are able to interact with individual data points for record-level information. We have applied these techniques to real-world problems in credit card fraud and data center energy consumption, to visualize their data distributions and cause-effect relationships among multiple attributes. A comparison of our methods with two recent well-known variants of scatter plots is included.
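A minimal sketch of the idea, with quantile-based (non-uniform) bin edges and a synthetic third attribute averaged per bin and mapped to color; this is an illustration, not the authors' implementation.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
x, y = rng.lognormal(0, 0.6, 50_000), rng.normal(0, 1, 50_000)
z = 0.5 * x + rng.normal(0, 0.3, x.size)        # third attribute for color

# Variable bins: quantile edges put roughly equal point counts per row/column,
# so dense regions get narrow bins and sparse regions get wide ones.
xe = np.quantile(x, np.linspace(0, 1, 41))
ye = np.quantile(y, np.linspace(0, 1, 41))
sums, _, _ = np.histogram2d(x, y, bins=[xe, ye], weights=z)
cnts, _, _ = np.histogram2d(x, y, bins=[xe, ye])
mean_z = np.where(cnts > 0, sums / np.maximum(cnts, 1), np.nan)

plt.pcolormesh(xe, ye, mean_z.T, shading="auto")  # each quad is one variable bin
plt.colorbar(label="mean z per bin")
plt.show()
```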
Diagnosing cystic fibrosis-related diabetes: current methods and challenges.
Prentice, Bernadette; Hameed, Shihab; Verge, Charles F; Ooi, Chee Y; Jaffe, Adam; Widger, John
2016-07-01
Cystic fibrosis-related diabetes (CFRD) is the end-point of a spectrum of glucose abnormalities in cystic fibrosis that begins with early insulin deficiency and ultimately results in accelerated nutritional decline and loss of lung function. Current diagnostic and management regimens are unable to entirely reverse this clinical decline. This review summarises the current understanding of the pathophysiology of CFRD, the issues associated with using oral glucose tolerance tests in CF and the challenges faced in making the diagnosis of CFRD. Medline database searches were conducted using search terms "Cystic Fibrosis Related Diabetes", "Cystic Fibrosis" AND "glucose", "Cystic Fibrosis" AND "insulin", "Cystic Fibrosis" AND "Diabetes". Additionally, reference lists were studied. Expert commentary: Increasing evidence points to early glucose abnormalities being clinically relevant in cystic fibrosis and as such novel diagnostic methods such as continuous glucose monitoring or 30 minute sampled oral glucose tolerance test (OGTT) may play a key role in the future in the screening and diagnosis of early glucose abnormalities in CF.
Fitting C2 Continuous Parametric Surfaces to Frontiers Delimiting Physiologic Structures
Bayer, Jason D.
2014-01-01
We present a technique to fit C2 continuous parametric surfaces to scattered geometric data points forming frontiers delimiting physiologic structures in segmented images. Such a mathematical representation is interesting because it facilitates a large number of operations in modeling. While the fitting of C2 continuous parametric curves to scattered geometric data points is quite trivial, the fitting of C2 continuous parametric surfaces is not. The difficulty comes from the fact that each scattered data point must be assigned a unique parametric coordinate, and the fit is quite sensitive to their distribution on the parametric plane. We present a new approach in which a polygonal (quadrilateral or triangular) surface is extracted from the segmented image. This surface is subsequently projected onto a parametric plane in a manner that ensures a one-to-one mapping. The resulting polygonal mesh is then regularized for area and edge length. From this point, surface fitting is relatively trivial. The novelty of our approach lies in the regularization of the polygonal mesh. Process performance is assessed with the reconstruction of a geometric model of mouse heart ventricles from a computerized tomography scan. Our results show an excellent reproduction of the geometric data with surfaces that are C2 continuous. PMID:24782911
NASA Astrophysics Data System (ADS)
Šiljeg, A.; Lozić, S.; Šiljeg, S.
2014-12-01
The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam sonar Hydrostar 4300, GPS devices Ashtech Promark 500 - base, and a Thales Z-Max - rover. A total of 12 851 points were gathered. In order to find continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in certain areas that were not directly measured, by using an appropriate interpolation method. The main aims of this research were as follows: to compare the efficiency of 16 different interpolation methods, to discover the most appropriate interpolators for the development of a raster model, to calculate the surface area and volume of Lake Vrana, and to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was ROF multi-quadratic, and the best geostatistical, ordinary cokriging. The mean quadratic error in both methods measured less than 0.3 m. The quality of the interpolation methods was analysed in 2 phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.
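The method-comparison workflow can be sketched on synthetic soundings by holding out a subset of points, interpolating the rest, and scoring each interpolator by its holdout RMSE; the study's sixteen methods (including ROF multi-quadratic and ordinary cokriging) are not reproduced here.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, (2000, 2))                        # synthetic sounding sites
depth = -5 + 2 * np.sin(pts[:, 0]) + 1.5 * np.cos(pts[:, 1])  # synthetic lake bed

test = rng.choice(len(pts), 200, replace=False)            # holdout for validation
train = np.setdiff1d(np.arange(len(pts)), test)

for method in ("linear", "cubic"):
    pred = griddata(pts[train], depth[train], pts[test], method=method)
    rmse = np.sqrt(np.nanmean((pred - depth[test]) ** 2))
    print(f"{method}: RMSE = {rmse:.3f} m")
```

Integrating the winning interpolated surface over the lake polygon then yields the volume, which is how the raster models feed the surface-area and volume calculations.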
Tanaka, Tagayasu; Inaba, Ryoichi; Aoyama, Atsuhito
2016-01-01
Objectives: This study investigated the actual levels of noise and low-frequency sound at firework events and their impact on pyrotechnicians. Methods: Data on firework noise and low-frequency sound were obtained at a point located approximately 100 m away from the launch site of a firework display held in "A" City in 2013. We obtained the data by continuously measuring and analyzing the equivalent continuous sound level (Leq) and the one-third octave band spectra of the noise and low-frequency sound emanating from the major firework detonations, and predicted the sound levels at the launch site itself. Results: Sound levels of 100-115 dB and low-frequency sound levels of 100-125 dB were observed at night. The maximum and mean Leq values were 97 and 95 dB, respectively. The launch noise level predicted from the sounds measured at the measurement point (85 dB) was 133 dB. Occupational exposure to noise for pyrotechnicians at the remote operation point (located 20-30 m away from the launch site) was estimated to be below 100 dB. Conclusions: Pyrotechnicians are exposed to very loud noise (>100 dB) at the launch point. We believe that it is necessary to implement measures such as providing earplugs or earmuffs, posting warnings at the workplace, and executing remote launching operations to prevent hearing loss caused by occupational noise exposure of pyrotechnicians. It is predicted that both sound levels and low-frequency sound would be reduced by approximately 35 dB at the remote operation site. PMID:27725489
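For reference, the equivalent continuous sound level is the energy mean of the instantaneous levels over the measurement period; a minimal sketch with hypothetical one-second samples:

```python
import numpy as np

# Hypothetical one-second sound levels (dB) around firework detonations.
L = np.array([78, 80, 95, 110, 102, 88, 82, 79, 115, 90])

# Leq: convert to relative energy, average, convert back to decibels.
leq = 10 * np.log10(np.mean(10 ** (L / 10)))
print(f"Leq = {leq:.1f} dB")
```

Because the average is taken in the energy domain, a few loud detonations dominate Leq, which is why the mean Leq (95 dB) sits far above the quiet-interval levels.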
CONTINUOUS ANALYZER UTILIZING BOILING POINT DETERMINATION
Pappas, W.S.
1963-03-19
A device is designed for continuously determining the boiling point of a mixture of liquids. The device comprises a distillation chamber for boiling a liquid; outlet conduit means for maintaining the liquid contents of said chamber at a constant level; a reflux condenser mounted above said distillation chamber; means for continuously introducing an incoming liquid sample into said reflux condenser and into intimate contact with vapors refluxing within said condenser; and means for measuring the temperature of the liquid flowing through said distillation chamber. (AEC)
Hameed, Waqas; Azmat, Syed Khurram; Ali, Moazzam; Ishaque, Muhammad; Abbas, Ghazunfer; Munroe, Erik; Harrison, Rebecca; Shamsi, Wajahat Hussain; Mustafa, Ghulam; Khan, Omar Farooq; Ali, Safdar; Ahmed, Aftab
2016-01-01
Background The use of long-acting reversible contraceptive (LARC) methods is very low in Pakistan, with high discontinuation rates mainly attributed to method-related side effects. Mixed evidence is available on the effectiveness of different client follow-up approaches used to ensure method continuation. We compared the effectiveness of active and passive follow-up approaches in sustaining the use of LARC; within 'active' follow-up, we further compared a telephone-based versus a home-based approach in rural Punjab, Pakistan. Methods This was a 12-month multicentre non-inferiority trial conducted in twenty-two (16 rural and 6 urban) franchised reproductive healthcare facilities in district Chakwal of Punjab province, between November 2013 and December 2014. The study comprised three groups of LARC clients: a) home-based follow-up, b) telephone-based follow-up, and c) passive or needs-based follow-up. Participants in the first two study groups received counselling on scheduled follow-up from the field workers at 1, 3, 6, 9, and 12 months post-insertion, whereas participants in the third group were asked to contact the health facility if in need of medical assistance relating to LARC method use. Study participants were recruited with equal allocation to each study group, but participants were not randomized. The analyses are based on 1,246 LARC (intra-uterine contraceptive device and implant) users who completed approximately 12 months of follow-up. The non-inferiority margin was kept at five percentage points for the comparison of active and passive follow-up, and six percentage points for the comparison of the telephone-based and home-based approaches. The primary outcome was the cumulative probability of method continuation at 12 months among LARC users. Results The numbers of women recruited into the home-based, telephone-based, and passive groups were 400, 419 and 427, respectively. The cumulative probability of LARC continuation at 12 months was 87.6% (95% CI 83.8 to 90.6) among women who received home-based follow-up; 89.1% (95% CI 85.7 to 91.8) among those who received telephone-based follow-up; and 83.8% (95% CI 79.8 to 87.1) among those in the passive or needs-based follow-up group. The probability of continuation among women who were actively followed up by field health educators, either through home visits or by telephone, was 88.3% (95% CI 85.9 to 90.0). An adjusted risk difference of -4.1 (95% CI -7.8 to -0.28; p-value = 0.035) was estimated between active and passive follow-up, whereas within active client follow-up, the telephone-based approach was found to be as effective as the home-based approach, with an adjusted risk difference of 1.8 (95% CI -2.7 to 6.4; p-value = 0.431). Conclusion A passive follow-up approach was 5% inferior to an active follow-up approach, whereas telephone-based follow-up was as effective as home-based visits in sustaining the use of LARC, and was far more resource efficient. Therefore, active follow-up could improve method continuation, especially in the critical post-insertion period. PMID:27584088
Sampling Singular and Aggregate Point Sources of Carbon Dioxide from Space Using OCO-2
NASA Astrophysics Data System (ADS)
Schwandner, F. M.; Gunson, M. R.; Eldering, A.; Miller, C. E.; Nguyen, H.; Osterman, G. B.; Taylor, T.; O'Dell, C.; Carn, S. A.; Kahn, B. H.; Verhulst, K. R.; Crisp, D.; Pieri, D. C.; Linick, J.; Yuen, K.; Sanchez, R. M.; Ashok, M.
2016-12-01
Anthropogenic carbon dioxide (CO2) sources increasingly tip the natural balance between natural carbon sources and sinks. Space-borne measurements offer opportunities to detect and analyze point source emission signals anywhere on Earth. Singular continuous point source plumes from power plants or volcanoes mix turbulently into their proximal background fields. In contrast, plumes of aggregate point sources such as cities, and transportation or fossil fuel distribution networks, mix into each other and may therefore result in broader and more persistent excess signals of total column averaged CO2 (XCO2). NASA's first satellite dedicated to atmospheric CO2 observation, the Orbiting Carbon Observatory-2 (OCO-2), launched in July 2014 and now leads the afternoon constellation of satellites (A-Train). While continuously collecting measurements in eight footprints across a narrow (< 10 km) swath, it occasionally cross-cuts coincident emission plumes. For singular point sources like volcanoes and coal-fired power plants, we have developed OCO-2 data discovery tools and a proxy detection method for plumes using SO2-sensitive TIR imaging data (ASTER). This approach offers a path toward automating plume detections with subsequent matching and mining of OCO-2 data. We found several distinct singular-source CO2 signals. For aggregate point sources, we investigated whether OCO-2's multi-sounding swath observing geometry can reveal intra-urban spatial emission structures in the observed variability of XCO2 data. OCO-2 data demonstrate that we can detect localized excess XCO2 signals of 2 to 6 ppm against suburban and rural backgrounds. Compared to single-shot GOSAT soundings, which detected urban/rural XCO2 differences in megacities (Kort et al., 2012), the OCO-2 swath geometry opens the path to future capabilities enabling urban characterization of greenhouse gases using hundreds of soundings over a city at each satellite overpass.
Identifying and counting point defects in carbon nanotubes.
Fan, Yuwei; Goldsmith, Brett R; Collins, Philip G
2005-12-01
The prevailing conception of carbon nanotubes and particularly single-walled carbon nanotubes (SWNTs) continues to be one of perfectly crystalline wires. Here, we demonstrate a selective electrochemical method that labels point defects and makes them easily visible for quantitative analysis. High-quality SWNTs are confirmed to contain one defect per 4 µm on average, with a distribution weighted towards areas of SWNT curvature. Although this defect density compares favourably to high-quality silicon single crystals, the presence of a single defect can have tremendous electronic effects in one-dimensional conductors such as SWNTs. We demonstrate a one-to-one correspondence between chemically active point defects and sites of local electronic sensitivity in SWNT circuits, confirming the expectation that individual defects may be critical to understanding and controlling variability, noise and chemical sensitivity in SWNT electronic devices. By varying the SWNT synthesis technique, we further show that the defect spacing can be varied over orders of magnitude. The ability to detect and analyse point defects, especially at very low concentrations, indicates the promise of this technique for quantitative process analysis, especially in nanoelectronics development.
NASA Astrophysics Data System (ADS)
Von, W. C.; Ismail, M. A. M.
2017-10-01
Knowledge of the geological profile ahead of the tunnel face is important for minimizing risk in tunnel excavation work and for cost control of preventative measures. Because of the mountainous terrain, site investigation with vertical boring is not recommended for obtaining the geological profile for the Pahang-Selangor Raw Water Transfer project. Hence, the tunnel seismic prediction (TSP) method is adopted to predict the geological profile ahead of the tunnel face. To evaluate the TSP results, IBM SPSS Statistics 22 is used to run an artificial neural network (ANN) analysis that back-calculates predicted Rock Grade Points (JH) from actual Rock Grade Points (JH) using Vp, Vs and Vp/Vs from TSP. The results show good correlation between predicted and actual Rock Grade Points (JH). In other words, TSP can usefully predict the geological profile ahead of the tunnel face while allowing TBM excavation work to continue. Identifying weak zones or faults ahead of the tunnel face is crucial so that preventative measures can be carried out in advance for safer tunnel excavation.
Fractional charge and inter-Landau-level states at points of singular curvature.
Biswas, Rudro R; Son, Dam Thanh
2016-08-02
The quest for universal properties of topological phases is fundamentally important because these signatures are robust to variations in system-specific details. Aspects of the response of quantum Hall states to smooth spatial curvature are well-studied, but challenging to observe experimentally. Here we go beyond this prevailing paradigm and obtain general results for the response of quantum Hall states to points of singular curvature in real space; such points may be readily experimentally actualized. We find, using continuum analytical methods, that the point of curvature binds an excess fractional charge and sequences of quantum states split away, energetically, from the degenerate bulk Landau levels. Importantly, these inter-Landau-level states are bound to the topological singularity and have energies that are universal functions of bulk parameters and the curvature. Our exact diagonalization of lattice tight-binding models on closed manifolds demonstrates that these results continue to hold even when lattice effects are significant. An important technological implication of these results is that these inter-Landau-level states, being both energetically and spatially isolated quantum states, are promising candidates for constructing qubits for quantum computation.
Visualization of 3-D tensor fields
NASA Technical Reports Server (NTRS)
Hesselink, L.
1996-01-01
Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.
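To make the notion of degenerate points concrete, here is a minimal numerical sketch (not from the report): it samples an invented symmetric 3-D tensor field on a grid and flags the points where two eigenvalues nearly coincide.

    import numpy as np

    def tensor_field(x, y, z):
        # Invented smooth symmetric tensor field; its upper 2x2 block is
        # degenerate exactly on the line x == y, z == 0.
        return np.array([[x, z, 0.0],
                         [z, y, 0.0],
                         [0.0, 0.0, 1.0]])

    tol = 1e-2
    pts = np.linspace(-1.0, 1.0, 21)
    flagged = []
    for x in pts:
        for y in pts:
            for z in pts:
                w = np.linalg.eigvalsh(tensor_field(x, y, z))  # sorted eigenvalues
                if w[1] - w[0] < tol or w[2] - w[1] < tol:     # two eigenvalues coincide
                    flagged.append((x, y, z))
    print(len(flagged), "grid points flagged as (near-)degenerate")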
NASA Astrophysics Data System (ADS)
Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael
2017-09-01
Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data, choosing the distance between two spline sampling points so that the analysis is sensitive to a large spectrum of gravity waves.
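The effect described above can be reproduced with standard spline fitting; the following sketch (our own construction, using scipy's least-squares spline with equidistant interior knots as the "spline sampling points") shows that adding knots eventually makes the fitted background chase the superimposed waves instead of improving the approximation.

    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 500)
    background = 0.5 * x                          # slowly varying "background"
    waves = 0.3 * np.sin(2.0 * np.pi * x / 1.5)   # superimposed fluctuations
    series = background + waves + rng.normal(0.0, 0.05, x.size)

    for n_knots in (3, 10, 40):
        # equidistant interior knots = spline sampling points
        knots = np.linspace(x[0] + 0.1, x[-1] - 0.1, n_knots)
        spline = LSQUnivariateSpline(x, series, knots, k=3)
        rms = np.sqrt(np.mean((spline(x) - background) ** 2))
        print(f"{n_knots:3d} knots: RMS deviation from true background = {rms:.3f}")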
47 CFR 101.101 - Frequency availability.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE...—(Part 78) CC: Common Carrier Fixed Point-to-Point Microwave Service—(Part 101, Subparts C & I) DBS... Distribution Service—(Part 21) OFS: Private Operational Fixed Point-to-Point Microwave Service—(Part 101...
47 CFR 101.101 - Frequency availability.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE...—(Part 78) CC: Common Carrier Fixed Point-to-Point Microwave Service—(Part 101, Subparts C & I) DBS... Distribution Service—(Part 21) OFS: Private Operational Fixed Point-to-Point Microwave Service—(Part 101...
47 CFR 101.107 - Frequency tolerance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE... to private operational fixed point-to-point microwave and stations providing MVDDS. 5 For private operational fixed point-to-point microwave systems, with a channel greater than or equal to 50 KHz bandwidth...
47 CFR 101.101 - Frequency availability.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE...—(Part 78) CC: Common Carrier Fixed Point-to-Point Microwave Service—(Part 101, Subparts C & I) DBS... Distribution Service—(Part 21) OFS: Private Operational Fixed Point-to-Point Microwave Service—(Part 101...
Vargas, E; Ruiz, M A; Campuzano, S; Reviejo, A J; Pingarrón, J M
2016-03-31
A non-destructive, rapid and simple-to-use sensing method for the direct determination of glucose in non-processed fruits is described. The strategy involves on-line microdialysis sampling coupled with a continuous flow system with amperometric detection at an enzymatic biosensor. Apart from the direct determination of glucose in fruit juices and blended fruits, this work describes for the first time the successful application of an enzymatic biosensor-based electrochemical approach to the non-invasive determination of glucose in raw fruits. The methodology correlates, through a previous calibration set-up, the amperometric signal generated from glucose in non-processed fruits with its content in % (w/w). A comparison of the results obtained using the proposed approach in different fruits with those provided by another method, involving the same commercial biosensor as amperometric detector in stirred solutions, showed no significant differences. Moreover, in comparison with other available methodologies, this microdialysis-coupled continuous-flow amperometric biosensor procedure features straightforward sample preparation, low cost, reduced assay time (sampling rate of 7 h⁻¹) and ease of automation. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mojahedi, Mahdi; Shekoohinejad, Hamidreza
2018-02-01
In this paper, the temperature distribution in a continuous- and pulsed-end-pumped Nd:YAG rod crystal is determined using nonclassical and classical heat conduction theories. To find the temperature distribution in the crystal, the heat transfer differential equations, together with their boundary conditions, are derived based on a non-Fourier model, and the temperature distribution of the crystal is obtained by an analytical method. Then, by transferring the non-Fourier differential equations to matrix equations using the finite element method, the temperature and stress at every point of the crystal are calculated in the time domain. Based on the results, the classical and nonclassical theories are compared with respect to predicted rupture power. In continuous end pumping with equal input powers, non-Fourier theory predicts greater temperature and stress than Fourier theory, and it shows that crystal rupture power decreases as the relaxation time increases. In contrast, under single rectangular pulsed end pumping with equal input power, Fourier theory predicts higher temperature and stress than non-Fourier theory. It is also observed that, as the relaxation time increases, the maximum temperature and stress decrease.
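The abstract does not spell out which non-Fourier model is used; a common choice consistent with the "relaxation time" mentioned above is the Cattaneo-Vernotte (hyperbolic) conduction law, shown here as a hedged sketch in LaTeX:

    \mathbf{q} + \tau \frac{\partial \mathbf{q}}{\partial t} = -k\,\nabla T,
    \qquad
    \tau\,\frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t}
      = \alpha\,\nabla^2 T + \frac{1}{\rho c}\Bigl(Q + \tau\,\frac{\partial Q}{\partial t}\Bigr),

where τ is the relaxation time, α = k/(ρc) the thermal diffusivity, and Q the pump heat source; setting τ = 0 recovers the classical Fourier heat equation.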
Advanced continuous cultivation methods for systems microbiology.
Adamberg, Kaarel; Valgepea, Kaspar; Vilu, Raivo
2015-09-01
Increasing the throughput of systems biology-based experimental characterization of in silico-designed strains has great potential for accelerating the development of cell factories. For this, analysis of metabolism in the steady state is essential, as only this enables the unequivocal definition of the physiological state of cells, which is needed for the complete description and in silico reconstruction of their phenotypes. In this review, we show that for a systems microbiology approach, high-resolution characterization of metabolism in the steady state, termed growth space analysis (GSA), can be achieved by using advanced continuous cultivation methods termed changestats. In changestats, an environmental parameter is continuously changed at a constant rate within one experiment whilst maintaining cells in a physiological steady state similar to chemostats. This increases the resolution and throughput of GSA compared with chemostats and, moreover, enables following the dynamics of metabolism and detecting metabolic switch-points and optimal growth conditions. We also describe the concept, challenges and necessary criteria of the systematic analysis of steady-state metabolism. Finally, we propose that such systematic characterization of the steady-state growth space of cells using changestats has value not only for fundamental studies of metabolism, but also for systems biology-based metabolic engineering of cell factories.
A Review and Annotated Bibliography of Training Performance Measurement and Assessment Literature
1988-10-01
[Fragment, recovered from OCR: examines work environments and organizational climate questionnaires; identifies empirical measures of Army unit effectiveness. Key points: looks at inspection reports, mission accomplishment results, efficiency measures, etc. Concludes that researchers should investigate means for developing more empirical data, better analytic methods, and standardized measurement.]
Electromagnetic plane-wave pulse transmission into a Lorentz half-space.
Cartwright, Natalie A
2011-12-01
The propagation of an electromagnetic plane-wave signal obliquely incident upon a Lorentz half-space is studied analytically. Time-domain asymptotic expressions that increase in accuracy with propagation distance are derived by application of uniform saddle point methods on the Fourier-Laplace integral representation of the transmitted field. The results are shown to be continuous in time and comparable with numerical calculations of the field. Arrival times and angles of refraction are given for prominent transient pulse features and the steady-state signal.
An Improved Search Approach for Solving Non-Convex Mixed-Integer Non Linear Programming Problems
NASA Astrophysics Data System (ADS)
Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri
2018-01-01
The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. The strategy of releasing nonbasic variables from their bounds, combined with the "active constraint" method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to their neighbouring integer points. Successful implementation of these algorithms was achieved on various test problems.
NASA Astrophysics Data System (ADS)
Schmidt, J. B.
1985-09-01
This thesis investigates ways of improving the real-time performance of the Stockpoint Logistics Integrated Communication Environment (SPLICE). Performance evaluation through continuous monitoring activities and performance studies are the principal vehicles discussed. The method for implementing this performance evaluation process is the measurement of predefined performance indexes, and performance indexes that would measure these areas are proposed for SPLICE. Existing SPLICE capability to carry out performance evaluation is explored, and recommendations are made to enhance that capability.
2009-03-01
[Fragment: describes 1-D Lagrange polynomial local basis functions defined on Legendre-Gauss-Lobatto interpolation points; the remaining text is figure-caption residue for contour plots (-0.05 to 0.525 K at 0.025 K intervals) after 700 s at resolutions of 20, 10, and 5 m, all run with 10th-order polynomials.]
Costs and Financing of Literacy [Coûts et Financement de l'Alphabétisation]
NASA Astrophysics Data System (ADS)
Diagne, Amadou Wade
2008-11-01
While the costs of literacy programmes continue to outstrip the resources available, this article argues that much can be done by bringing more efficiency and clarity into accounting and financing procedures. Drawing on the example of Senegal, the author argues for more effective methods of calculating the costs of programmes, analysing the various cost components, managing budgets and evaluating cost-effectiveness. He also points out the need for partnership between different sectors and emphasizes that political stability is very important for positive results.
Care in post-traumatic syndrome due to gender violence: a case report.
Sánchez-Herrero, Héctor; Duarte-Clíments, Gonzalo; González-Pérez, Teodoro; Sánchez-Gómez, María Begoña; Gomariz-Bolarín, David
This article describes a clinical case of a patient attended at a continuous care point for a generalized anxiety disorder, principally due to abuse suffered from her ex-partner. The patient was followed up at a family nursing clinic, and the appropriate nursing interventions were developed to cover a series of needs prioritized by nurses using the AREA method, taking into account the prioritization of the user herself. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
Evaluation of Storage Effects on Commercial, Biodegradable, Synthetic or Bio-sourced Hydraulic Fluid
2007-01-10
[Fragment: Coulometric Karl Fischer titration for water content was conducted in accordance with ASTM D 6304, Standard Test Method. Other tests listed include Flash Point (ASTM D 92), Lubricity (4-Ball Wear, ASTM D 4172), Total Acid Number (TAN, ASTM D 664) and Water Content by Karl Fischer coulometric titration (ASTM D 6304). The surviving text also notes that FLTT procured a new Karl Fischer water titrator in 2003 but continued to use the same method for the 2001 and 2005 data.]
A Study of Alternative Quantile Estimation Methods in Newsboy-Type Problems
1980-03-01
[Fragment: the newsboy cost equation may be formulated as a two-piece continuous linear function of the stock level S the decision maker selects to have on hand. With a limited number of observations, some approximations may be possible: three points which are near each other can be assumed to be linear, and an estimator built on them. Define the value r as r = [nq + 0.5], where [X] denotes the largest integer not exceeding X, and consider a linear estimate based on the order statistics around r.]
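A minimal sketch of the two ingredients that survive in the fragment above - the two-piece linear newsboy cost and the r = [nq + 0.5] order-statistic quantile rule - assuming standard overage/underage cost coefficients (the report's exact notation is not recoverable):

    import numpy as np

    def newsboy_cost(S, demand, c_over, c_under):
        # Two-piece continuous linear cost: overage for unsold stock,
        # underage for unmet demand (coefficient names are our own).
        return c_over * max(S - demand, 0.0) + c_under * max(demand - S, 0.0)

    def quantile_estimate(sample, q):
        # The r = [nq + 0.5] rule from the fragment: pick the order statistic
        # nearest the target quantile ([x] = largest integer not exceeding x).
        x = np.sort(np.asarray(sample))
        n = x.size
        r = int(np.floor(n * q + 0.5))
        r = min(max(r, 1), n)   # clamp to a valid order-statistic index
        return x[r - 1]

    rng = np.random.default_rng(2)
    demand_sample = rng.normal(100.0, 15.0, 25)
    c_over, c_under = 1.0, 4.0
    q = c_under / (c_over + c_under)   # critical fractile of the newsboy problem
    S = quantile_estimate(demand_sample, q)
    print(f"critical fractile q = {q:.2f}, stock level S = {S:.1f}, "
          f"cost if demand is 100: {newsboy_cost(S, 100.0, c_over, c_under):.1f}")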
2006 Global Demilitarization Symposium Volume 1 Presentations
2006-05-04
[Fragment from the list of presentations: continuous synthesis of CdSe-ZnS composite nanoparticles in a microfluidic reactor (producing inorganic crystals in continuous-reaction mode); crystallization of lead azide nanoparticles and their growth into dextrinated microparticles; "Point of Application Microfluidic Synthesis of Sensitive Explosive", Mr. Karl Wally, Sandia National Laboratories, Session III-A.]
40 CFR 141.100 - Criteria and procedures for public water systems using point-of-entry devices.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Criteria and procedures for public water systems using point-of-entry devices. 141.100 Section 141.100 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Use of Non-Centralized Treatment Devices §...
Custodio, Tomas; Garcia, Jose; Markovski, Jasmina; McKay Gifford, James; Hristovski, Kiril D; Olson, Larry W
2017-12-15
The underlying hypothesis of this study was that pseudo-equilibrium and column testing conditions would provide the same sorbent ranking trends, although the values of the sorbents' performance descriptors (e.g. sorption capacity) may vary because of the different kinetics and competition effects induced by the two testing approaches. To address this hypothesis, nano-enabled hybrid media were fabricated and their removal performance was assessed for two model contaminants under multi-point batch pseudo-equilibrium and continuous-flow conditions. Calculation of simultaneous removal capacity (SRC) indices demonstrated that the more resource-demanding continuous-flow tests generate the same performance rankings as the ones obtained by conducting the simpler pseudo-equilibrium tests. Furthermore, continuous overlap between the 98% confidence boundaries for each SRC index trend not only validated the hypothesis that both testing conditions provide the same ranking trends, but also indicated that the SRC indices are statistically the same for each medium, regardless of the employed method. In scenarios where rapid screening of new media is required to obtain the best-performing synthesis formulation, use of pseudo-equilibrium tests proved to be reliable. Considering that kinetics-induced effects on sorption capacity must not be neglected, the more resource-demanding column tests could be conducted only with the top-performing media that exhibit the highest sorption capacity. Copyright © 2017 Elsevier B.V. All rights reserved.
A Longitudinal Study of Childhood ADHD and Substance Dependence Disorders in Early Adulthood
Breyer, Jessie L.; Lee, Susanne; Winters, Ken; August, Gerald; Realmuto, George
2014-01-01
Attention-deficit/hyperactivity disorder (ADHD) is a childhood disorder that is associated with many behavioral and social problems. These problems may continue when an individual continues to meet criteria for ADHD as an adult. In this study, we describe the outcome patterns for three different groups: individuals who had ADHD as children but no longer meet criteria as adults (Childhood-Limited ADHD, n = 71); individuals who met ADHD criteria as children and continue to meet criteria as young adults (Persistent ADHD, n = 79); and a control group of individuals who did not meet ADHD diagnostic criteria in childhood or adulthood (n = 69). Groups were compared to examine differences in change in rates of alcohol, marijuana, and nicotine dependence over three time points in young adulthood (mean ages 18, 20 and 22 years). The method used is notable in that this longitudinal study followed participants from childhood into young adulthood instead of relying on retrospective self-reports from adult participants. Results indicated that there were no significant group differences in change in rates of substance dependence over time. However, individuals whose ADHD persisted into adulthood were significantly more likely to meet DSM-IV criteria for alcohol, marijuana, and nicotine dependence across the three time points after controlling for age, sex, childhood stimulant medication use, and childhood conduct problems. Implications of these findings, as well as recommendations for future research, are discussed. PMID:24731117
Acoustic method respiratory rate monitoring is useful in patients under intravenous anesthesia.
Ouchi, Kentaro; Fujiwara, Shigeki; Sugiyama, Kazuna
2017-02-01
Respiratory depression can occur during intravenous general anesthesia without tracheal intubation. A new acoustic method for respiratory rate monitoring, RRa® (Masimo Corp., Tokyo, Japan), has been reported to show good reliability in post-anesthesia care and emergency units. The purpose of this study was to investigate the reliability of the acoustic method for measurement of respiratory rate during intravenous general anesthesia, as compared with capnography. Patients with dental anxiety undergoing dental treatment under intravenous anesthesia without tracheal intubation were enrolled in this cohort study. Respiratory rate was recorded every 30 s using the acoustic method and capnography, and the detectability of respiratory rate was investigated for both methods. Of 1953 recorded respiratory rate data points, the number detected by the acoustic method (1884, 96.5%) was significantly higher than that by capnography (1682, 86.1%) (P < 0.0001). In the intraoperative period, there was a significant difference in the LOA (95% limits of agreement of the difference between the two methods plotted against their average)/ULLOA (points under the lower limit of agreement) depending on use or non-use of a dental air turbine (P < 0.0001). Compared with capnography, the acoustic method is useful for continuous monitoring of respiratory rate in spontaneously breathing subjects undergoing dental procedures under intravenous general anesthesia. However, the acoustic method may not detect respiratory rate accurately while a dental air turbine is in use.
Informative graphing of continuous safety variables relative to normal reference limits.
Breder, Christopher D
2018-05-16
Interpreting graphs of continuous safety variables can be complicated because differences in age, gender, and testing-site methodology may give rise to multiple reference limits. Furthermore, data below the lower limit of normal are compressed relative to points above the upper limit of normal. The objective of this study is to develop a graphing technique that addresses these issues and is visually intuitive. A mock dataset with multiple reference ranges is initially used to develop the graphing technique. Formulas are developed for conditions where data are above the upper limit of normal, normal, below the lower limit of normal, and below the lower limit of normal when the data value equals zero. After the formulae are developed, an anonymized dataset from an actual set of trials for an approved drug is used to compare the technique developed in this study to standard graphical methods. Formulas are derived for the novel graphing method based on multiples of the normal limits. The formula for values scaled between the upper and lower limits of normal is a novel application of a readily available scaling formula. The formula for the lower limit of normal is novel and addresses the issue of this value potentially being indeterminate when the result to be scaled as a multiple is zero. The formulae and graphing method described in this study provide a visually intuitive way to graph continuous safety data, including laboratory values and vital sign data.
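A hedged sketch of the multiples-of-normal-limits idea described above; the paper's exact formulas are not reproduced here, so treat every branch as illustrative. Values inside [LLN, ULN] are linearly rescaled, values outside are expressed as multiples of the nearer limit, and a guard keeps a zero result finite (the paper derives a dedicated formula for that case, which the floor below merely stands in for).

    EPS = 1e-6  # guard for x == 0; this floor is our own assumption, not the paper's formula

    def scale_to_limits(x, lln, uln):
        if x > uln:
            return x / uln                               # multiples of the ULN
        if x >= lln:
            return 2.0 * (x - lln) / (uln - lln) - 1.0   # linear within the limits
        return -lln / max(x, EPS)                        # multiples below the LLN

    # Example: ALT with hypothetical reference limits 7-56 U/L; per-observation
    # (lln, uln) pairs handle age/sex/site-specific limits.
    for value, lln, uln in [(120.0, 7.0, 56.0), (30.0, 7.0, 56.0), (3.0, 7.0, 56.0)]:
        print(value, "->", round(scale_to_limits(value, lln, uln), 2))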
Winkler, Harvey; Jacoby, Karny; Kalota, Susan; Snyder, Jeffrey; Cline, Kevin; Robertson, Kaiser; Kahan, Randall; Green, Lonny; McCammon, Kurt; Rovner, Eric; Rardin, Charles
The "Stress Incontinence Control, Efficacy and Safety Study" (SUCCESS) is a phase III study of the Vesair Balloon in women with stress urinary incontinence who had failed conservative therapy and either failed surgery, were not candidates for surgery, or chose not to have surgery. The safety and efficacy of the balloon at 12 months are reported for those participants in the treatment arm who elected to continue with the SUCCESS trial beyond the primary end point at 3 months. The SUCCESS trial is a multicenter, prospective, single-blinded, randomized, sham-controlled study. Participants were randomized on a 2.33:1 basis to either Vesair Balloon placement or placebo. The primary efficacy end point was a composite of both a greater than 50% reduction from baseline on 1-hour provocative pad weight test and an at least 10-point improvement in symptoms on the Incontinence Quality of Life questionnaire assessed at the 3-month study visit. Patients in the treatment arm who opted to continue in the trial were followed up prospectively up to 12 months. A total of 221 participants were randomized, including 157 in the treatment arm and 64 in the control arm. Sixty-seven participants in the treatment arm (42.7% of participants enrolled) were evaluated at 12 months, with 56.3% achieving the composite end point and 78.7% having greater than 50% reduction in pad weight from baseline in a per-protocol analysis. In an intent-to-treat analysis treating all participants who did not continue with the balloon as failures, 24% of the participants achieved the composite end point and 33.6% had a greater than 50% reduction in pad weight from baseline. Treatment-related adverse events in this group included dysuria (40.1%), gross hematuria (36.9%), and urinary tract infection (26.1%). In this phase III trial, symptom relief was maintained for those participants who continued therapy for 12 months. The balloon was found to be safe, with no device- or procedure-related serious adverse events reported. Additional studies are warranted to determine which patient populations are more tolerant of the balloon and to assess the efficacy and safety of its longer-term use. Additional screening methods, including screening patients for balloon tolerability, are warranted to reduce participant withdrawals.
49 CFR 236.823 - Switch, trailing point.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Switch, trailing point. 236.823 Section 236.823 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Switch, trailing point. A switch, the points of which face away from traffic approaching in the direction...
40 CFR 420.29 - Point of compliance monitoring.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 28 2010-07-01 2010-07-01 true Point of compliance monitoring. 420.29 Section 420.29 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Sintering Subcategory § 420.29 Point...
49 CFR 236.818 - Switch, facing point.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Switch, facing point. 236.818 Section 236.818 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Switch, facing point. A switch, the points of which face traffic approaching in the direction for which...
49 CFR 236.823 - Switch, trailing point.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Switch, trailing point. 236.823 Section 236.823 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Switch, trailing point. A switch, the points of which face away from traffic approaching in the direction...
49 CFR 236.823 - Switch, trailing point.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 4 2012-10-01 2012-10-01 false Switch, trailing point. 236.823 Section 236.823 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Switch, trailing point. A switch, the points of which face away from traffic approaching in the direction...
49 CFR 236.818 - Switch, facing point.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 4 2012-10-01 2012-10-01 false Switch, facing point. 236.818 Section 236.818 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Switch, facing point. A switch, the points of which face traffic approaching in the direction for which...
49 CFR 236.818 - Switch, facing point.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Switch, facing point. 236.818 Section 236.818 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Switch, facing point. A switch, the points of which face traffic approaching in the direction for which...
49 CFR 236.823 - Switch, trailing point.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Switch, trailing point. 236.823 Section 236.823 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Switch, trailing point. A switch, the points of which face away from traffic approaching in the direction...
49 CFR 236.818 - Switch, facing point.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Switch, facing point. 236.818 Section 236.818 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Switch, facing point. A switch, the points of which face traffic approaching in the direction for which...
49 CFR 236.818 - Switch, facing point.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 4 2014-10-01 2014-10-01 false Switch, facing point. 236.818 Section 236.818 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Switch, facing point. A switch, the points of which face traffic approaching in the direction for which...
49 CFR 236.823 - Switch, trailing point.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 4 2014-10-01 2014-10-01 false Switch, trailing point. 236.823 Section 236.823 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Switch, trailing point. A switch, the points of which face away from traffic approaching in the direction...
49 CFR 236.103 - Switch circuit controller or point detector.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Switch circuit controller or point detector. 236.103 Section 236.103 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL... controller or point detector. Switch circuit controller, circuit controller, or point detector operated by...
49 CFR 236.103 - Switch circuit controller or point detector.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 4 2014-10-01 2014-10-01 false Switch circuit controller or point detector. 236.103 Section 236.103 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL... controller or point detector. Switch circuit controller, circuit controller, or point detector operated by...
49 CFR 236.103 - Switch circuit controller or point detector.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 4 2012-10-01 2012-10-01 false Switch circuit controller or point detector. 236.103 Section 236.103 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL... controller or point detector. Switch circuit controller, circuit controller, or point detector operated by...
49 CFR 236.103 - Switch circuit controller or point detector.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Switch circuit controller or point detector. 236.103 Section 236.103 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL... controller or point detector. Switch circuit controller, circuit controller, or point detector operated by...
49 CFR 236.103 - Switch circuit controller or point detector.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Switch circuit controller or point detector. 236.103 Section 236.103 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL... controller or point detector. Switch circuit controller, circuit controller, or point detector operated by...
Damage Identification in Beam Structure using Spatial Continuous Wavelet Transform
NASA Astrophysics Data System (ADS)
Janeliukstis, R.; Rucevskis, S.; Wesolowski, M.; Kovalovs, A.; Chate, A.
2015-11-01
In this paper, the applicability of the spatial continuous wavelet transform (CWT) technique for damage identification in beam structures is analyzed through the application of different types of wavelet functions and scaling factors. The proposed method uses exclusively mode shape data from the damaged structure. To examine the limitations of the method and to ascertain its sensitivity to noisy experimental data, several sets of simulated data are analyzed. Simulated test cases include numerical mode shapes corrupted by different levels of random noise as well as mode shapes with different numbers of measurement points used for the wavelet transform. A broad comparison of the ability of different wavelet functions to detect and locate damage in a beam structure is given. The effectiveness and robustness of the proposed algorithms are demonstrated experimentally on two aluminum beams containing single mill-cut damage. The modal frequencies and the corresponding mode shapes are obtained via finite element models for the numerical simulations and by using a scanning laser vibrometer with a PZT actuator as the vibration excitation source for the experimental study.
"Batch" kinetics in flow: online IR analysis and continuous control.
Moore, Jason S; Jensen, Klavs F
2014-01-07
Currently, kinetic data is either collected under steady-state conditions in flow or by generating time-series data in batch. Batch experiments are generally considered to be more suitable for the generation of kinetic data because of the ability to collect data from many time points in a single experiment. Now, a method that rapidly generates time-series reaction data from flow reactors by continuously manipulating the flow rate and reaction temperature has been developed. This approach makes use of inline IR analysis and an automated microreactor system, which allowed for rapid and tight control of the operating conditions. The conversion/residence time profiles at several temperatures were used to fit parameters to a kinetic model. This method requires significantly less time and a smaller amount of starting material compared to one-at-a-time flow experiments, and thus allows for the rapid generation of kinetic data. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
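The kinetic fitting step described above can be sketched with generic tools; the snippet below (our construction, not the authors' code) fits Arrhenius parameters to synthetic conversion/residence-time profiles at three temperatures using an assumed first-order model.

    import numpy as np
    from scipy.optimize import curve_fit

    R = 8.314  # gas constant, J/(mol K)

    def conversion(x, k0, Ea):
        # First-order model: X = 1 - exp(-k(T) * tau), k(T) = k0 * exp(-Ea/(R T)).
        tau, T = x
        return 1.0 - np.exp(-k0 * np.exp(-Ea / (R * T)) * tau)

    # Synthetic stand-in for the inline-IR time series: conversions measured
    # while ramping residence time at three temperatures.
    rng = np.random.default_rng(3)
    tau = np.tile(np.linspace(10.0, 300.0, 30), 3)   # residence time, s
    T = np.repeat([323.0, 343.0, 363.0], 30)         # temperature, K
    true_X = conversion((tau, T), 1.0e6, 50_000.0)   # invented "true" k0, Ea
    meas = np.clip(true_X + rng.normal(0.0, 0.01, true_X.size), 0.0, 1.0)

    popt, _ = curve_fit(conversion, (tau, T), meas, p0=(5.0e5, 45_000.0))
    print(f"fitted k0 = {popt[0]:.3g} 1/s, Ea = {popt[1] / 1000.0:.1f} kJ/mol")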
Pseudospectral collocation methods for fourth order differential equations
NASA Technical Reports Server (NTRS)
Malek, Alaeddin; Phillips, Timothy N.
1994-01-01
Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.
Bíró, Oszkár; Koczka, Gergely; Preis, Kurt
2014-01-01
An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer. PMID:24829517
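A scalar toy version of the fixed-point linearization described above (invented B-H curve; the paper applies the idea per harmonic in a finite element setting): a constant fixed-point reluctivity stands in for the nonlinear one, and the residual is relaxed until convergence.

    def nu(b):
        # Invented nonlinear reluctivity, so that H(B) = nu(B)*B = 100*B + 50*B**3.
        return 100.0 + 50.0 * b * b

    H = 1.0e4       # prescribed field (source term)
    nu_fp = 3775.0  # fixed-point reluctivity: larger than half the maximum slope
                    # dH/dB on the operating range, which makes the update a contraction
    b = 0.0
    for it in range(200):
        b_new = b + (H - nu(b) * b) / nu_fp   # relax the residual with constant nu_fp
        if abs(b_new - b) < 1e-10:
            break
        b = b_new
    print(f"converged after {it} iterations: B = {b:.6f}, residual = {nu(b) * b - H:.2e}")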
Modelling vertical error in LiDAR-derived digital elevation models
NASA Astrophysics Data System (ADS)
Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.
2010-01-01
A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located in the Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R² = 0.9856; p < 0.001). In validation, the Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost) in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
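For reference, the gap-infilling interpolator named above (IDW with the local support of the five closest neighbours) takes only a few lines with a k-d tree; the coordinates and surface below are synthetic.

    import numpy as np
    from scipy.spatial import cKDTree

    def idw(xy_known, z_known, xy_query, k=5, power=2.0):
        # Inverse distance weighting with the local support of the k closest
        # neighbours, as used above for infilling gaps in the LiDAR data.
        tree = cKDTree(xy_known)
        dist, idx = tree.query(xy_query, k=k)
        dist = np.maximum(dist, 1e-12)   # avoid division by zero at sample points
        w = 1.0 / dist ** power
        return np.sum(w * z_known[idx], axis=1) / np.sum(w, axis=1)

    rng = np.random.default_rng(4)
    pts = rng.uniform(0.0, 200.0, (1000, 2))   # ground-sampled points (m)
    elev = 50.0 + 0.1 * pts[:, 0] + 5.0 * np.sin(pts[:, 1] / 30.0)  # synthetic terrain
    grid = np.array([[x, y] for x in range(0, 201, 20) for y in range(0, 201, 20)], float)
    print(idw(pts, elev, grid)[:5])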
The generalized Lyapunov theorem and its application to quantum channels
NASA Astrophysics Data System (ADS)
Burgarth, Daniel; Giovannetti, Vittorio
2007-05-01
We give a simple and physically intuitive necessary and sufficient condition for a map acting on a compact metric space to be mixing (i.e. infinitely many applications of the map transfer any input into a fixed convergence point). This is a generalization of the 'Lyapunov direct method'. First we prove this theorem in topological spaces and for arbitrary continuous maps. Finally we apply our theorem to maps which are relevant in open quantum systems and quantum information, namely quantum channels. In this context, we also discuss the relations between mixing and ergodicity (i.e. the property that there exists only a single input state which is left invariant by a single application of the map), showing that the two are equivalent when the invariant point of the ergodic map is pure.
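A concrete quantum-channel example of the mixing property (our illustration, not taken from the paper): the depolarizing channel contracts every input state toward the maximally mixed state, its unique fixed point.

    import numpy as np

    def depolarizing(rho, p=0.3):
        # Depolarizing qubit channel: a standard mixing channel whose unique
        # fixed (convergence) point is the maximally mixed state I/2.
        return (1.0 - p) * rho + p * np.eye(2) / 2.0

    rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)  # a pure input state
    for _ in range(40):
        rho = depolarizing(rho)
    print(np.round(rho, 6))  # approximately I/2, independent of the input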
The Structure of Reclaiming Warehouse of Minerals at Open-Cut Mines with the Use Combined Transport
NASA Astrophysics Data System (ADS)
Ikonnikov, D. A.; Kovshov, S. V.
2017-07-01
This article analyzes the characteristics of ore reclaiming and overloading points at modern opencast mines. Ore reclaiming represents the most effective way of supporting the stability of the power-intensive and expensive technological dressing process and, consequently, of maintaining optimal production and set-up parameters of extraction and the quality of the finished product. The paper proposes a warehouse design and describes the technology of its creation. The equipment used for the warehouse is described in detail, and all stages of development and operation are shown. The advantages and disadvantages of using a mechanical shovel excavator and a hydraulic excavator "backdigger" as the reloading and reclaiming equipment are compared. An ore reclaiming and overloading point design for cyclical and continuous mining using a hydraulic excavator "backdigger" is proposed.
High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis
Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher
2015-01-01
Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image-editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image-based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case-study trench across the Wasatch fault in Alpine, Utah. Our results include a structure-from-motion workflow for the semiautomated creation of seamless, high-resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ~87 m² exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (rmse) with as few as six control points. The rmse decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.
Automatic generation of endocardial surface meshes with 1-to-1 correspondence from cine-MR images
NASA Astrophysics Data System (ADS)
Su, Yi; Teo, S.-K.; Lim, C. W.; Zhong, L.; Tan, R. S.
2015-03-01
In this work, we develop an automatic method to generate a set of 4D 1-to-1 corresponding surface meshes of the left ventricle (LV) endocardial surface which are motion registered over the whole cardiac cycle. These 4D meshes have 1-to-1 point correspondence over the entire set, and are suitable for advanced computational processing, such as shape analysis, motion analysis and finite element modelling. The inputs to the method are the set of 3D LV endocardial surface meshes of the different frames/phases of the cardiac cycle. Each of these meshes is reconstructed independently from border-delineated MR images and they have no correspondence in terms of number of vertices/points and mesh connectivity. To generate point correspondence, the first frame of the LV mesh model is used as a template to be matched to the shape of the meshes in the subsequent phases. There are two stages in the mesh correspondence process: (1) a coarse matching phase, and (2) a fine matching phase. In the coarse matching phase, an initial rough matching between the template and the target is achieved using a radial basis function (RBF) morphing process. The feature points on the template and target meshes are automatically identified using a 16-segment nomenclature of the LV. In the fine matching phase, a progressive mesh projection process is used to conform the rough estimate to fit the exact shape of the target. In addition, an optimization-based smoothing process is used to achieve superior mesh quality and continuous point motion.
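The coarse matching stage can be pictured with scipy's RBF interpolator: the displacement field defined at matched landmark points is interpolated and applied to every template vertex. The landmarks below are invented; the paper derives its feature points from the 16-segment LV nomenclature.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Invented landmark (feature point) pairs on the template and target surfaces.
    src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
    dst = src + np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.2, 0.0],
                          [0.0, 0.0, 0.1], [0.1, 0.1, 0.2]])

    # Interpolate the landmark displacement field with radial basis functions
    # and apply it to every template vertex (the coarse morphing stage).
    warp = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")
    template_vertices = np.random.default_rng(5).uniform(0.0, 1.0, (100, 3))
    morphed = template_vertices + warp(template_vertices)
    print(morphed[:3])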
Wang, Lixin; Caylor, Kelly K; Dragoni, Danilo
2009-02-01
The δ18O and δ2H of water vapor serve as powerful tracers of hydrological processes. The typical method for determining water vapor δ18O and δ2H involves cryogenic trapping and isotope ratio mass spectrometry. Even with recent technical advances, these methods cannot resolve vapor composition at high temporal resolutions. In recent years, a few groups have developed continuous laser absorption spectroscopy (LAS) approaches for measuring δ18O and δ2H which achieve accuracy levels similar to those of lab-based mass spectrometry methods. Unfortunately, most LAS systems need cryogenic cooling and constant calibration to a reference gas, and have substantial power requirements, making them unsuitable for long-term field deployment at remote field sites. A new method called Off-Axis Integrated Cavity Output Spectroscopy (OA-ICOS) has been developed which requires extremely low energy consumption and neither reference gas nor cryogenic cooling. In this report, we develop a relatively simple pumping system coupled to a dew point generator to calibrate an ICOS-based instrument (Los Gatos Research Water Vapor Isotope Analyzer (WVIA) DLT-100) under various pressures using liquid water with known isotopic signatures. Results show that the WVIA can be successfully calibrated using this customized system for different pressure settings, which ensures that this instrument can be combined with other gas-sampling systems. The precision of this instrument and the associated calibration method can reach approximately 0.08‰ for δ18O and approximately 0.4‰ for δ2H. Compared with conventional mass spectrometry and other LAS-based methods, the OA-ICOS technique provides a promising alternative tool for continuous water vapor isotopic measurements in field deployments. Copyright 2009 John Wiley & Sons, Ltd.
Bjerre, Lise M; Paterson, Nicholas R; McGowan, Jessie; Hogg, William; Campbell, Craig M; Viner, Gary; Archibald, Douglas
2013-01-01
Assessing physician needs to develop continuing medical education (CME) activities is an integral part of CME curriculum development. The purpose of the present study was to demonstrate the feasibility of identifying the areas of greatest perceived need for CME by using questions collected electronically at the point of care. This study is a secondary analysis of the "Just-in-Time" (JIT) information librarian consultation service database of questions using quantitative content analysis methods. The original JIT project demonstrated the feasibility of a real-time librarian service for answering questions asked by primary care clinicians at the point of care using a Web-based platform or handheld device. Data were collected from 88 primary care practitioners in Ontario, Canada, from October 2005 to April 2006. Questions were answered in less than 15 minutes, enabling clinicians to use the answer during patient encounters. A description of the type and frequency of questions asked, including the organ system on which the questions focused, was produced using two classification systems: the "taxonomy of generic clinical questions" (TGCQ) and the International Classification for Primary Care version 2 (ICPC-2). Of the original 1889 questions, 1871 (99.0%) were suitable for analysis. A total of 970 (52%) of the questions related to therapy; of these, 671 (69.2%) addressed drug therapy, representing 36% of all questions. Questions related to diagnosis (24.8%) and epidemiology (13.5%) were also common. Questions concerning the musculoskeletal, endocrine, skin, cardiac, and digestive systems were asked more often than other categories. Questions collected at the point of care provide a valuable and unique source of information on the true learning needs of practicing clinicians. The TGCQ classification allowed us to show that a majority of questions had to do with treatment, particularly drug treatment, whereas the use of the ICPC-2 classification illustrated the great variety of questions asked about the diverse conditions encountered in primary care. It is feasible to use electronically collected questions asked by primary care clinicians in clinical practice to categorize self-identified knowledge and practice needs. This could be used to inform the development of future learning activities. Copyright © 2013 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on CME, Association for Hospital Medical Education.
Boruff, Jill T; Bilodeau, Edward
2012-01-01
Question: Can a mobile optimized subject guide facilitate medical student access to mobile point-of-care tools? Setting: The guide was created at a library at a research-intensive university with six teaching hospital sites. Objectives: The team created a guide facilitating medical student access to point-of-care tools directly on mobile devices to provide information allowing them to access and set up resources with little assistance. Methods: Two librarians designed a mobile optimized subject guide for medicine and conducted a survey to test its usefulness. Results: Web analytics and survey results demonstrate that the guide is used and the students are satisfied. Conclusion: The library will continue to use the subject guide as its primary means of supporting mobile devices. It remains to be seen if the mobile guide facilitates access for those who do not need assistance and want direct access to the resources. Internet access in the hospitals remains an issue. PMID:22272160
Evolutionary pattern search algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1995-09-19
This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary-point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
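EPSAs themselves involve populations and pattern-structured mutations; as a simplified relative (our sketch, not the paper's algorithm), a (1+1) scheme with success-based step-size adaptation and a step-size-based stopping rule looks like this:

    import numpy as np

    def sphere(x):
        return float(np.dot(x, x))

    rng = np.random.default_rng(6)
    x = rng.uniform(-5.0, 5.0, 10)
    fx, step = sphere(x), 1.0
    for _ in range(20000):
        y = x + step * rng.standard_normal(10)   # mutate with the current step size
        fy = sphere(y)
        if fy < fx:
            x, fx, step = y, fy, step * 1.5      # success: enlarge the step
        else:
            step *= 0.9                          # failure: shrink the step
        if step < 1e-12:                         # stop once steps shrink near a stationary point
            break
    print(f"f(x) = {fx:.3e}, final step size = {step:.2e}")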
Fast marching methods for the continuous traveling salesman problem.
Andrews, June; Sethian, J A
2007-01-23
We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points ("cities") in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the traveling salesman problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The worst-case complexity of the heuristic algorithm is O(M·N log N), where M is the number of cities, and N the size of the computational mesh used to approximate the solutions to the shortest path problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh.
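A toy version of the two-stage structure (our sketch): Dijkstra on a 4-connected grid stands in for the paper's fast-marching Eikonal solver to get city-to-city costs through a position-dependent cost field, and a brute-force tour search stands in for the heuristic.

    import numpy as np
    from itertools import permutations
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import dijkstra

    n = 40                                       # grid is n x n
    col = 1.0 + 0.8 * np.sin(np.linspace(0.0, 3.0 * np.pi, n))
    cost = np.tile(col, (n, 1))                  # position-dependent cost field

    def node(i, j):
        return i * n + j

    rows, cols, vals = [], [], []
    for i in range(n):
        for j in range(n):
            for di, dj in ((0, 1), (1, 0)):
                ii, jj = i + di, j + dj
                if ii < n and jj < n:
                    w = 0.5 * (cost[i, j] + cost[ii, jj])  # edge cost = mean cell cost
                    rows += [node(i, j), node(ii, jj)]
                    cols += [node(ii, jj), node(i, j)]
                    vals += [w, w]
    graph = csr_matrix((vals, (rows, cols)), shape=(n * n, n * n))

    cities = [(2, 3), (35, 8), (20, 30), (5, 36)]
    idx = [node(i, j) for i, j in cities]
    D = dijkstra(graph, indices=idx)[:, idx]     # city-to-city cheapest travel costs

    def tour_cost(p):
        order = (0,) + p + (0,)
        return sum(D[order[k], order[k + 1]] for k in range(len(order) - 1))

    best = min(permutations(range(1, len(cities))), key=tour_cost)
    print("best tour:", (0,) + best + (0,), "cost:", round(tour_cost(best), 3))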
Apparatus and method for routing a transmission line through a downhole tool
Hall, David R.; Hall, Jr., H. Tracy; Pixton, David S.; Briscoe, Michael; Reynolds, Jay
2006-07-04
A method for routing a transmission line through a tool joint having a primary and secondary shoulder, a central bore, and a longitudinal axis, includes drilling a straight channel, at a positive, nominal angle with respect to the longitudinal axis, through the tool joint from the secondary shoulder to a point proximate the inside wall of the central bore. The method further includes milling back, from within the central bore, a second channel to merge with the straight channel, thereby forming a continuous channel from the secondary shoulder to the central bore. In selected embodiments, drilling is accomplished by gun-drilling the straight channel. In other embodiments, the method includes tilting the tool joint before drilling to produce the positive, nominal angle. In selected embodiments, the positive, nominal angle is less than or equal to 15 degrees.
Comparison of four stable numerical methods for Abel's integral equation
NASA Technical Reports Server (NTRS)
Murio, Diego A.; Mejia, Carlos E.
1991-01-01
The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction), are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.
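For orientation, a standard Abel-transform pair of the kind such radially symmetric reconstructions lead to (the paper's exact formulation may differ):

    g(y) = \int_y^R \frac{2\,r\,f(r)}{\sqrt{r^2 - y^2}}\,dr, \qquad 0 \le y < R,
    \qquad
    f(r) = -\frac{1}{\pi} \int_r^R \frac{g'(y)}{\sqrt{y^2 - r^2}}\,dy.

Small errors in the measured g are amplified by the derivative g' in the inversion, which is why mollification or other regularization is needed when the data are noisy and discrete.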
A higher order panel method for linearized supersonic flow
NASA Technical Reports Server (NTRS)
Ehlers, F. E.; Epton, M. A.; Johnson, F. T.; Magnus, A. E.; Rubbert, P. E.
1979-01-01
The basic integral equations of linearized supersonic theory for an advanced supersonic panel method are derived. Methods using only linearly varying source strength over each panel or only quadratic doublet strength over each panel gave good agreement with analytic solutions over cones and zero-thickness cambered wings. For three dimensional bodies and wings of general shape, combined source and doublet panels with interior boundary conditions to eliminate the internal perturbations lead to a stable method providing good agreement with experiment. A panel system with all edges contiguous resulted from dividing the basic four-point non-planar panel into eight triangular subpanels, and the doublet strength was made continuous at all edges by a quadratic distribution over each subpanel. Superinclined panels were developed and tested on a simple nacelle and on an airplane model having engine inlets, with excellent results.
The value of continuity: Refined isogeometric analysis and fast direct solvers
Garcia, Daniel; Pardo, David; Dalcin, Lisandro; ...
2016-08-24
Here, we propose the use of highly continuous finite element spaces interconnected with low continuity hyperplanes to maximize the performance of direct solvers. Starting from a highly continuous Isogeometric Analysis (IGA) discretization, we introduce C0-separators to reduce the interconnection between degrees of freedom in the mesh. By doing so, both the solution time and best approximation errors are simultaneously improved. We call the resulting method “refined Isogeometric Analysis (rIGA)”. To illustrate the impact of the continuity reduction, we analyze the number of Floating Point Operations (FLOPs), computational times, and memory required to solve the linear system obtained by discretizing the Laplace problem with structured meshes and uniform polynomial orders. Theoretical estimates demonstrate that an optimal continuity reduction may decrease the total computational time by a factor between p^2 and p^3, with p being the polynomial order of the discretization. Numerical results indicate that our proposed refined isogeometric analysis delivers a speed-up factor proportional to p^2. In a 2D mesh with four million elements and p=5, the linear system resulting from rIGA is solved 22 times faster than the one from highly continuous IGA. In a 3D mesh with one million elements and p=3, the linear system is solved 15 times faster for rIGA than for the maximum-continuity isogeometric analysis.
An annular superposition integral for axisymmetric radiators.
Kelly, James F; McGough, Robert J
2007-02-01
A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a "smooth piston" function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity.
Postural control assessment in physically active and sedentary individuals with paraplegia
Magnani, Paola Errera; Cliquet, Alberto; de Abreu, Daniela Cristina Carvalho
2017-01-01
Objective: The aim of this study was to evaluate functional independence and trunk control during maximum-range tasks in individuals with spinal cord injuries, who were divided into sedentary (SSI, n=10) and physically active (PASI, n=10) groups. Methods: Anamnesis was conducted, the level and type of injury were identified (according to the American Spinal Injury Association protocol, ASIA), and the Functional Independence Measure (FIM) questionnaire was applied. For the forward and lateral reach task, the subjects were instructed to reach as far as possible. Mean data were compared using the unpaired t test and Mann-Whitney test, and differences were considered significant when p<0.05. Results: The PASI group performed better in self-care activities (PASI: 40.8±0.42 points, SSI: 38.0±3.58 points, p=0.01), sphincter control (PASI: 10.5±1.84 points, SSI: 8.2±3.04 points, p=0.02), transfers (PASI: 20.7±0.48 points, SSI: 16.9±4.27 points, p=0.04), and total FIM score (PASI: 104.0±2.30 points, SSI: 105.1±8.56 points, p=0.01). On the maximum reach task, the PASI group had a greater average range in all directions evaluated (p<0.05). Conclusion: The continuous practice of exercise increased motor function independence and trunk control in individuals with complete spinal cord injury. Level of Evidence II, Prospective Comparative Study. PMID:28955171
NASA Astrophysics Data System (ADS)
Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.
2011-03-01
Presently, dynamic surface-based models are required to contain increasingly larger numbers of points and to propagate them over longer time periods. For large numbers of surface points, the octree data structure can be used as a balance between low memory occupation and relatively rapid access to the stored data. For evolution rules that depend on neighborhood states, extended simulation periods can be obtained by using simplified atomistic propagation models, such as Cellular Automata (CA). This method, however, has an intrinsically parallel updating nature, and the corresponding simulations are highly inefficient when performed on classical Central Processing Units (CPUs), which are designed for the sequential execution of tasks. In this paper, a series of guidelines is presented for the efficient adaptation of octree-based CA simulations of complex, evolving surfaces to massively parallel computing hardware. A Graphics Processing Unit (GPU) is used as a cost-efficient example of such parallel architectures. For the actual simulations, we consider the surface propagation during anisotropic wet chemical etching of silicon as a computationally challenging process with widespread use in microengineering applications. A continuous CA model that is intrinsically parallel in nature is used for the time evolution. Our study strongly indicates that parallel computations of dynamically evolving surfaces simulated using CA methods benefit significantly from the incorporation of octrees as support data structures, substantially decreasing the overall computational time and memory usage.
NASA Astrophysics Data System (ADS)
Sun, M. L.; Peng, H. B.; Duan, B. H.; Liu, F. F.; Du, X.; Yuan, W.; Zhang, B. T.; Zhang, X. Y.; Wang, T. S.
2018-03-01
Borosilicate glass has potential application in the vitrification of high-level radioactive waste, which has attracted extensive interest in studying its radiation durability. In this study, sodium borosilicate glass samples were irradiated with 4 MeV Kr17+, 5 MeV Xe26+ and 0.3 MeV P+ ions, respectively. The hardness of the irradiated samples was measured by nanoindentation in continuous stiffness mode and quasi-continuous stiffness mode, separately. The extrapolation method, mean value method, squared extrapolation method and selected point method were used to obtain the hardness of the irradiated glass, and a comparison among these four methods was conducted. The extrapolation method is suggested for analyzing the hardness of ion-irradiated glass. With increasing irradiation dose, the hardness of samples irradiated with Kr, Xe and P ions dropped and then saturated at 0.02 dpa. Moreover, the fact that both the maximum variations and the decay constants are similar for the three ion species at different energies indicates a common mechanism behind the hardness variation in glasses after irradiation. Furthermore, the hardness variation of samples irradiated with low energy P ions, whose range is much smaller than those of the high energy Kr and Xe ions, follows the same trend as that of the Kr and Xe ions. This suggests that electronic energy loss does not play a significant role in the hardness decrease under low energy ion irradiation.
Jeddi, Fatemeh Rangraz; Akbari, Hossein; Rasouli, Somayeh
2017-06-01
Tele-homecare methods can be used to provide home care for the elderly, provided that information management is in place. The aim of this study was to compare the places and methods of data collection and the media used in Tele-homecare for the elderly in selected countries in 2015. A comparative applied library study was conducted in 2015. The study population comprised five countries: Canada, Australia, England, Denmark, and Taiwan. The data collection tool was a checklist based on the objectives of the study. Persian and English papers from 1998 to 2014 related to the Electronic Health Record, home care, and the elderly were extracted from authentic journals and reference books as well as academic and research websites. Data were collected by reviewing the papers. After collecting the data, comparative tables were prepared, and the weak and strong points of each case were investigated and analyzed for the selected countries. Clinical, laboratory, imaging and pharmaceutical data were obtained from hospitals, physicians' offices, clinics, pharmacies and long-term healthcare centers. Mobile and tablet-based technologies and personal digital assistants were used to collect data. Data were published via the Internet, online and offline databanks, and data exchange and dissemination via registries and national databases. Managed care methods were telehealth management systems and point-of-service systems. For continuity of care, it is necessary to consider managed care and equipment with regard to obtaining data in various forms from various sources, and sharing data with registries and national databanks as well as the Electronic Health Record. Given the emergence of wearable technology and its use in home care, we suggest studying the integration of its data with Electronic Health Records.
NASA Astrophysics Data System (ADS)
Craymer, M.; White, D.; Piraszewski, M.; Zhao, Y.; Henton, J.; Silliker, J.; Samsonov, S.
2015-12-01
Aquistore is a demonstration project for the underground storage of CO2 at a depth of ~3350 m near Estevan, Saskatchewan, Canada. An objective of the project is to design, adapt, and test non-seismic monitoring methods that have not been systematically utilized to date for monitoring CO2 storage projects, and to integrate the data from these various monitoring tools to obtain quantitative estimates of the change in subsurface fluid distributions, pressure changes and associated surface deformation. Monitoring methods being applied include satellite-, surface- and wellbore-based monitoring systems and comprise natural- and controlled-source electromagnetic methods, gravity monitoring, continuous GPS, synthetic aperture radar interferometry (InSAR), tiltmeter array analysis, and chemical tracer studies. Here we focus on the GPS, InSAR and gravity monitoring. Five monitoring sites were installed in 2012 and another six in 2013, each including GPS and InSAR corner reflector monuments (some collocated on the same monument). The continuous GPS data from these stations have been processed on a daily basis in both baseline processing mode using the Bernese GPS Software and precise point positioning mode using CSRS-PPP. Gravity measurements at each site have also been performed in fall 2013, spring 2014 and fall 2015, and at two sites in fall 2014. InSAR measurements of deformation have been obtained for a 5 m footprint at each site as well as at the corner reflector point sources. Here we present the first results of this geodetic deformation monitoring after commencement of CO2 injection on April 14, 2015. The time series of these sites are examined, compared and analyzed with respect to monument stability, seasonal signals, longer term trends, and any changes in motion and mass since CO2 injection.
NASA Astrophysics Data System (ADS)
Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang
2017-12-01
Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism. Many algorithms have been developed for visibility computation. In this paper, we propose a novel method of visibility computation, called synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We use discretization of the horizon to gain good efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., zone width). The method is more accurate at smaller zone widths, but this requires a longer operating time. Users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid, while it continues to perform as fast or faster than R2. Although SVP performs worse than reference plane and depth map with respect to efficiency, it is superior in accuracy to these other two algorithms.
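The core bookkeeping behind a horizon-based visibility test can be sketched in a few lines of Python: walking outward along one ray, a cell is visible exactly when its elevation angle exceeds the largest angle seen so far. This is the line-of-sight information that SVP synthesizes over all nearer points; the code below is an illustrative single-ray version with assumed names, not the authors' implementation.

    import numpy as np

    def visible_along_ray(elev, viewpoint, ray_cells):
        # ray_cells: grid cells on one ray, ordered by increasing distance
        # from the viewpoint (the viewpoint itself is excluded).
        vr, vc = viewpoint
        z0 = elev[vr, vc]
        horizon = -np.inf              # running maximum elevation angle
        flags = []
        for r, c in ray_cells:
            d = np.hypot(r - vr, c - vc)
            angle = (elev[r, c] - z0) / d   # tangent of the elevation angle
            flags.append(angle > horizon)   # visible iff above the horizon
            horizon = max(horizon, angle)
        return flags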
NASA Astrophysics Data System (ADS)
Kolyaie, S.; Yaghooti, M.; Majidi, G.
2011-12-01
This paper is part of ongoing research to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques for coverage prediction. In the method presented here, raw data collected from drive testing a sample of roads in the study area are analysed and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours and, second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For the comparison of results, we use check points coming from the same drive test data. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps in finding an optimised and accurate model for coverage prediction.
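As a reference point for the first family of interpolators, a minimal IDW predictor can be written as below; the weights fall off as 1/d^power over the k nearest drive-test samples, and both power and k are the tuning variables the study varies. Names and defaults are illustrative assumptions.

    import numpy as np

    def idw(xy_known, values, xy_query, power=2.0, k=8):
        # Inverse Distance Weighting: each prediction is a weighted average
        # of the k nearest samples, with weights proportional to 1/d**power.
        preds = []
        for q in np.atleast_2d(xy_query):
            d = np.linalg.norm(xy_known - q, axis=1)
            if np.any(d == 0):                 # query coincides with a sample
                preds.append(values[np.argmin(d)])
                continue
            nearest = np.argsort(d)[:k]
            w = 1.0 / d[nearest] ** power
            preds.append(np.sum(w * values[nearest]) / np.sum(w))
        return np.array(preds)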
Li, Peng; Ji, Haoran; Wang, Chengshan; ...
2017-03-22
The increasing penetration of distributed generators (DGs) exacerbates the risk of voltage violations in active distribution networks (ADNs). Conventional voltage regulation devices, limited by their physical constraints, have difficulty meeting the requirement of real-time voltage and VAR control (VVC) with high precision when DGs fluctuate frequently. However, the soft open point (SOP), a flexible power electronic device, can be used as a continuous reactive power source to realize fast voltage regulation. Considering the cooperation of the SOP and multiple regulation devices, this paper proposes a coordinated VVC method based on SOP for ADNs. Firstly, a time-series model of coordinated VVC is developed to minimize operation costs and eliminate voltage violations in ADNs. Then, by applying linearization and conic relaxation, the original nonconvex mixed-integer nonlinear optimization model is converted into a mixed-integer second-order cone programming (MISOCP) model which can be solved efficiently enough to meet the requirement of rapid voltage regulation. Case studies on the IEEE 33-node and IEEE 123-node systems illustrate the effectiveness of the proposed method.
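The kind of conic relaxation mentioned above can be illustrated on a single branch: a nonconvex quadratic coupling of the form p^2 + q^2 = l·v between power flows, squared current and squared voltage is relaxed to the second-order cone inequality p^2 + q^2 <= l·v, which a convex solver accepts. The cvxpy sketch below is a toy stand-in for the paper's full MISOCP model; all symbols and bounds are illustrative.

    import cvxpy as cp

    v = 1.0                          # squared voltage magnitude, fixed (p.u.)
    p, q = cp.Variable(), cp.Variable()
    l = cp.Variable(nonneg=True)     # squared branch current
    cons = [cp.square(p) + cp.square(q) <= l * v,  # SOC relaxation
            p >= 0.4, q >= 0.1]      # demanded active/reactive flow
    cp.Problem(cp.Minimize(l), cons).solve()       # losses scale with l
    # At the optimum the cone constraint is tight, so the relaxation is
    # exact for this toy case: l = 0.4**2 + 0.1**2 = 0.17.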
NASA Astrophysics Data System (ADS)
Yokoyama, Ryouta; Yagi, Shin-ichi; Tamura, Kiyoshi; Sato, Masakazu
2009-07-01
Ultrahigh speed dynamic elastography has promising capabilities for the clinical diagnosis and therapy of living soft tissues. In order to realize ultrahigh speed motion tracking at rates of over a thousand frames per second, synthetic aperture (SA) array signal processing technology must be introduced. Furthermore, the overall system performance must allow fine quantitative evaluation of the accuracy and variance of echo phase changes distributed across a tissue medium. The spatial evaluation of local phase changes caused by pulsed excitation of a tissue phantom was investigated with the proposed SA system, utilizing different virtual point sources generated by an array transducer to probe each component of the local tissue displacement vectors. The cross-correlation method (CCM) yielded almost the same performance as the constrained least squares method (LSM) extended to successive echo frames. These frames were reconstructed by SA processing after real-time acquisition triggered by the pulsed irradiation from a point source. The continuous behavior of the spatial motion vectors demonstrated the dynamic generation and traveling of the pulsed shear wave at a rate of one thousand frames per second.
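The cross-correlation method referred to above reduces, in its simplest one-dimensional form, to locating the lag that maximizes the normalized cross-correlation between two echo windows. The Python sketch below illustrates that step only, not the full synthetic aperture pipeline, and its names are assumptions.

    import numpy as np

    def ccm_lag(win_a, win_b):
        # Lag (in samples) by which win_b is delayed relative to win_a,
        # taken from the peak of the normalized cross-correlation.
        a = (win_a - win_a.mean()) / win_a.std()
        b = (win_b - win_b.mean()) / win_b.std()
        xc = np.correlate(b, a, mode="full")
        return np.argmax(xc) - (len(a) - 1)
        # multiply by the sample spacing to convert to displacement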
Automated search method for AFM and profilers
NASA Astrophysics Data System (ADS)
Ray, Michael; Martin, Yves C.
2001-08-01
A new automation software creates a search model as an initial setup and searches for a user-defined target in atomic force microscopes or stylus profilometers used in semiconductor manufacturing. The need for such automation has become critical in manufacturing lines. The new method starts with a survey map of a small area of a chip, obtained from a chip-design database or an image of the area. The user interface requires the user to point to and define a precise location to be measured, and to select a macro function for an application such as line width or contact hole. The search algorithm automatically constructs a range of possible scan sequences within the survey, and provides increased speed and functionality compared to the methods used in instruments to date. Each sequence consists of a starting point relative to the target, a scan direction, and a scan length. The search algorithm stops when the location of a target is found and the criteria for certainty in positioning are met. With today's capability in high speed processing and signal control, the tool can simultaneously scan and search for a target in a robotic and continuous manner. Examples are given that illustrate the key concepts.
NASA Astrophysics Data System (ADS)
Wilczek, Frank
Contents: Introduction; Symmetry and the Phenomena of QCD; Apparent and Actual Symmetries; Asymptotic Freedom; Confinement; Chiral Symmetry Breaking; Chiral Anomalies and Instantons; High Temperature QCD: Asymptotic Properties; Significance of High Temperature QCD; Numerical Indications for Quasi-Free Behavior; Ideas About Quark-Gluon Plasma; Screening Versus Confinement; Models of Chiral Symmetry Breaking; More Refined Numerical Experiments; High-Temperature QCD: Phase Transitions; Yoga of Phase Transitions and Order Parameters; Application to Glue Theories; Application to Chiral Transitions; Close Up on Two Flavors; A Genuine Critical Point! (?); High-Density QCD: Methods; Hopes, Doubts, and Fruition; Another Renormalization Group; Pairing Theory; Taming the Magnetic Singularity; High-Density QCD: Color-Flavor Locking and Quark-Hadron Continuity; Gauge Symmetry (Non)Breaking; Symmetry Accounting; Elementary Excitations; A Modified Photon; Quark-Hadron Continuity; Remembrance of Things Past; More Quarks; Fewer Quarks and Reality
On the stability, storage capacity, and design of nonlinear continuous neural networks
NASA Technical Reports Server (NTRS)
Guez, Allon; Protopopescu, Vladimir; Barhen, Jacob
1988-01-01
The stability, capacity, and design of a nonlinear continuous neural network are analyzed. Sufficient conditions for the existence and asymptotic stability of the network's equilibria are reduced to a set of piecewise-linear inequality relations that can be solved by a feedforward binary network, or by methods such as Fourier elimination. The stability and capacity of the network are characterized by the postsynaptic firing rate function. An N-neuron network with a sigmoidal firing function is shown to have up to 3^N equilibrium points. This offers a higher capacity than the (0.1-0.2)N obtained in the binary Hopfield network. Moreover, it is shown that by a proper selection of the postsynaptic firing rate function, one can significantly extend the storage capacity of the network.
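For orientation, continuous networks of this type follow the additive Hopfield dynamics du/dt = -u + W g(u) + b with a sigmoidal firing rate g, whose fixed points satisfy u = W g(u) + b. Below is a minimal forward-Euler sketch in Python, with assumed parameter names and not the paper's exact formulation.

    import numpy as np

    def hopfield_flow(W, b, gain=4.0, steps=2000, dt=0.01, u0=None):
        # Integrate du/dt = -u + W @ g(u) + b; the state settles toward
        # one of the network's equilibria (up to 3^N for sigmoidal g).
        n = W.shape[0]
        u = np.zeros(n) if u0 is None else np.asarray(u0, dtype=float)
        g = lambda u: 1.0 / (1.0 + np.exp(-gain * u))  # sigmoidal firing rate
        for _ in range(steps):
            u += dt * (-u + W @ g(u) + b)
        return u, g(u)   # equilibrium estimate and firing rates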
NASA Technical Reports Server (NTRS)
Clapp, Brian R.; Sills, Joel W., Jr.; Voorhees, Carl R.; Griffin, Thomas J. (Technical Monitor)
2002-01-01
The Vibration Admittance Test (VET) was performed to measure the emitted disturbances of the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cryogenic Cooler (NCC) in preparation for NCC installation onboard the Hubble Space Telescope (HST) during Servicing Mission 3B (SM3B). Details of the VET ground-test are described, including facility characteristics, sensor complement and configuration, NCC suspension, and background noise measurements. Kinematic equations used to compute NCC mass center displacements and accelerations from raw measurements are presented, and dynamic equations of motion for the NCC VET system are developed and verified using modal test data. A MIMO linear frequency-domain analysis method is used to compute NCC-induced loads and HST boresight jitter from VET measurements. These results are verified by a nonlinear time-domain analysis approach using a high-fidelity structural dynamics and pointing control simulation for HST. NCC emitted acceleration levels not exceeding 35 micro-g rms were measured in the VET, and the analysis methods herein predict 3.1 milli-arcseconds rms jitter for HST on-orbit. Because the NCC is predicted to become the predominant disturbance source for HST, VET results indicate that HST will continue to meet the 7 milli-arcsecond pointing stability mission requirement in the post-SM3B era.
New convergence results for the scaled gradient projection method
NASA Astrophysics Data System (ADS)
Bonettini, S.; Prato, M.
2015-09-01
The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation has shown that SGP, equipped with a suitable choice of the scaling matrix, is a very effective tool for solving large scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak convergence theorem has been provided, establishing that any limit point of the sequence generated by SGP is stationary. Here, under the only assumption that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, if the scaling matrices sequence satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we are also able to prove the O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on some relevant image restoration problems, showing that the proposed scaling matrix selection rule performs well also from the computational point of view.
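The iteration under study has the simple form x_{k+1} = P_C(x_k - alpha_k D_k grad f(x_k)), where D_k is the variable scaling matrix and P_C the projection onto the feasible set. Below is a hedged Python sketch with a fixed step and a fixed diagonal scaling; the actual method chooses alpha_k by line search and requires the D_k sequence to satisfy the boundedness condition mentioned above.

    import numpy as np

    def sgp(grad, project, x0, n_iter=200, alpha=1e-2, scale=None):
        # Scaled gradient projection with fixed step and diagonal scaling.
        x = np.asarray(x0, dtype=float)
        D = np.ones_like(x) if scale is None else scale
        for _ in range(n_iter):
            x = project(x - alpha * D * grad(x))
        return x

    # Illustrative use: nonnegative least squares min ||A x - b||^2, x >= 0
    # grad = lambda x: 2 * A.T @ (A @ x - b)
    # x = sgp(grad, lambda z: np.maximum(z, 0.0), np.zeros(A.shape[1]))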
"Simulated molecular evolution" or computer-generated artifacts?
Darius, F; Rojas, R
1994-11-01
1. The authors define a function with value 1 for the positive examples and 0 for the negative ones. They fit a continuous function but do not deal at all with the error margin of the fit, which is almost as large as the function values they compute.
2. The term "quality" for the value of the fitted function gives the impression that some biological significance is associated with values of the fitted function strictly between 0 and 1, but there is no justification for this kind of interpretation, and finding the point where the fit achieves its maximum does not make sense.
3. By neglecting the error margin, the authors try to optimize the fitted function using differences in the second, third, fourth, and even fifth decimal place, which have no statistical significance.
4. Even if such a fit could profit from more data points, the authors should first prove that the region of interest has some kind of smoothness, that is, that a continuous fit makes any sense at all.
5. "Simulated molecular evolution" is a misnomer. We are dealing here with random search. Since the margin of error is so large, the fitted function does not provide statistically significant information about the points in search space where strings with cleavage sites could be found. This implies that the method is a highly unreliable stochastic search in the space of strings, even if the neural network is capable of learning some simple correlations.
6. For this kind of problem with so few data points, classical statistical methods are clearly superior to the neural networks used as a "black box" by the authors, which, in the way they are structured, provide a model with an error margin as large as the numbers being computed.
7. Finally, even if someone were to provide us with a function that perfectly separates strings with cleavage sites from strings without them, so-called simulated molecular evolution would not be better than random selection. Since a perfect fit would only produce exact ones or zeros, starting a search in a region of space where all strings in the neighborhood get the value zero would not provide any directional information for new iterations. We would just skip from one point to another in a typical random-walk manner.
Tricritical points in a Vicsek model of self-propelled particles with bounded confidence
NASA Astrophysics Data System (ADS)
Romensky, Maksym; Lobaskin, Vladimir; Ihle, Thomas
2014-12-01
We study the orientational ordering in systems of self-propelled particles with selective interactions. To introduce the selectivity, we augment the standard Vicsek model with a bounded-confidence collision rule: a given particle only aligns to neighbors whose directions are quite similar to its own. Neighbors whose directions deviate by more than a fixed restriction angle α are ignored. The collective dynamics of this system is studied by agent-based simulations and kinetic mean-field theory. We demonstrate that the reduction of the restriction angle leads to a critical noise amplitude decreasing monotonically with that angle, turning into a power law with exponent 3/2 for small angles. Moreover, for small system sizes we show that upon decreasing the restriction angle, the nature of the transition to polar collective motion changes from continuous to discontinuous. Thus, an apparent tricritical point with different scaling laws is identified and calculated analytically. We investigate the shifting and vanishing of this point due to the formation of density bands as the system size is increased. Agent-based simulations in small systems with large particle velocities show excellent agreement with the kinetic theory predictions. We also find that at very small interaction angles, the polar ordered phase becomes unstable with respect to the apolar phase. We derive analytical expressions for the dependence of the threshold noise on the restriction angle. We show that the mean-field kinetic theory also permits stationary nematic states below a restriction angle of 0.681π. We calculate the critical noise at which the disordered state bifurcates to a nematic state, and find that it is always smaller than the threshold noise for the transition from disorder to polar order. The disordered-nematic transition features two tricritical points: at low and high restriction angles the transition is discontinuous, but continuous at intermediate α. We generalize our results to systems that show fragmentation into more than two groups and obtain scaling laws for the transition lines and the corresponding tricritical points. A numerical method to evaluate the nonlinear Fredholm integral equation for the stationary distribution function is also presented. This method is shown to give excellent agreement with agent-based simulations, even in strongly ordered systems at noise values close to zero.
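The bounded-confidence collision rule itself is a one-line change to the standard Vicsek update: each particle averages only over neighbors whose heading lies within the restriction angle α of its own. Below is a minimal agent-based step in Python; parameter names and the noise convention are assumptions, not the authors' code.

    import numpy as np

    def vicsek_bc_step(pos, theta, L, r=1.0, v=0.5, eta=0.2,
                       alpha=np.pi / 3, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        new_theta = np.empty_like(theta)
        for i in range(len(pos)):
            d = pos - pos[i]
            d -= L * np.round(d / L)                    # periodic box
            near = np.linalg.norm(d, axis=1) < r
            dphi = np.angle(np.exp(1j * (theta - theta[i])))
            sel = near & (np.abs(dphi) < alpha)         # bounded confidence
            new_theta[i] = (np.angle(np.sum(np.exp(1j * theta[sel])))
                            + eta * rng.uniform(-np.pi, np.pi))
        pos = (pos + v * np.c_[np.cos(new_theta), np.sin(new_theta)]) % L
        return pos, new_theta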
NASA Astrophysics Data System (ADS)
Zou, Yanbiao; Chen, Tao
2018-06-01
To address the problem of low welding precision caused by the poor real-time tracking performance of common welding robots, a novel seam tracking system with excellent real-time performance and high accuracy is designed based on morphological image processing and the continuous convolution operator tracker (CCOT) object tracking algorithm. The system consists of a six-axis welding robot, a line laser sensor, and an industrial computer. This work also studies the measurement principle of the designed system. Through the CCOT algorithm, the weld feature points are determined in real time from the noisy images acquired during the welding process, and the 3D coordinates of these points are obtained according to the measurement principle to control the movement of the robot and the torch in real time. Experimental results show that the sensor has a frequency of 50 Hz and that the welding torch runs smoothly even under strong arc light and splash interference. The tracking error can reach ±0.2 mm, and the minimal distance between the laser stripe and the welding molten pool can reach 15 mm, which fulfills actual welding requirements.
FDDO and DSMC analyses of rarefied gas flow through 2D nozzles
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.
1992-01-01
Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO) and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas expanding through a two-dimensional nozzle into a surrounding low-density environment. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation, simplified by a model collision integral, is transformed into a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. This set of partial differential equations is solved by means of a finite-difference approximation. In the DSMC analysis, the variable hard sphere model is used as the molecular model and the no-time-counter method is employed as the collision sampling technique. The results of the FDDO and DSMC methods show good agreement. The FDDO method requires less computational effort than the DSMC method by factors of 10 to 40 in CPU time, depending on the degree of rarefaction.
Survival analysis in hematologic malignancies: recommendations for clinicians
Delgado, Julio; Pereira, Arturo; Villamor, Neus; López-Guillermo, Armando; Rozman, Ciril
2014-01-01
The widespread availability of statistical packages has undoubtedly helped hematologists worldwide in the analysis of their data, but has also led to the inappropriate use of statistical methods. In this article, we review some basic concepts of survival analysis and also make recommendations about how and when to perform each particular test using SPSS, Stata and R. In particular, we describe a simple way of defining cut-off points for continuous variables and the appropriate and inappropriate uses of the Kaplan-Meier method and Cox proportional hazard regression models. We also provide practical advice on how to check the proportional hazards assumption and briefly review the role of relative survival and multiple imputation. PMID:25176982
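As a pointer to how such an analysis looks in practice, the hedged Python sketch below dichotomizes a continuous covariate at an assumed cut-off, fits a Kaplan-Meier curve, and fits and checks a Cox model with the lifelines package; the data frame and the cut-off value are entirely hypothetical.

    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    # Hypothetical cohort: follow-up in months, event (1 = event, 0 = censored),
    # and a continuous marker to be dichotomized at an assumed cut-off.
    df = pd.DataFrame({
        "time":  [5, 12, 30, 44, 60, 8, 22, 51],
        "event": [1, 1, 0, 1, 0, 1, 0, 0],
        "ldh":   [210, 480, 190, 530, 250, 610, 180, 300],
    })
    df["ldh_high"] = (df["ldh"] > 400).astype(int)   # assumed cut-off

    kmf = KaplanMeierFitter()
    kmf.fit(df["time"], event_observed=df["event"])  # overall survival curve

    cph = CoxPHFitter()
    data = df[["time", "event", "ldh_high"]]
    cph.fit(data, duration_col="time", event_col="event")
    cph.check_assumptions(data)   # proportional hazards diagnostic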
A potential method for lift evaluation from velocity field data
NASA Astrophysics Data System (ADS)
de Guyon-Crozier, Guillaume; Mulleners, Karen
2017-11-01
Computing forces from velocity field measurements is one of the challenges in experimental aerodynamics. This work focuses on low Reynolds number flows, where the dynamics of the leading and trailing edge vortices play a major role in lift production. Recent developments in 2D potential flow theory using discrete vortex models have shown good results for unsteady wing motions. A method is presented to calculate lift from experimental velocity field data using a discrete vortex potential flow model. The model continuously adds new point vortices at the leading and trailing edges, whose circulations are set directly from vorticity measurements. Forces are computed using the unsteady Blasius equation and compared with measured loads.
Using foresight methods to anticipate future threats: the case of disease management.
Ma, Sai; Seid, Michael
2006-01-01
We describe a unique foresight framework for health care managers to use in longer-term planning. This framework uses scenario-building to envision plausible alternate futures of the U.S. health care system and links those broad futures to business-model-specific "load-bearing" assumptions. Because the framework we describe simultaneously addresses very broad and very specific issues, it can be easily applied to a broad range of health care issues by using the broad framework and business-specific assumptions for the particular case at hand. We illustrate this method using the case of disease management, pointing out that although the industry continues to grow rapidly, its future also contains great uncertainties.
47 CFR 90.463 - Transmitter control points.
Code of Federal Regulations, 2011 CFR
2011-10-01
... any dispatch point being supervised. (e) Where the system is interconnected with public communication..., (2) To terminate any transmission(s) or communication(s) between points in the public communications....463 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Common Carrier Fixed Point-to-Point Microwave Service § 101.701 Eligibility. (a) Authorizations... the customers (or points of service) on the microwave system involved, including those served through...
Business continuity 2014: From traditional to integrated Business Continuity Management.
Ee, Henry
As global change continues to generate new challenges and potential threats to businesses, traditional business continuity management (BCM) is slowly revealing its limitations and weak points in ensuring 'business resiliency' today. Consequently, BCM professionals face the challenge of re-evaluating traditional concepts and introducing new strategies and industry best practices. This paper discusses why traditional BCM is no longer sufficient to enable businesses to survive in today's high-risk environment. It also looks into some of the misconceptions about BCM and other stumbling blocks to establishing effective BCM today. Most importantly, this paper provides tips, based on the Business Continuity Institute's (BCI) Good Practice Guidelines (GPG) and the latest international BCM standard ISO 22301, on how to overcome the issues and challenges presented.