Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?
Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D
2018-02-01
Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico, activation was simulated on a 4×4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2, and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and the complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2, P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm2, P = 0.0008). Increasing chamber geometric complexity was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and to represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2. Published on behalf of the European Society of Cardiology
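The plateau-finding protocol described above is straightforward to prototype. The Python sketch below re-interpolates a synthetic planar activation map from random samples at increasing densities and reports the density beyond which further error reduction is minimal; the activation pattern, the 5% improvement threshold, and all names are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Synthetic "ground truth" LAT map on a 4x4 cm monolayer:
# a hypothetical stand-in for the simulated activation.
grid = np.linspace(0.0, 4.0, 81)
gx, gy = np.meshgrid(grid, grid)
lat_of = lambda x, y: 25.0 * x + 5.0 * np.sin(y)   # ms; arbitrary pattern
true_lat = lat_of(gx, gy)

def interp_error(density_pts_per_cm2, n_rep=20):
    """Mean RMS error of LAT maps re-interpolated from random samples."""
    n_pts = max(4, int(density_pts_per_cm2 * 16))  # 16 cm^2 domain
    errs = []
    for _ in range(n_rep):
        xs, ys = rng.uniform(0, 4, n_pts), rng.uniform(0, 4, n_pts)
        recon = griddata((xs, ys), lat_of(xs, ys), (gx, gy), method="linear")
        mask = ~np.isnan(recon)            # ignore points outside convex hull
        errs.append(np.sqrt(np.mean((recon[mask] - true_lat[mask]) ** 2)))
    return np.mean(errs)

densities = np.arange(0.25, 10.25, 0.25)           # points/cm^2, as in the abstract
errors = np.array([interp_error(d) for d in densities])

# "Optimal" density: the point past which relative improvement per step
# drops below 5% (a hypothetical threshold, not the paper's criterion).
rel_gain = -np.diff(errors) / errors[:-1]
optimal = densities[1:][rel_gain < 0.05][0]
print(f"optimal sampling density ~ {optimal:.2f} points/cm^2")
```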
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extremum points of the metamodel and minimum points of a density function. More accurate metamodels can then be constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
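As a rough illustration of this kind of sequential scheme, the Python sketch below refits a radial basis function metamodel after adding, at each iteration, one point at a metamodel extremum and one point far from existing samples (a max-min-distance stand-in for the paper's density-function minimum); the test function and all parameters are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x[..., 0]) + 0.5 * x[..., 0] ** 2  # expensive model stand-in

X = rng.uniform(-2, 2, (5, 1))          # initial samples
y = f(X)

for it in range(10):
    rbf = RBFInterpolator(X, y)         # current metamodel
    # (a) extremum point of the metamodel, from a local search
    res = minimize(lambda x: rbf(x[None, :])[0], x0=np.zeros(1),
                   bounds=[(-2, 2)])
    x_ext = res.x[None, :]
    # (b) a point far from existing samples: a stand-in for the paper's
    # "minimum of the density function" (max-min-distance criterion)
    cand = rng.uniform(-2, 2, (256, 1))
    x_sparse = cand[np.argmax(np.min(np.abs(cand - X.T), axis=1))][None, :]
    # evaluate the true model at both new points and refit
    for x_new in (x_ext, x_sparse):
        if np.min(np.abs(X - x_new)) > 1e-6:   # avoid duplicate samples
            X = np.vstack([X, x_new])
            y = np.concatenate([y, f(x_new)])

print(f"{len(X)} samples; metamodel minimum ~ {res.fun:.3f}")
```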
Speciation and Determination of Low Concentration of Iron in Beer Samples by Cloud Point Extraction
ERIC Educational Resources Information Center
Khalafi, Lida; Doolittle, Pamela; Wright, John
2018-01-01
A laboratory experiment is described in which students determine the concentration and speciation of iron in beer samples using cloud point extraction and absorbance spectroscopy. The basis of determination is the complexation between iron and 2-(5-bromo-2-pyridylazo)-5-diethylaminophenol (5-Br-PADAP) as a colorimetric reagent in an aqueous…
Point prevalence of complex wounds in a defined United Kingdom population.
Hall, Jill; Buckley, Hannah L; Lamb, Karen A; Stubbs, Nikki; Saramago, Pedro; Dumville, Jo C; Cullum, Nicky A
2014-01-01
Complex wounds (superficial-, partial-, or full-thickness skin loss wounds healing by secondary intention) are common; however, there is a lack of high-quality, contemporary epidemiological data. This paper presents point prevalence estimates for complex wounds overall as well as for individual types. A multiservice, cross-sectional survey was undertaken across a United Kingdom city (Leeds, population 751,485) during 2 weeks in spring of 2011. The mean age of people with complex wounds was approximately 70 years, standard deviation 19.41. The point prevalence of complex wounds was 1.47 per 1,000 of the population, 95% confidence interval 1.38 to 1.56. While pressure ulcers and leg ulcers were the most frequent, one in five people in the sample population had a less common wound type. Surveys confined to people with specific types of wound would underestimate the overall impact of complex wounds on the population and health care resources. © 2014 The Authors. Wound Repair and Regeneration published by Wiley Periodicals, Inc. on behalf of Wound Healing Society.
Nogueiras, Gloria; Kunnen, E. Saskia; Iborra, Alejandro
2017-01-01
This study adopts a dynamic systems approach to investigate how individuals successfully manage contextual complexity. To that end, we tracked individuals' emotional trajectories during a challenging training course, seeking qualitative changes (turning points), and we tested their relationship with the perceived complexity of the training. The research context was a 5-day higher education course based on process-oriented experiential learning, and the sample consisted of 17 students. The students used a five-point Likert scale to rate the intensity of 16 emotions and the complexity of the training at 8 measurement points. Monte Carlo permutation tests enabled the identification of 30 turning points in the 272 emotional trajectories analyzed (17 students × 16 emotions each). 83% of the turning points indicated a change of pattern in the emotional trajectories that consisted of (a) increasingly intense positive emotions or (b) decreasingly intense negative emotions. These turning points also coincided with particularly complex periods in the training as perceived by the participants (p = 0.003 and p = 0.001, respectively). The relationship between positively trended turning points in the students' emotional trajectories and the complexity of the training may be interpreted as evidence of successful management of the cognitive conflict arising from the clash between the students' prior ways of meaning-making and the challenging demands of the training. One of the strengths of this study is that it provides a relatively simple procedure for identifying turning points in developmental trajectories, which can be applied to the varied longitudinal designs that are common in educational and developmental contexts. Additionally, the findings support the view that the assumption that complex contextual demands unfailingly lead to learning is incomplete; rather, it is how individuals manage complexity that may or may not lead to learning. Finally, this study can also be considered a first step in research on the developmental potential of process-oriented experiential learning training. PMID:28515703
NASA Astrophysics Data System (ADS)
Arain, Salma Aslam; Kazi, Tasneem G.; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal
2014-12-01
An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE), has been developed for the extraction and preconcentration of copper (Cu2+) in serum samples of different viral hepatitis patients prior to coupling with flame atomic absorption spectrometry (FAAS). The d-CPE procedure was based on forming complexes of elemental ions with the complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN) and subsequently entrapping the complexes in a nonionic surfactant (Triton X-114). The surfactant-rich phase containing the metal complexes was then treated with aqueous nitric acid solution, and the metal ions were back-extracted into the aqueous phase as a second cloud point extraction stage and finally determined by flame atomic absorption spectrometry using conventional nebulization. A multivariate strategy was applied to estimate the optimum values of the experimental variables for the recovery of Cu2+ using d-CPE. Under optimum experimental conditions, the limit of detection and the enrichment factor were 0.046 μg L-1 and 78, respectively. The validity and accuracy of the proposed method were checked by analyzing Cu2+ in a certified reference material (CRM) of serum by both the d-CPE and conventional CPE procedures on the same CRM. The proposed method was successfully applied to the determination of Cu2+ in serum samples of different viral hepatitis patients and healthy controls.
Fast and Robust Stem Reconstruction in Complex Environments Using Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Wang, D.; Hollaus, M.; Puttonen, E.; Pfeifer, N.
2016-06-01
Terrestrial Laser Scanning (TLS) is an effective tool in forest research and management. However, accurate estimation of tree parameters still remains challenging in complex forests. In this paper, we present a novel algorithm for stem modeling in complex environments. The method does not require accurate delineation of stem points from the original point cloud, and the stem reconstruction features a self-adaptive cylinder-growing scheme. The algorithm is tested on a landslide region in the federal state of Vorarlberg, Austria. The results are compared with field reference data, which show that our algorithm is able to accurately retrieve the diameter at breast height (DBH) with a root mean square error (RMSE) of ~1.9 cm. The algorithm is further facilitated by applying an advanced sampling technique. Different sampling rates are applied and tested, and it is found that a sampling rate of 7.5% already retains the stem-fitting quality while reducing computation time significantly, by ~88%.
NASA Astrophysics Data System (ADS)
Wu, Peng; Zhang, Yunchang; Lv, Yi; Hou, Xiandeng
2006-12-01
A simple, low cost and highly sensitive method based on cloud point extraction (CPE) for separation/preconcentration and thermospray flame quartz furnace atomic absorption spectrometry was proposed for the determination of ultratrace cadmium in water and urine samples. The analytical procedure involved the formation of analyte-entrapped surfactant micelles by mixing the analyte solution with an ammonium pyrrolidinedithiocarbamate (APDC) solution and a Triton X-114 solution. When the temperature of the system was higher than the cloud point of Triton X-114, the complex of cadmium-PDC entered the surfactant-rich phase and thus separation of the analyte from the matrix was achieved. Under optimal chemical and instrumental conditions, the limit of detection was 0.04 μg/L for cadmium with a sample volume of 10 mL. The analytical results of cadmium in water and urine samples agreed well with those by ICP-MS.
Naeemullah; Kazi, Tasneem G; Shah, Faheem; Afridi, Hassan I; Baig, Jameel Ahmed; Soomro, Abdul Sattar
2013-01-01
A simple method for the preconcentration of cadmium (Cd) and nickel (Ni) in drinking and wastewater samples was developed. Cloud point extraction has been used for the preconcentration of both metals, after formation of complexes with 8-hydroxyquinoline (8-HQ) and extraction with the surfactant octylphenoxypolyethoxyethanol (Triton X-114). Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the Cd and Ni contents were measured by flame atomic absorption spectrometry. The experimental variables, such as pH, amounts of reagents (8-HQ and Triton X-114), temperature, incubation time, and sample volume, were optimized. After optimization of the complexation and extraction conditions, enhancement factors of 80 and 61, with LOD values of 0.22 and 0.52 μg/L, were obtained for Cd and Ni, respectively. The proposed method was applied satisfactorily for the determination of both elements in drinking and wastewater samples.
The influence of point defects on the thermal conductivity of AlN crystals
NASA Astrophysics Data System (ADS)
Rounds, Robert; Sarkar, Biplab; Alden, Dorian; Guo, Qiang; Klump, Andrew; Hartmann, Carsten; Nagashima, Toru; Kirste, Ronny; Franke, Alexander; Bickermann, Matthias; Kumagai, Yoshinao; Sitar, Zlatko; Collazo, Ramón
2018-05-01
The average bulk thermal conductivity of free-standing physical vapor transport and hydride vapor phase epitaxy single crystal AlN samples with different impurity concentrations is analyzed using the 3ω method in the temperature range of 30-325 K. AlN wafers grown by physical vapor transport show significant variation in thermal conductivity at room temperature with values ranging between 268 W/m K and 339 W/m K. AlN crystals grown by hydride vapor phase epitaxy yield values between 298 W/m K and 341 W/m K at room temperature, suggesting that the same fundamental mechanisms limit the thermal conductivity of AlN grown by both techniques. All samples in this work show phonon resonance behavior resulting from incorporated point defects. Samples shown by optical analysis to contain carbon-silicon complexes exhibit higher thermal conductivity above 100 K. Phonon scattering by point defects is determined to be the main limiting factor for thermal conductivity of AlN within the investigated temperature range.
NASA Astrophysics Data System (ADS)
Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing
Methods developed based on bifurcation theory have demonstrated their potential in driving network identification for complex human diseases, including the work by Chen et al. Recently, bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving network prediction: time-course cellular differentiation studies often contain only one sample at each time point, while driving network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation, and we discuss the advantages and limitations of our proposed methods, as well as potential further improvements.
Chen, Li-ding; Peng, Hong-jia; Fu, Bo-Jie; Qiu, Jun; Zhang, Shu-rong
2005-01-01
Surface waters can be contaminated by human activities in two ways: (1) by point sources, such as sewage treatment discharge and storm-water runoff; and (2) by non-point sources, such as runoff from urban and agricultural areas. With point-source pollution effectively controlled, non-point source pollution has become the most important environmental concern in the world. The formation of non-point source pollution is related both to the sources, such as soil nutrients, the amount of fertilizer and pesticide applied, and the amount of refuse, and to the complex spatial combination of land uses within a heterogeneous landscape. Land-use change, dominated by human activities, has a significant impact on water resources and quality. In this study, fifteen surface water monitoring points in the Yuqiao Reservoir Basin, Zunhua, Hebei Province, northern China, were chosen to study the seasonal variation of nitrogen concentration in the surface water. Water samples were collected in the low-flow period (June), high-flow period (July) and mean-flow period (October) from 1999 to 2000. The results indicated that the seasonal variation of nitrogen concentration in the surface water among the fifteen monitoring points in the rainfall-rich year is more complex than that in the rainfall-deficit year. It was found that land use, the characteristics of the surface river system, rainfall, and human activities play an important role in the seasonal variation of N-concentration in surface water.
NASA Astrophysics Data System (ADS)
Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang
2016-09-01
Reduced-order models (ROMs) based on snapshots of high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the region where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated mean squared error prediction algorithm while showing the same level of prediction accuracy.
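The clustering-as-error-indicator idea can be illustrated compactly. The Python sketch below runs a plain fuzzy c-means over parameter-space samples and proposes the next snapshot at the center of the cluster with the highest mean surrogate error; the error field, cluster count, and every name are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means; returns cluster centers and memberships U."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # (n, c) memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # (c, d)
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)                      # standard FCM update
    return centers, U

# Parameter-space samples and a stand-in ROM error at each (hypothetical).
rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, (200, 2))
err = np.exp(-20 * np.sum((pts - [0.8, 0.2]) ** 2, axis=1))  # error hot-spot

centers, U = fuzzy_cmeans(pts, c=4)
# Membership-weighted mean error per fuzzy cluster; the worst cluster's
# center is the suggested location for the next snapshot.
cluster_err = (U * err[:, None]).sum(axis=0) / U.sum(axis=0)
print("next sampling point ~", np.round(centers[np.argmax(cluster_err)], 3))
```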
Filtering Airborne LIDAR Data by AN Improved Morphological Method Based on Multi-Gradient Analysis
NASA Astrophysics Data System (ADS)
Li, Y.
2013-05-01
The technology of airborne Light Detection And Ranging (LIDAR) is capable of acquiring dense and accurate 3D geospatial data. Although many related efforts have been made in recent years, LIDAR data filtering is still a challenging task, especially for areas with high relief or hybrid geographic features. In order to address bare-ground extraction from LIDAR point clouds of complex landscapes, a novel morphological filtering algorithm based on multi-gradient analysis of the LIDAR data distribution is proposed in this paper. Firstly, the point cloud is organized by an index mesh. Then, the multi-gradient of each point is calculated using the morphological method, and objects are removed gradually by choosing points to carry out an improved opening operation constrained by the multi-gradient, iteratively. Fifteen sample datasets provided by ISPRS Working Group III/3 are employed to test the proposed filtering algorithm; these samples include environments that typically cause filtering difficulty. Experimental results show that the proposed algorithm adapts well to various scenes, including urban and rural areas. Omission error, commission error and total error can be simultaneously kept within a relatively small interval. The algorithm efficiently removes object points while preserving ground points to a great degree.
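For readers who want a concrete starting point, the Python sketch below implements the classical single-scale member of this family of filters: rasterize the cloud to a minimum-elevation grid, apply a grey-scale morphological opening, and keep points close to the opened surface. It is a baseline under assumed parameters (cell size, window, dz), not the paper's multi-gradient algorithm.

```python
import numpy as np
from scipy.ndimage import grey_opening

def ground_mask(points, cell=1.0, window=5, dz=0.5):
    """Classify LIDAR points as ground via morphological opening of a
    minimum-elevation grid (single-scale baseline)."""
    xy, z = points[:, :2], points[:, 2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    zmin = np.full(ij.max(axis=0) + 1, np.inf)
    np.minimum.at(zmin, (ij[:, 0], ij[:, 1]), z)     # min-Z raster
    filled = np.where(np.isinf(zmin), z.max(), zmin) # empty cells stay high
    surface = grey_opening(filled, size=window)      # removes raised objects
    # points within dz of the opened surface are labeled ground
    return z - surface[ij[:, 0], ij[:, 1]] < dz

pts = np.random.default_rng(3).uniform(0, 50, (10_000, 3))
pts[:, 2] = 0.1 * pts[:, 0]          # sloping bare ground
pts[:100, 2] += 5.0                  # a "building" to be filtered out
print(ground_mask(pts).sum(), "of", len(pts), "points kept as ground")
```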
Tiwari, Swapnil; Deb, Manas Kanti; Sen, Bhupendra K
2017-04-15
A new cloud point extraction (CPE) method for the determination of hexavalent chromium, i.e. Cr(VI), in food samples is established with subsequent diffuse reflectance-Fourier transform infrared (DRS-FTIR) analysis. The method demonstrates enrichment of Cr(VI) after its complexation with 1,5-diphenylcarbazide. The reddish-violet complex formed showed λmax at 540 nm. Micellar phase separation occurred at the cloud point temperature of the non-ionic surfactant Triton X-100, and the complex was entrapped in the surfactant and analyzed using DRS-FTIR. Under optimized conditions, the limits of detection (LOD) and quantification (LOQ) were 1.22 and 4.02 μg mL-1, respectively. Excellent linearity, with a correlation coefficient of 0.94, was found for the concentration range of 1-100 μg mL-1. At 10 μg mL-1 the standard deviation for 7 replicate measurements was 0.11 μg mL-1. The method was successfully applied to commercially marketed foodstuffs, and good recoveries (81-112%) were obtained by spiking the real samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Galbeiro, Rafaela; Garcia, Samara; Gaubeur, Ivanise
2014-04-01
Cloud point extraction (CPE) was used to simultaneously preconcentrate trace-level cadmium, nickel and zinc for determination by flame atomic absorption spectrometry (FAAS). 1-(2-Pyridylazo)-2-naphthol (PAN) was used as a complexing agent, and the metal complexes were extracted from the aqueous phase by the surfactant Triton X-114 ((1,1,3,3-tetramethylbutyl)phenyl-polyethylene glycol). Under optimized complexation and extraction conditions, the limits of detection were 0.37 μg L-1 (Cd), 2.6 μg L-1 (Ni) and 2.3 μg L-1 (Zn). The extraction was quantitative, with a preconcentration factor of 30 and enrichment factors estimated to be 42, 40 and 43, respectively. The method was applied to different complex samples, and the accuracy was evaluated by analyzing a water standard reference material (NIST SRM 1643e), yielding results in agreement with the certified values. Copyright © 2013 Elsevier GmbH. All rights reserved.
NASA Astrophysics Data System (ADS)
Haider, Shahid A.; Tran, Megan Y.; Wong, Alexander
2018-02-01
Observing the circular dichroism (CD) caused by organic molecules in biological fluids can provide powerful indicators of patient health and provide diagnostic clues for treatment. Current methods for this kind of analysis involve tabletop devices that weigh tens of kilograms and cost on the order of tens of thousands of dollars, making them prohibitive in point-of-care diagnostic applications. In an effort to reduce the size, cost, and complexity of CD estimation systems for point-of-care diagnostics, we propose a novel method for CD estimation that leverages a vortex half-wave retarder between two linear polarizers and a two-dimensional photodetector array to provide an overall complexity reduction in the system. This enables the measurement of polarization variations across multiple polarizations after they interact with a biological sample, simultaneously, without the need for mechanical actuation. We further discuss design considerations of this methodology in the context of practical applications to point-of-care diagnostics.
Hyperspectral microscopic imaging by multiplex coherent anti-Stokes Raman scattering (CARS)
NASA Astrophysics Data System (ADS)
Khmaladze, Alexander; Jasensky, Joshua; Zhang, Chi; Han, Xiaofeng; Ding, Jun; Seeley, Emily; Liu, Xinran; Smith, Gary D.; Chen, Zhan
2011-10-01
Coherent anti-Stokes Raman scattering (CARS) microscopy is a powerful technique for imaging the chemical composition of complex samples in biophysics, biology and materials science. CARS is a four-wave mixing process: the application of a spectrally narrow pump beam and a spectrally wide Stokes beam excites multiple Raman transitions, which are probed by a probe beam. This generates a coherent, directional CARS signal several orders of magnitude more intense than spontaneous Raman scattering. Recent advances in the development of ultrafast lasers, as well as photonic crystal fibers (PCF), enable multiplex CARS. In this study, we employed two scanning imaging methods. In the first, detection is performed by a photomultiplier tube (PMT) attached to the spectrometer; acquiring a series of images while tuning the wavelength between images allows for subsequent reconstruction of the spectrum at each image point. The second method detects the CARS spectrum at each point with a cooled charge-coupled device (CCD) camera; coupled with point-by-point scanning, this allows for hyperspectral microscopic imaging. We applied this CARS imaging system to study biological samples such as oocytes.
Detection of image structures using the Fisher information and the Rao metric.
Maybank, Stephen J
2004-12-01
In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.
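For concreteness, the central quantities the abstract refers to can be written out; the covering-number form in the last line is the standard volume-based bound, stated here as a reminder of the mechanism rather than as the paper's exact expression.

```latex
% Fisher information of the measurement density p(x | theta):
I_{ij}(\theta) = \mathbb{E}\!\left[
  \frac{\partial \log p(x\mid\theta)}{\partial \theta_i}\,
  \frac{\partial \log p(x\mid\theta)}{\partial \theta_j}\right]

% Rao metric induced on the parameter space:
ds^2 = \sum_{i,j} I_{ij}(\theta)\, d\theta_i\, d\theta_j

% Number of samples needed to cover the parameter space T at Rao
% spacing r, with V_r the volume of a Rao ball of radius r:
N(r) \sim \frac{1}{V_r}\int_T \sqrt{\det I(\theta)}\; d\theta
```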
Path optimization method for the sign problem
NASA Astrophysics Data System (ADS)
Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji
2018-03-01
We propose a path optimization method (POM) to evade the sign problem in Monte Carlo calculations for complex actions. Among the many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or sampled stochastically. When the action has singular points, or there are multiple critical points near the original integration surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One way to avoid the singular points is to optimize the integration path, designing it not to hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f ∈ R) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose POM and discuss how the sign problem can be avoided in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
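In standard notation, the complexified one-variable integral and the quantity being maximized read as follows; this is the usual Jacobian-included effective action for such path deformations, written out for orientation rather than copied from the proceedings.

```latex
% Deformed path z = t + i f(t); the Jacobian enters the effective action:
Z = \int dt\,\bigl(1 + i f'(t)\bigr)\, e^{-S(t + i f(t))}
  = \int dt\, e^{-S_{\mathrm{eff}}(t)},
\qquad
S_{\mathrm{eff}}(t) = S\bigl(t + i f(t)\bigr) - \ln\bigl(1 + i f'(t)\bigr)

% Average phase factor to be enhanced by optimizing f:
\langle e^{i\theta} \rangle
 = \frac{\int dt\, e^{-\operatorname{Re} S_{\mathrm{eff}}(t)}\,
          e^{-i \operatorname{Im} S_{\mathrm{eff}}(t)}}
        {\int dt\, e^{-\operatorname{Re} S_{\mathrm{eff}}(t)}}
```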
Floyd A. Johnson
1961-01-01
This report assumes a knowledge of the principles of point sampling as described by Grosenbaugh, Bell and Alexander, and others. Whenever trees are counted at every point in a sample of points (large sample) and measured for volume at a portion (small sample) of these points, the sampling design could be called ratio double sampling. If the large...
NASA Astrophysics Data System (ADS)
Kassem, Mohammed A.; Amin, Alaa S.
2015-02-01
A new method to estimate rhodium in different samples at trace levels has been developed. Rhodium was complexed with 5-(4′-nitro-2′,6′-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) as a complexing agent in an aqueous medium and concentrated by using Triton X-114 as a surfactant. The rhodium complex was preconcentrated by a cloud point extraction process using the nonionic surfactant Triton X-114 to extract the complex from aqueous solutions at pH 4.75. After phase separation at 50 °C, the surfactant-rich phase was heated again at 100 °C to remove water after decantation, and the remaining phase was dissolved using 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear over the concentration range of 0.5-75 ng mL-1 and the detection limit was 0.15 ng mL-1 of the original solution. An enhancement factor of 500 was achieved for 250 mL samples containing the analyte, and relative standard deviations were ⩽1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical, and was safely applied to rhodium determination in different complex materials such as synthetic mixtures of alloys and environmental water samples.
NASA Astrophysics Data System (ADS)
Owers, Christopher J.; Rogers, Kerrylee; Woodroffe, Colin D.
2018-05-01
Above-ground biomass represents a small yet significant contributor to carbon storage in coastal wetlands. Despite this, above-ground biomass is often poorly quantified, particularly in areas where vegetation structure is complex. Traditional methods for providing accurate estimates involve harvesting vegetation to develop mangrove allometric equations and quantify saltmarsh biomass in quadrats. However broad scale application of these methods may not capture structural variability in vegetation resulting in a loss of detail and estimates with considerable uncertainty. Terrestrial laser scanning (TLS) collects high resolution three-dimensional point clouds capable of providing detailed structural morphology of vegetation. This study demonstrates that TLS is a suitable non-destructive method for estimating biomass of structurally complex coastal wetland vegetation. We compare volumetric models, 3-D surface reconstruction and rasterised volume, and point cloud elevation histogram modelling techniques to estimate biomass. Our results show that current volumetric modelling approaches for estimating TLS-derived biomass are comparable to traditional mangrove allometrics and saltmarsh harvesting. However, volumetric modelling approaches oversimplify vegetation structure by under-utilising the large amount of structural information provided by the point cloud. The point cloud elevation histogram model presented in this study, as an alternative to volumetric modelling, utilises all of the information within the point cloud, as opposed to sub-sampling based on specific criteria. This method is simple but highly effective for both mangrove (r2 = 0.95) and saltmarsh (r2 > 0.92) vegetation. Our results provide evidence that application of TLS in coastal wetlands is an effective non-destructive method to accurately quantify biomass for structurally complex vegetation.
Direct sampling for stand density index
Mark J. Ducey; Harry T. Valentine
2008-01-01
A direct method of estimating stand density index in the field, without complex calculations, would be useful in a variety of silvicultural situations. We present just such a method. The approach uses an ordinary prism or other angle gauge, but it involves deliberately "pushing the point" or, in some cases, "pulling the point." This adjusts the...
Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran
2010-05-01
Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and edge density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte Carlo simulation was applied to study the performance of different designs. Random and systematic sampling were applied for four sample sizes and five buffer widths. The latter feature was relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes with seven or 20 land cover classes to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of the Shannon's diversity estimator was shown to decrease when sample size increased. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived, showing that point sampling can be a competitive alternative to complete wall-to-wall mapping.
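The basic estimator is easy to reproduce. The Python sketch below point-samples a synthetic categorical raster and compares the point-based estimate of Shannon's diversity with the wall-to-wall value, making the small-sample negative bias the abstract mentions visible; the raster, class count, and sample sizes are illustrative assumptions, not NILS data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical categorical land-cover raster (7 classes), standing in
# for a mapped NILS landscape.
landscape = rng.integers(0, 7, size=(500, 500))

def shannon_from_points(raster, n_points):
    """Estimate Shannon's diversity H' from randomly placed sample points."""
    r = rng.integers(0, raster.shape[0], n_points)
    c = rng.integers(0, raster.shape[1], n_points)
    _, counts = np.unique(raster[r, c], return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

# Wall-to-wall value from the complete map vs. point-sampling estimates.
_, full = np.unique(landscape, return_counts=True)
p = full / full.sum()
print("wall-to-wall H':", round(-np.sum(p * np.log(p)), 3))
for n in (25, 100, 400):
    est = np.mean([shannon_from_points(landscape, n) for _ in range(200)])
    print(f"point sampling, n={n:>3}: mean H' = {est:.3f}")  # biased low for small n
```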
Xu, Zhanfeng; Bunker, Christopher E; Harrington, Peter de B
2010-11-01
Monitoring changes in the physical properties of jet fuel is important because fuel used in high-performance aircraft must meet rigorous specifications. Near-infrared (NIR) spectroscopy is a fast method to characterize fuels. Because of the complexity of NIR spectral data, chemometric techniques are used to extract relevant information from the spectra to accurately classify physical properties of complex fuel samples. In this work, discrimination of fuel types and classification of flash point, freezing point, boiling point (10%, v/v), boiling point (50%, v/v), and boiling point (90%, v/v) of jet fuels (JP-5, JP-8, Jet A, and Jet A1) were investigated. Each physical property was divided into three classes, low, medium, and high ranges, using two evaluations with different class boundary definitions. The class boundaries function as thresholds that alarm when fuel properties change. Optimal partial least squares discriminant analysis (oPLS-DA), a fuzzy rule-building expert system (FuRES), and support vector machines (SVM) were used to build calibration models between the NIR spectra and the classes of each physical property, and the three methods were compared with respect to prediction accuracy. The calibration models were validated by applying bootstrap Latin partitions (BLP), which give a measure of precision. Under one of the boundary definitions, FuRES obtained prediction accuracies of 97 ± 2% for the flash point, 94 ± 2% for the freezing point, 99 ± 1% for the boiling point (10%, v/v), 98 ± 2% for the boiling point (50%, v/v), and 96 ± 1% for the boiling point (90%, v/v). Both FuRES and SVM obtained statistically better prediction accuracy than oPLS-DA. The results indicate that, combined with chemometric classifiers, NIR spectroscopy could be a fast method to monitor changes in jet fuel physical properties.
A Scanning Quantum Cryogenic Atom Microscope
NASA Astrophysics Data System (ADS)
Lev, Benjamin
Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity, high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room-to-cryogenic temperatures with unprecedented DC-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (2 μm), or 6 nT/√Hz per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly one hundred points with an effective field sensitivity of 600 pT/√Hz per point during the same time as a point-by-point scanner would measure these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly two orders of magnitude improvement in magnetic flux sensitivity (down to 10-6 Φ0/√Hz) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are for the first time carefully benchmarked by imaging magnetic fields arising from microfabricated wire patterns, using samples that may be scanned, cryogenically cooled, and easily exchanged. We anticipate the SQCRAMscope will provide charge transport images at temperatures from room temperature to 4 K in unconventional superconductors and topologically nontrivial materials.
Scanning Quantum Cryogenic Atom Microscope
NASA Astrophysics Data System (ADS)
Yang, Fan; Kollár, Alicia J.; Taylor, Stephen F.; Turner, Richard W.; Lev, Benjamin L.
2017-03-01
Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed-matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room-to-cryogenic temperatures with unprecedented dc-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (approximately 2 μm) or 6 nT/√Hz per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly 100 points with an effective field sensitivity of 600 pT/√Hz for each point during the same time as a point-by-point scanner measures these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly 2 orders of magnitude improvement in magnetic flux sensitivity (down to 10-6 Φ0/√Hz) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are carefully benchmarked by imaging magnetic fields arising from microfabricated wire patterns in a system where samples may be scanned, cryogenically cooled, and easily exchanged. We anticipate the SQCRAMscope will provide charge-transport images at temperatures from room temperature to 4 K in unconventional superconductors and topologically nontrivial materials.
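The quoted parallel-measurement advantage is just root-N averaging; as a consistency check of the numbers in both abstracts, under that standard assumption:

```latex
\delta B_{\mathrm{eff}} = \frac{\delta B_{\mathrm{point}}}{\sqrt{N}},
\qquad
\frac{6\ \mathrm{nT}/\sqrt{\mathrm{Hz}}}{\sqrt{100}}
  = 0.6\ \mathrm{nT}/\sqrt{\mathrm{Hz}}
  = 600\ \mathrm{pT}/\sqrt{\mathrm{Hz}}
```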
NASA Astrophysics Data System (ADS)
Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw
2012-12-01
We explore the relation between correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for simultaneous calculation of all three, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10^4 points within minutes on an average notebook computer.
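As a reference point for the quantities involved, the Python sketch below computes sample entropy with every subsequence used as a template (the full-series template choice the abstract adopts), in brute-force O(N²) form rather than the authors' fast simultaneous algorithm; m, the tolerance r, and the test signal are conventional choices, not taken from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy, SampEn = -ln(A/B), using all templates."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def count_matches(mm):
        # all length-mm subsequences, trimmed so m and m+1 use N-m templates
        t = np.lib.stride_tricks.sliding_window_view(x, mm)[: len(x) - m]
        # Chebyshev distance between all template pairs
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (d[np.triu_indices(len(t), k=1)] < r).sum()  # no self-matches
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)

x = np.random.default_rng(5).normal(size=1000)
print(f"SampEn(m=2, r=0.2*std) ~ {sample_entropy(x):.3f}")
```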
Sun, Mei; Wu, Qianghua
2010-04-15
A cloud point extraction (CPE) method for the preconcentration of ultra-trace aluminum in human albumin prior to its determination by graphite furnace atomic absorption spectrometry (GFAAS) has been developed. The CPE method is based on the complex of Al(III) with 1-(2-pyridylazo)-2-naphthol (PAN), with Triton X-114 used as the non-ionic surfactant. The main factors affecting cloud point extraction efficiency, such as solution pH, concentration and kind of complexing agent, concentration of non-ionic surfactant, and equilibration temperature and time, were investigated in detail. An enrichment factor of 34.8 was obtained for the preconcentration of Al(III) from a 10 mL solution. Under the optimal conditions, the detection limit for Al(III) was 0.06 ng mL-1. The relative standard deviation (n=7) was 3.6%, and recoveries of aluminum ranged from 92.3% to 94.7% for three samples. This method is simple, accurate and sensitive, and can be applied to the determination of ultra-trace aluminum in human albumin. 2009 Elsevier B.V. All rights reserved.
Probability and surprisal in auditory comprehension of morphologically complex words.
Balling, Laura Winther; Baayen, R Harald
2012-10-01
Two auditory lexical decision experiments document for morphologically complex words two points at which the probability of a target word given the evidence shifts dramatically. The first point is reached when morphologically unrelated competitors are no longer compatible with the evidence. Adapting terminology from Marslen-Wilson (1984), we refer to this as the word's initial uniqueness point (UP1). The second point is the complex uniqueness point (CUP) introduced by Balling and Baayen (2008), at which morphologically related competitors become incompatible with the input. Later initial as well as complex uniqueness points predict longer response latencies. We argue that the effects of these uniqueness points arise due to the large surprisal (Levy, 2008) carried by the phonemes at these uniqueness points, and provide independent evidence that how cumulative surprisal builds up in the course of the word co-determines response latencies. The presence of effects of surprisal, both at the initial uniqueness point of complex words, and cumulatively throughout the word, challenges the Shortlist B model of Norris and McQueen (2008), and suggests that a Bayesian approach to auditory comprehension requires complementation from information theory in order to do justice to the cognitive cost of updating probability distributions over lexical candidates. Copyright © 2012 Elsevier B.V. All rights reserved.
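For reference, the information-theoretic quantity invoked here is the standard one (Levy, 2008): the surprisal of each phoneme given the input so far, and its cumulative sum over the word; the notation below is ours, added for concreteness, not the paper's.

```latex
s(p_k) = -\log_2 P\bigl(p_k \mid p_1 \ldots p_{k-1}\bigr),
\qquad
S_{\mathrm{cum}}(k) = \sum_{j=1}^{k} s(p_j)
```

At a uniqueness point the set of compatible lexical candidates collapses sharply, so the conditional probability of the heard phoneme is low and its surprisal s(p_k) correspondingly high, which is the link the abstract draws between uniqueness points and response latencies.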
Computer generated hologram from point cloud using graphics processor.
Chen, Rick H-Y; Wilkinson, Timothy D
2009-12-20
Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique.
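To make the computational task concrete, the Python sketch below accumulates a phase-only hologram from a random point cloud by superposing spherical wavelets; it omits the paper's Gaussian surface interpolation, occlusion handling, and GPU parallelization, and all parameters (wavelength, pixel pitch, geometry) are illustrative assumptions.

```python
import numpy as np

# Minimal point-source hologram accumulation (no occlusion, CPU only).
wavelength = 532e-9                 # m
k = 2 * np.pi / wavelength
pitch = 8e-6                        # hologram sample pitch, m
nx = ny = 512

ix, iy = np.meshgrid(np.arange(nx), np.arange(ny))
hx, hy = (ix - nx / 2) * pitch, (iy - ny / 2) * pitch

rng = np.random.default_rng(6)
points = np.column_stack([rng.uniform(-1e-3, 1e-3, 50),   # x (m)
                          rng.uniform(-1e-3, 1e-3, 50),   # y (m)
                          rng.uniform(0.05, 0.10, 50)])   # depth z (m)

field = np.zeros((ny, nx), dtype=complex)
for x0, y0, z0 in points:           # superpose one spherical wavelet per point
    r = np.sqrt((hx - x0) ** 2 + (hy - y0) ** 2 + z0 ** 2)
    field += np.exp(1j * k * r) / r

hologram = np.angle(field)          # phase-only CGH
print(hologram.shape, hologram.min(), hologram.max())
```

The per-point loop is what the paper maps onto the GPU: each hologram sample accumulates contributions independently, so the computation is embarrassingly data-parallel.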
Wilkison, D.H.; Armstrong, D.J.; Hampton, S.A.
2009-01-01
From 1998 through 2007, over 750 surface-water or bed-sediment samples in the Blue River Basin - a largely urban basin in metropolitan Kansas City - were analyzed for more than 100 anthropogenic compounds. Compounds analyzed included nutrients, fecal-indicator bacteria, suspended sediment, and pharmaceuticals and personal care products. Non-point source runoff, hydrologic alterations, and numerous waste-water discharge points resulted in the routine detection of complex mixtures of anthropogenic compounds in samples from basin stream sites. Temporal and spatial variations in concentrations and loads of nutrients, pharmaceuticals, and organic wastewater compounds were observed, primarily related to a site's proximity to point-source discharges and stream-flow dynamics. © 2009 ASCE.
Electrical Chips for Biological Point-of-Care Detection.
Reddy, Bobby; Salm, Eric; Bashir, Rashid
2016-07-11
As the future of health care diagnostics moves toward more portable and personalized techniques, there is immense potential to harness the power of electrical signals for biological sensing and diagnostic applications at the point of care. Electrical biochips can be used to both manipulate and sense biological entities, as they can have several inherent advantages, including on-chip sample preparation, label-free detection, reduced cost and complexity, decreased sample volumes, increased portability, and large-scale multiplexing. The advantages of fully integrated electrical biochip platforms are particularly attractive for point-of-care systems. This review summarizes these electrical lab-on-a-chip technologies and highlights opportunities to accelerate the transition from academic publications to commercial success.
Interpolation Approach To Computer-Generated Holograms
NASA Astrophysics Data System (ADS)
Yatagai, Toyohiko
1983-10-01
A computer-generated hologram (CGH) for reconstructing independent N×N resolution points would actually require a hologram made up of N×N sampling cells. For dependent sampling points of Fourier transform CGHs, the memory required for computation can be reduced by using an interpolation technique for the reconstructed image points. We have made a mosaic hologram which consists of K×K subholograms with N×N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK×NK resolution points. The main advantage of the present algorithm is that a sufficiently large hologram of NK×NK sample points is synthesized from K×K subholograms which are successively calculated from the data of N×N sample points and also successively plotted.
Statistical approaches for the determination of cut points in anti-drug antibody bioassays.
Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A
2015-03-01
Cut points in immunogenicity assays are used to classify future specimens into anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates taking sources of future variability into account, such as runs, days, analysts, gender, drug-spiked and the biological variability of un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may contain already ADA-positives or lead to signals that erroneously appear to be ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers and variance structure, and v) the choice between normal and log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly larger proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow the post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. Copyright © 2015 Elsevier B.V. All rights reserved.
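A minimal version of the parametric cut point calculation, without the random-effects structure or the mixture modeling handled by mixADA, is sketched below in Python: a one-sided 95% prediction bound computed once on the raw scale and once after a log transform, which makes the distribution-choice sensitivity the abstract highlights directly visible. The simulated signals and sample size are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical normalized signals of drug-naive specimens (log-normal-ish).
signal = np.exp(rng.normal(0.0, 0.25, 120))
n = len(signal)
t95 = stats.t.ppf(0.95, df=n - 1)       # one-sided 95% quantile

# Prediction bound for one future observation, assuming log-normality:
log_s = np.log(signal)
cut_lognormal = np.exp(log_s.mean() +
                       t95 * log_s.std(ddof=1) * np.sqrt(1 + 1 / n))

# Same computation on the raw scale, assuming normality:
cut_normal = (signal.mean() +
              t95 * signal.std(ddof=1) * np.sqrt(1 + 1 / n))

print(f"cut point (log-normal): {cut_lognormal:.3f}")
print(f"cut point (normal)    : {cut_normal:.3f}")
```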
Determination of Cd in urine by cloud point extraction-tungsten coil atomic absorption spectrometry.
Donati, George L; Pharr, Kathryn E; Calloway, Clifton P; Nóbrega, Joaquim A; Jones, Bradley T
2008-09-15
Cadmium concentrations in human urine are typically at or below the 1 μg L-1 level, so only a handful of techniques may be appropriate for this application. These include sophisticated methods such as graphite furnace atomic absorption spectrometry and inductively coupled plasma mass spectrometry. While tungsten coil atomic absorption spectrometry is a simpler and less expensive technique, its practical detection limits often prohibit the detection of Cd in normal urine samples. In addition, the nature of the urine matrix often necessitates accurate background correction techniques, which would add expense and complexity to the tungsten coil instrument. This manuscript describes a cloud point extraction method that reduces matrix interference while preconcentrating Cd by a factor of 15. Ammonium pyrrolidinedithiocarbamate and Triton X-114 are used as complexing agent and surfactant, respectively, in the extraction procedure. Triton X-114 forms an extractant coacervate surfactant-rich phase that is denser than water, so the aqueous supernatant is easily removed, leaving the metal-containing surfactant layer intact. A 25 μL aliquot of this preconcentrated sample is placed directly onto the tungsten coil for analysis. The cloud point extraction procedure allows for simple background correction based either on the measurement of absorption at a nearby wavelength, or on measurement of absorption at a time in the atomization step immediately prior to the onset of the Cd signal. Seven human urine samples are analyzed by this technique and the results are compared to those found by inductively coupled plasma mass spectrometry analysis of the same samples performed at a different institution. The limit of detection for Cd in urine is 5 ng L-1 for cloud point extraction tungsten coil atomic absorption spectrometry. The accuracy of the method is determined with a standard reference material (toxic metals in freeze-dried urine) and the determined values agree with the reported levels at the 95% confidence level.
NMR study of xenotropic murine leukemia virus-related virus protease in a complex with amprenavir
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furukawa, Ayako; Okamura, Hideyasu; Morishita, Ryo
2012-08-24
Highlights: • Protease (PR) of XMR virus (XMRV) was successfully synthesized with a cell-free system. • The interface of XMRV PR with an inhibitor, amprenavir (APV), was identified with NMR. • Structural heterogeneity is induced for the two PR protomers in the APV:PR = 1:2 complex. • Structural heterogeneity is transmitted even to regions distant from the interface. • Long-range transmission of structural change may be utilized for drug discovery. -- Abstract: Xenotropic murine leukemia virus-related virus (XMRV) is a virus created through recombination of two murine leukemia proviruses under artificial conditions during the passage of human prostate cancer cells in athymic nude mice. The homodimeric protease (PR) of XMRV plays a critical role in the production of functional viral proteins and is a prerequisite for viral replication. We synthesized XMRV PR using the wheat germ cell-free expression system and carried out structural analysis of XMRV PR in a complex with an inhibitor, amprenavir (APV), by means of NMR. Five different combinatorially ¹⁵N-labeled samples were prepared and backbone resonance assignments were made by applying Otting's method, with which the amino acid types of the [¹H, ¹⁵N] HSQC resonances were automatically identified using the five samples (Wu et al., 2006). A titration experiment involving APV revealed that one APV molecule binds to one XMRV PR dimer. For many residues, two distinct resonances were observed, which is thought to be due to the structural heterogeneity between the two protomers in the APV:XMRV PR = 1:2 complex. PR residues at the interface with APV have been identified on the basis of chemical shift perturbation and identification of the intermolecular NOEs by means of filtered NOE experiments. Interestingly, chemical shift heterogeneity between the two protomers of XMRV PR has been observed not only at the interface with APV but also in regions apart from the interface. This indicates that the structural heterogeneity induced by the asymmetry of the binding of APV to the XMRV PR dimer is transmitted to distant regions. This is in contrast to the case of the APV:HIV-1 PR complex, in which the structural heterogeneity is only localized at the interface. Long-range transmission of the structural change identified for the XMRV PR complex might be utilized for the discovery of a new type of drug.
Improved graphite furnace atomizer
Siemer, D.D.
1983-05-18
A graphite furnace atomizer for use in graphite furnace atomic absorption spectroscopy is described wherein the heating elements are affixed near the optical path and away from the point of sample deposition, so that when the sample is volatilized the spectroscopic temperature at the optical path is at least that of the volatilization temperature, whereby analyte-concomitant complex formation is advantageously reduced. The atomizer may be elongated along its axis to increase the distance between the optical path and the sample deposition point. Also, the atomizer may be elongated along the axis of the optical path, whereby its analytical sensitivity is greatly increased.
ERIC Educational Resources Information Center
Fischer, Dan
2002-01-01
Points out the enthusiasm of students towards the complex chemical survival mechanism of some plants during the early stages of life. Uses allelopathic research to introduce students to conducting experimental research. Includes sample procedures, a timetable, and a sample grading sheet. (YDS)
Kartal Temel, Nuket; Gürkan, Ramazan
2018-03-01
A novel ultrasound-assisted cloud point extraction method was developed for preconcentration and determination of V(V) in beverage samples. After complexation by pyrogallol in the presence of safranin T at pH 6.0, V(V) ions as a ternary complex are extracted into the micellar phase of Triton X-114. The complex was monitored at 533 nm by spectrophotometry. The matrix effect on the recovery of V(V) from the spiked samples at 50 μg L-1 was evaluated. Under optimized conditions, the limits of detection and quantification of the method were 0.58 and 1.93 μg L-1, respectively, in a linear range of 2-500 μg L-1, with sensitivity enhancement and preconcentration factors of 47.7 and 40 for preconcentration from 15 mL of sample solution. The recoveries from spiked samples were in the range of 93.8-103.2% with a relative standard deviation ranging from 2.6% to 4.1% (25, 100 and 250 μg L-1, n: 5). The accuracy was verified by analysis of two certified samples, and the results were in good agreement with the certified values. The intra-day and inter-day precision were assessed by reproducibility (3.3-3.4%) and repeatability (3.4-4.1%) analyses for five replicate measurements of V(V) in quality control samples spiked with 5, 10 and 15 μg L-1. Trace V(V) contents of the selected beverage samples were successfully determined by the developed method.
Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng
2016-01-01
With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
Tsafrir, D; Tsafrir, I; Ein-Dor, L; Zuk, O; Notterman, D A; Domany, E
2005-05-15
We introduce a novel unsupervised approach for the organization and visualization of multidimensional data. At the heart of the method is a presentation of the full pairwise distance matrix of the data points, viewed in pseudocolor. The ordering of points is iteratively permuted in search of a linear ordering, which can be used to study embedded shapes. Several examples indicate how the shapes of certain structures in the data (elongated, circular and compact) manifest themselves visually in our permuted distance matrix. It is important to identify the elongated objects since they are often associated with a set of hidden variables, underlying continuous variation in the data. The problem of determining an optimal linear ordering is shown to be NP-Complete, and therefore an iterative search algorithm with O(n^3) step-complexity is suggested. By using sorting points into neighborhoods (SPIN) to analyze colon cancer expression data, we were able to address the serious problem of sample heterogeneity, which hinders identification of metastasis-related genes in our data. Our methodology brings to light the continuous variation of heterogeneity, starting with homogeneous tumor samples and gradually increasing the amount of another tissue. Ordering the samples according to their degree of contamination by unrelated tissue allows the separation of genes associated with irrelevant contamination from those related to cancer progression. A software package will be available for academic users upon request.
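As an illustration of the distance-matrix ordering idea described above, the following Python sketch builds a pairwise distance matrix and applies a greedy nearest-neighbour chain as a cheap stand-in for SPIN's iterative permutation search; the synthetic data and the greedy heuristic are our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))       # hypothetical data: 60 points, 5 features
D = squareform(pdist(X))           # full pairwise distance matrix

def greedy_ordering(D):
    """Greedy nearest-neighbour chain: a cheap stand-in for SPIN's
    permutation search (finding the optimal ordering is NP-complete)."""
    n = D.shape[0]
    unvisited = set(range(1, n))
    order = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: D[order[-1], j])
        order.append(nxt)
        unvisited.remove(nxt)
    return order

order = greedy_ordering(D)
D_perm = D[np.ix_(order, order)]   # permuted matrix for pseudocolor display
```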
Meyer, Annabel; Focks, Andreas; Radl, Viviane; Welzl, Gerhard; Schöning, Ingo; Schloter, Michael
2014-01-01
In the present study, the influence of the land use intensity on the diversity of ammonia oxidizing bacteria (AOB) and archaea (AOA) in soils from different grassland ecosystems has been investigated in spring and summer of the season (April and July). Diversity of AOA and AOB was studied by TRFLP fingerprinting of amoA amplicons. The diversity from AOB was low and dominated by a peak that could be assigned to Nitrosospira. The obtained profiles for AOB were very stable and neither influenced by the land use intensity nor by the time point of sampling. In contrast, the obtained patterns for AOA were more complex although one peak that could be assigned to Nitrosopumilus was dominating all profiles independent from the land use intensity and the sampling time point. Overall, the AOA profiles were much more dynamic than those of AOB and responded clearly to the land use intensity. An influence of the sampling time point was again not visible. Whereas AOB profiles were clearly linked to potential nitrification rates in soil, major TRFs from AOA were negatively correlated to DOC and ammonium availability and not related to potential nitrification rates.
NASA Astrophysics Data System (ADS)
Wang, Jinxia; Dou, Aixia; Wang, Xiaoqing; Huang, Shusong; Yuan, Xiaoxiang
2016-11-01
Compared to remote sensing imagery, post-earthquake airborne Light Detection And Ranging (LiDAR) point cloud data contain high-precision three-dimensional information on earthquake damage, which can improve the accuracy of identifying destroyed buildings. However, after an earthquake, damaged buildings show so many different characteristics that the most commonly used pre-processing methods cannot distinguish between tree points and damaged-building points. In this study, we analyse the number of returns per pulse for tree and damaged-building point clouds and explore methods to distinguish between them. We propose a new method that searches a neighbourhood of a certain size and calculates the ratio (R) of neighbourhood points whose number of returns per pulse is greater than 1, in order to separate trees from buildings. We selected point clouds of typical undamaged buildings, collapsed buildings, and trees as samples from airborne LiDAR data acquired after the 2010 MW 7.0 Haiti earthquake, by means of human-computer interaction. The R-value that distinguishes trees from buildings was determined from these samples and then applied to test areas. The experimental results show that the proposed method can effectively distinguish between building points (undamaged and damaged) and tree points, but is limited in areas where buildings are varied, damage is complex, and trees are dense; the method therefore needs further improvement.
On an Integral with Two Branch Points
ERIC Educational Resources Information Center
de Oliveira, E. Capelas; Chiacchio, Ary O.
2006-01-01
The paper considers a class of real integrals performed by using a convenient integral in the complex plane. A complex integral containing a multi-valued function with two branch points is transformed into another integral containing a pole and a unique branch point. As a by-product we obtain a new class of integrals which can be calculated in a…
Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities. PMID:29270328
Wang, Lin; Lv, Xiangguo; Jin, Chongrui; Guo, Hailin; Shu, Huiquan; Fu, Qiang; Sa, Yinglong
2018-02-01
To develop a standardized PU-score (posterior urethral stenosis score), with the goal of using this scoring system as a preliminary predictor of surgical complexity and prognosis of posterior urethral stenosis. We retrospectively reviewed records of all patients who underwent posterior urethral surgery at our institution from 2013 to 2015. The PU-score is based on 5 components, namely etiology (1 or 2 points), location (1-3 points), length (1-3 points), urethral fistula (1 or 2 points), and posterior urethral false passage (1 point). We calculated the score of all patients and analyzed its association with surgical complexity, stenosis recurrence, intraoperative blood loss, erectile dysfunction, and urinary incontinence. There were 144 patients who underwent low complexity urethral surgery (direct vision internal urethrotomy, anastomosis with or without crural separation) with a mean score of 5.1 points, whereas 143 underwent high complexity urethroplasty (anastomosis with inferior pubectomy or urethrorectal fistula repair, perineal or scrotum skin flap urethroplasty, bladder flap urethroplasty) with a mean score of 6.9 points. The increase of PU-score was predictive of higher surgical complexity (P = .000), higher recurrence (P = .002), more intraoperative blood loss (P = .000), and decrease of preoperative (P = .037) or postoperative erectile function (P = .047). However, no association was observed between PU-score and urinary incontinence (P = .213). The PU-score is a novel and meaningful scoring system that describes the essential factors in determining the complexity and prognosis for posterior urethral stenosis. Copyright © 2017. Published by Elsevier Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, J.R. Jr.
1984-04-01
Reservoir characterization of Mesaverde meanderbelt sandstones is used to determine directional continuity of permeable zones. A 500-m (1600 ft) wide fluvial meanderbelt in the Mesaverde Group is exposed as laterally continuous 3-10-m (10-33-ft) high sandstone cliffs north of Rangely, Colorado. Forty-eight detailed measured sections through 3 point bar complexes oriented at right angles to the long axis of deposition and 1 complex oriented parallel to deposition were prepared. Sections were tied together by detailed sketches delineating and tracing major bounding surfaces such as scours and clay drapes. These complexes contain 3 to 8 multilateral sandstone packages separated by 5-20 cm (2-8 in.) interbedded siltstone and shale beds. Component facies are point bars, crevasse splays, chute bars, and floodplain/overbank deposits. Two types of lateral accretion surfaces are recognized in the point bar facies. Gently dipping lateral accretion surfaces contain fining-upward sandstone packages; large-scale trough cross-bedding at the base grades upward into ripples and plane beds. Steeply dipping lateral accretion surfaces enclose beds characterized by climbing-ripple cross-laminations. Bounding surfaces draped by shale lags can seal vertically stacked point bars from reservoir communication. Scoured boundaries allow communication in some stacked point bars. Crevasse splays showing climbing ripples form tongues of very fine-grained sandstone which flank point bars. Chute channels commonly cut upper point bar surfaces at their downstream end. Chute facies are upward-fining with small-scale troughs and common dewatering structures. Siltstones and shales underlie the point bar complexes and completely encase the meanderbelt system. Bounding surfaces at the base of the complexes are erosional and contain large shale rip-up clasts.
Characterizing air quality data from complex network perspective.
Fan, Xinghua; Wang, Li; Xu, Huihui; Li, Shasha; Tian, Lixin
2016-02-01
Air quality depends mainly on changes in the emission of pollutants and their precursors. Understanding its characteristics is the key to predicting and controlling air quality. In this study, complex networks were built to analyze topological characteristics of air quality data by the correlation coefficient method. Firstly, PM2.5 (particulate matter with aerodynamic diameter less than 2.5 μm) indexes of eight monitoring sites in Beijing were selected as samples from January 2013 to December 2014. Secondly, the C-C method was applied to determine the structure of the phase space. Points in the reconstructed phase space were taken as nodes of the mapped network. Then, edges were created between nodes whose correlation exceeded a critical threshold. Three properties of the constructed networks, degree distribution, clustering coefficient, and modularity, were used to determine the optimal value of the critical threshold. Finally, by analyzing and comparing topological properties, we pointed out that similarities and differences in the constructed complex networks revealed influence factors and their different roles in the real air quality system.
Which skills and factors better predict winning and losing in high-level men's volleyball?
Peña, Javier; Rodríguez-Guerra, Jorge; Buscà, Bernat; Serra, Núria
2013-09-01
The aim of this study was to determine which skills and factors better predicted the outcomes of regular season volleyball matches in the Spanish "Superliga" and were significant for obtaining positive results in the game. The study sample consisted of 125 matches played during the 2010-11 Spanish men's first division volleyball championship. Matches were played by 12 teams composed of 148 players from 17 different nations from October 2010 to March 2011. The variables analyzed were the result of the game, team category, home/away court factors, points obtained in the break point phase, number of service errors, number of service aces, number of reception errors, percentage of positive receptions, percentage of perfect receptions, reception efficiency, number of attack errors, number of blocked attacks, attack points, percentage of attack points, attack efficiency, and number of blocks performed by both teams participating in the match. The results showed that the variables of team category, points obtained in the break point phase, number of reception errors, and number of blocked attacks by the opponent were significant predictors of winning or losing the matches. Odds ratios indicated that the odds of winning a volleyball match were 6.7 times greater for the teams belonging to higher rankings and that every additional point in Complex II increased the odds of winning a match by 1.5 times. Every reception and blocked ball error decreased the possibility of winning by 0.6 and 0.7 times, respectively.
Optical Ptychographic Microscope for Quantitative Bio-Mechanical Imaging
NASA Astrophysics Data System (ADS)
Anthony, Nicholas; Cadenazzi, Guido; Nugent, Keith; Abbey, Brian
The role that mechanical forces play in biological processes such as cell movement and death is becoming of significant interest to further develop our understanding of the inner workings of cells. The most common method used to obtain stress information is photoelasticity, which maps a sample's birefringence, or its direction-dependent refractive indices, using polarized light. However, this method only provides qualitative data, and for stress information to be useful quantitative data is required. Ptychography is a method for quantitatively determining the phase of a sample's complex transmission function. The technique relies upon the collection of multiple overlapping coherent diffraction patterns from laterally displaced points on the sample. The overlap of measurement points provides complementary information that significantly aids in the reconstruction of the complex wavefield exiting the sample and allows for quantitative imaging of weakly interacting specimens. Here we describe recent advances at La Trobe University Melbourne on achieving quantitative birefringence mapping using polarized light ptychography with applications in cell mechanics. Australian Synchrotron, ARC Centre of Excellence for Advanced Molecular Imaging.
Paper SERS chromatography for detection of trace analytes in complex samples
NASA Astrophysics Data System (ADS)
Yu, Wei W.; White, Ian M.
2013-05-01
We report the application of paper SERS substrates for the detection of trace quantities of multiple analytes in a complex sample in the form of paper chromatography. Paper chromatography facilitates the separation of different analytes from a complex sample into distinct sections in the chromatogram, which can then be uniquely identified using SERS. As an example, the separation and quantitative detection of heroin in a highly fluorescent mixture is demonstrated. Paper SERS chromatography has obvious applications, including law enforcement, food safety, and border protection, and facilitates the rapid detection of chemical and biological threats at the point of sampling.
Zhao, Lingling; Zhong, Shuxian; Fang, Keming; Qian, Zhaosheng; Chen, Jianrong
2012-11-15
A dual-cloud point extraction (d-CPE) procedure has been developed for simultaneous pre-concentration and separation of heavy metal ions (Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+) in water samples by inductively coupled plasma optical emission spectrometry (ICP-OES). The procedure is based on forming complexes of the metal ions with 8-hydroxyquinoline (8-HQ) in the Triton X-114 surfactant-rich phase. Instead of direct injection or analysis, the surfactant-rich phase containing the complexes was treated with nitric acid, and the detected ions were back-extracted into the aqueous phase at the second cloud point extraction stage, and finally determined by ICP-OES. Under the optimum conditions (pH = 7.0, Triton X-114 = 0.05% (w/v), 8-HQ = 2.0×10-4 mol L-1, HNO3 = 0.8 mol L-1), the detection limits for Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ were 0.01, 0.04, 0.01, 0.34, 0.05, and 0.04 μg L-1, respectively. Relative standard deviation (RSD) values for 10 replicates at 100 μg L-1 were lower than 6.0%. The proposed method could be successfully applied to the determination of Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ ions in water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Affinity learning with diffusion on tensor product graph.
Yang, Xingwei; Prasad, Lakshman; Latecki, Longin Jan
2013-01-01
In many applications, we are given a finite set of data points sampled from a data manifold and represented as a graph with edge weights determined by pairwise similarities of the samples. Often the pairwise similarities (which are also called affinities) are unreliable due to noise or due to intrinsic difficulties in estimating similarity values of the samples. As observed in several recent approaches, more reliable similarities can be obtained if the original similarities are diffused in the context of other data points, where the context of each point is a set of points most similar to it. Compared to the existing methods, our approach differs in two main aspects. First, instead of diffusing the similarity information on the original graph, we propose to utilize the tensor product graph (TPG) obtained by the tensor product of the original graph with itself. Since TPG takes into account higher-order information, it is not a surprise that we obtain more reliable similarities. However, it comes at the price of higher computational complexity and storage requirements. The key contribution of the proposed approach is that the information propagation on TPG can be computed with the same computational complexity and the same amount of storage as the propagation on the original graph. We prove that a graph diffusion process on TPG is equivalent to a novel iterative algorithm on the original graph, which is guaranteed to converge. After its convergence we obtain new edge weights that can be interpreted as new, learned affinities. We stress that the affinities are learned in an unsupervised setting. We illustrate the benefits of the proposed approach for data manifolds composed of shapes, images, and image patches on two very different tasks of image retrieval and image segmentation. With learned affinities, we achieve a bull's eye retrieval score of 99.99 percent on the MPEG-7 shape dataset, which is much higher than the state-of-the-art algorithms. When the data points are image patches, the NCut with the learned affinities not only significantly outperforms the NCut with the original affinities, but it also outperforms state-of-the-art image segmentation methods.
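The iterative update that the paper proves equivalent to diffusion on the tensor product graph can be sketched as follows; the k-NN sparsification and the damping factor used here to guarantee convergence are our assumptions, not necessarily the authors' exact preprocessing.

```python
import numpy as np

def tpg_affinities(W, k=10, alpha=0.9, iters=100):
    """Iterative process of the form Q <- S Q S^T + I, equivalent to
    diffusion on the tensor product graph. The damping factor alpha < 1
    is our assumption to ensure the iteration converges."""
    n = W.shape[0]
    S = np.zeros_like(W)
    idx = np.argsort(-W, axis=1)[:, :k]            # keep k strongest neighbours
    rows = np.arange(n)[:, None]
    S[rows, idx] = W[rows, idx]
    S = alpha * S / S.sum(axis=1, keepdims=True)   # damped row-stochastic matrix
    Q, I = S.copy(), np.eye(n)
    for _ in range(iters):
        Q = S @ Q @ S.T + I
    return Q                                       # learned affinities

W = np.random.default_rng(0).random((50, 50))
W = (W + W.T) / 2                                  # symmetric toy affinity matrix
A_learned = tpg_affinities(W)
```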
Thomas B. Lynch; Jeffrey H. Gove
2013-01-01
Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur...
Point sources of endocrine active compounds to aquatic environments such as waste water treatment plants, pulp and paper mills, and animal feeding operations invariably contain complex mixtures of chemicals. The current study investigates the use of targeted in vitro assays des...
Point sources of potentially endocrine active compounds to aquatic environments such as waste water treatment plants, pulp and paper mills, and animal feeding operations invariably contain complex mixtures of chemicals. The current study investigates the use of targeted in vitro ...
Analysis of macromolecules, ligands and macromolecule-ligand complexes
Von Dreele, Robert B [Los Alamos, NM
2008-12-23
A method for determining atomic level structures of macromolecule-ligand complexes through high-resolution powder diffraction analysis and a method for providing suitable microcrystalline powder for diffraction analysis are provided. In one embodiment, powder diffraction data is collected from samples of polycrystalline macromolecule and macromolecule-ligand complex and the refined structure of the macromolecule is used as an approximate model for a combined Rietveld and stereochemical restraint refinement of the macromolecule-ligand complex. A difference Fourier map is calculated and the ligand position and points of interaction between the atoms of the macromolecule and the atoms of the ligand can be deduced and visualized. A suitable polycrystalline sample of macromolecule-ligand complex can be produced by physically agitating a mixture of lyophilized macromolecule, ligand and a solvent.
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy: one uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that the initial point sets, when preprocessed by adaptive sampling using triangle patches, are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced from the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches. Adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
Sample size and allocation of effort in point count sampling of birds in bottomland hardwood forests
Smith, W.P.; Twedt, D.J.; Cooper, R.J.; Wiedenfeld, D.A.; Hamel, P.B.; Ford, R.P.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect of increasing the number of points or visits by comparing results of 150 four-minute point counts obtained from each of four stands on Delta Experimental Forest (DEF) during May 8-May 21, 1991 and May 30-June 12, 1992. For each stand, we obtained bootstrap estimates of mean cumulative number of species each year from all possible combinations of six points and six visits. ANOVA was used to model cumulative species as a function of number of points visited, number of visits to each point, and interaction of points and visits. There was significant variation in numbers of birds and species between regions and localities (nested within region); neither habitat, nor the interaction between region and habitat, was significant. For a = 0.05 and a = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total individuals (MSE = 9.28) and species (MSE = 3.79), respectively, whereas ? 25 percent of the mean could be achieved with five counts per factor level. Sample size sufficient to detect actual differences of Wood Thrush (Hylocichla mustelina) was >200, whereas the Prothonotary Warbler (Protonotaria citrea) required <10 counts. Differences in mean cumulative species were detected among number of points visited and among number of visits to a point. In the lower MAV, mean cumulative species increased with each added point through five points and with each additional visit through four visits. Although no interaction was detected between number of points and number of visits, when paired reciprocals were compared, more points invariably yielded a significantly greater cumulative number of species than more visits to a point. Still, 36 point counts per stand during each of two breeding seasons detected only 52 percent of the known available species pool in DEF.
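To make the sample-size logic concrete, here is a hedged sketch using the quoted MSE values; the detectable differences, significance level, and power are assumptions chosen for illustration, not values from the study.

```python
from scipy.stats import norm

def min_n_per_level(mse, d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per factor level for a
    two-sided comparison of two means with common variance mse."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * (z_a + z_b) ** 2 * mse / d ** 2

# MSE values quoted above; detectable differences d are our assumptions
print(min_n_per_level(mse=9.28, d=2.0))   # total individuals, detect a diff of 2
print(min_n_per_level(mse=3.79, d=1.0))   # species, detect a diff of 1
```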
What Can Quantum Optics Say about Computational Complexity Theory?
NASA Astrophysics Data System (ADS)
Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.
2015-02-01
Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from both quantum theory and the computational complexity theory point of view. We derive a general formula for calculating the output probabilities, and by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. It is believed that approximating permanents of complex matrices in general is a #P-hard problem. However, we show that these permanents can be approximated with an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.
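To make the cost of exact permanent computation concrete, here is Ryser's formula, a standard textbook algorithm with O(2^n) subset terms; it is not the BPP^NP approximation scheme discussed in the paper, only an illustration of why exact evaluation scales so badly.

```python
from itertools import combinations
import numpy as np

def permanent_ryser(A):
    """Exact permanent via Ryser's formula: a sum over all non-empty
    column subsets, hence exponential cost in the matrix dimension."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            row_sums = A[:, cols].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(permanent_ryser(A))   # perm = 1*4 + 2*3 = 10
```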
Lifetime Occupation and Late-Life Cognitive Performance Among Women.
Ribeiro, Pricila Cristina Correa; Lourenço, Roberto Alves
2015-01-01
We examined whether women who had regular jobs throughout life performed better cognitively than older adult housewives. Linear regression was used to compare global cognitive performance scores of housewives (G1) and women exposed to work of low (G2) and high (G3) complexity. The sample comprised 477 older adult Brazilian women, 430 (90.4%) of whom had performed lifelong jobs. In work with data, the G2 group's cognitive performance scores were 1.73 points higher (p = .03), and the G3 group scored 1.76 points (p = .02) higher, than the G1. In work with things and with people, the G3 scored, respectively, 2.04 (p < .01) and 2.21 (p < .01) cognitive test points higher than the G1. Based on our findings we suggest occupation of greater complexity is associated with better cognitive performance in women later in life.
Valenza, Gaetano; Garcia, Ronald G; Citi, Luca; Scilingo, Enzo P; Tomaz, Carlos A; Barbieri, Riccardo
2015-01-01
Nonlinear digital signal processing methods that address system complexity have provided useful computational tools for helping in the diagnosis and treatment of a wide range of pathologies. More specifically, nonlinear measures have been successful in characterizing patients with mental disorders such as Major Depression (MD). In this study, we propose the use of instantaneous measures of entropy, namely the inhomogeneous point-process approximate entropy (ipApEn) and the inhomogeneous point-process sample entropy (ipSampEn), to describe a novel characterization of MD patients undergoing affective elicitation. Because these measures are built within a nonlinear point-process model, they allow for the assessment of complexity in cardiovascular dynamics at each moment in time. Heartbeat dynamics were characterized from 48 healthy controls and 48 patients with MD while emotionally elicited through either neutral or arousing audiovisual stimuli. Experimental results coming from the arousing tasks show that ipApEn measures are able to instantaneously track heartbeat complexity as well as discern between healthy subjects and MD patients. Conversely, standard heart rate variability (HRV) analysis performed in both time and frequency domains did not show any statistical significance. We conclude that measures of entropy based on nonlinear point-process models might contribute to devising useful computational tools for care in mental health.
Robustness of critical points in a complex adaptive system: Effects of hedge behavior
NASA Astrophysics Data System (ADS)
Liang, Yuan; Huang, Ji-Ping
2013-08-01
In our recent papers, we have identified a class of phase transitions in the market-directed resource-allocation game, and found that there exists a critical point at which the phase transitions occur. The critical point is given by a certain resource ratio. Here, by performing computer simulations and theoretical analysis, we report that the critical point is robust against various kinds of human hedge behavior where the numbers of herds and contrarians can be varied widely. This means that the critical point can be independent of the total number of participants composed of normal agents, herds and contrarians, under some conditions. This finding means that the critical points we identified in this complex adaptive system (with adaptive agents) may also be an intensive quantity, similar to those revealed in traditional physical systems (with non-adaptive units).
Ulusoy, Halil Ibrahim
2014-01-01
A new micelle-mediated extraction method was developed for preconcentration of ultratrace Hg(II) ions prior to spectrophotometric determination. 2-(2'-Thiazolylazo)-p-cresol (TAC) and PONPE 7.5 were used as the chelating agent and nonionic surfactant, respectively. Hg(II) ions form a hydrophobic complex with TAC in a micelle medium. The main factors affecting cloud point extraction efficiency, such as pH of the medium, concentrations of TAC and PONPE 7.5, and equilibration temperature and time, were investigated in detail. An overall preconcentration factor of 33.3 was obtained upon preconcentration of a 50 mL sample. The LOD obtained under the optimal conditions was 0.86 μg/L, and the RSD for five replicate measurements of 100 μg/L Hg(II) was 3.12%. The method was successfully applied to the determination of Hg in environmental water samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrault, Joël, E-mail: joel.barrault@univ-poitiers.fr; Makhankova, Valeriya G., E-mail: leram@univ.kiev.ua; Khavryuchenko, Oleksiy V.
2012-03-15
From the selective transformation of the heterometallic (Zn-Mn or Cu-Mn) carboxylate complexes with 2,2′-bipyridyl by thermal degradation at relatively low (350 °C) temperature, it was possible to get either well defined spinel ZnMn₂O₄ over zinc oxide or well dispersed copper particles surrounded by a manganese oxide (Mn₃O₄) in a core-shell like structure. Morphology of the powder surface was examined by scanning electron microscopy with energy dispersive X-ray microanalysis (SEM/EDX). Surface composition was determined by X-ray photoelectron spectroscopy (XPS). The specific surface of the powders, measured by nitrogen adsorption, was found to be 33±0.2 and 9±0.06 m² g⁻¹ for the Zn-Mn and Cu-Mn samples, respectively, which is comparable to those of commercial products. - Graphical abstract: From the selective transformation of heterometallic (Zn-Mn or Cu-Mn) carboxylate complexes, it was possible to get either well defined spinel ZnMn₂O₄ over zinc oxide or well dispersed copper particles surrounded by a manganese oxide (Mn₃O₄) in a core-shell like structure. Highlights: • Thermal degradation of heterometallic complexes results in finely dispersed particles. • Core-shell Cu/Mn₃O₄ particles are obtained. • A ZnMn₂O₄ spinel layer covers the ZnO particles.
Accelerated high-resolution photoacoustic tomography via compressed sensing
NASA Astrophysics Data System (ADS)
Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward
2016-12-01
Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissues structures with suitable sparsity-constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.
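A toy illustration of the sub-sampling idea on a static image, using off-the-shelf total-variation denoising as a stand-in for the variational reconstruction; the real method operates on acoustic time series, and the sub-sampling ratio and TV weight here are arbitrary choices.

```python
import numpy as np
from skimage import data
from skimage.restoration import denoise_tv_chambolle

img = data.camera().astype(float) / 255.0
rng = np.random.default_rng(1)
mask = rng.random(img.shape) < 0.25          # keep ~25% of "sensor points"

recon = np.where(mask, img, 0.0)
for _ in range(25):                          # crude projected TV iterations
    recon = denoise_tv_chambolle(recon, weight=0.1)  # sparsity-style smoothing
    recon[mask] = img[mask]                  # enforce consistency with the data
```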
Zhang, Xin; Fu, Lingdi; Geng, Yuehua; Zhai, Xiang; Liu, Yanhua
2014-03-01
Here, we administered repeated-pulse transcranial magnetic stimulation to healthy people at the left Guangming (GB37) and a mock point, and calculated the sample entropy of electroencephalogram signals using nonlinear dynamics. Additionally, we compared the electroencephalogram sample entropy of signals in response to visual stimulation before, during, and after repeated-pulse transcranial magnetic stimulation at the Guangming. Results showed that electroencephalogram sample entropy at the left (F3) and right (FP2) frontal electrodes differed significantly depending on where the magnetic stimulation was administered. Additionally, compared with the mock point, electroencephalogram sample entropy was higher after stimulating the Guangming point. When visual stimulation at Guangming was given before repeated-pulse transcranial magnetic stimulation, significant differences in sample entropy were found at five electrodes (C3, Cz, C4, P3, T8) in the parietal cortex, the central gyrus, and the right temporal region compared with when it was given after repeated-pulse transcranial magnetic stimulation, indicating that repeated-pulse transcranial magnetic stimulation at Guangming can affect visual function. Analysis of the electroencephalogram revealed that when visual stimulation preceded repeated-pulse transcranial magnetic stimulation, sample entropy values were higher at the C3, C4, and P3 electrodes and lower at the Cz and T8 electrodes than when visual stimulation followed repeated-pulse transcranial magnetic stimulation. The findings indicate that repeated-pulse transcranial magnetic stimulation at the Guangming evokes different patterns of electroencephalogram signals than repeated-pulse transcranial magnetic stimulation at other nearby points on the body surface, and that repeated-pulse transcranial magnetic stimulation at the Guangming is associated with changes in the complexity of visually evoked electroencephalogram signals in parietal regions, the central gyrus, and temporal regions.
The relevance of time series in molecular ecology and conservation biology.
Habel, Jan C; Husemann, Martin; Finger, Aline; Danley, Patrick D; Zachos, Frank E
2014-05-01
The genetic structure of a species is shaped by the interaction of contemporary and historical factors. Analyses of individuals from the same population sampled at different points in time can help to disentangle the effects of current and historical forces and facilitate the understanding of the forces driving the differentiation of populations. The use of such time series allows for the exploration of changes at the population and intraspecific levels over time. Material from museum collections plays a key role in understanding and evaluating observed population structures, especially if large numbers of individuals have been sampled from the same locations at multiple time points. In these cases, changes in population structure can be assessed empirically. The development of new molecular markers relying on short DNA fragments (such as microsatellites or single nucleotide polymorphisms) allows for the analysis of long-preserved and partially degraded samples. Recently developed techniques to construct genome libraries with a reduced complexity and next generation sequencing and their associated analysis pipelines have the potential to facilitate marker development and genotyping in non-model species. In this review, we discuss the problems with sampling and available marker systems for historical specimens and demonstrate that temporal comparative studies are crucial for the estimation of important population genetic parameters and to measure empirically the effects of recent habitat alteration. While many of these analyses can be performed with samples taken at a single point in time, the measurements are more robust if multiple points in time are studied. Furthermore, examining the effects of habitat alteration, population declines, and population bottlenecks is only possible if samples before and after the respective events are included. © 2013 The Authors. Biological Reviews © 2013 Cambridge Philosophical Society.
Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper
1993-01-01
To compare efficacy of point count sampling in bottomland hardwood forests, duration of point count, number of point counts, number of visits to each point during a breeding season, and minimum sample size are examined.
Stege, Patricia W; Sombra, Lorena L; Messina, Germán A; Martinez, Luis D; Silva, María F
2009-05-01
Many aromatic compounds can be found in the environment as a result of anthropogenic activities and some of them are highly toxic. The need to determine low concentrations of pollutants requires analytical methods with high sensitivity, selectivity, and resolution for application to soil, sediment, water, and other environmental samples. Complex sample preparation involving analyte isolation and enrichment is generally necessary before the final analysis. The present paper outlines a novel, simple, low-cost, and environmentally friendly method for the simultaneous determination of p-nitrophenol (PNP), p-aminophenol (PAP), and hydroquinone (HQ) by micellar electrokinetic capillary chromatography after preconcentration by cloud point extraction. Enrichment factors of 180 to 200 were achieved. The limits of detection of the analytes for the preconcentration of 50-ml sample volume were 0.10 microg L(-1) for PNP, 0.20 microg L(-1) for PAP, and 0.16 microg L(-1) for HQ. The optimized procedure was applied to the determination of phenolic pollutants in natural waters from San Luis, Argentina.
NASA Astrophysics Data System (ADS)
Oriani, Fabio
2017-04-01
The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264-276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
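A minimal sketch of the Direct Sampling idea for a 1-D time series, following the cited papers' description: each new value is simulated by copying from the training series wherever a sufficiently similar neighbourhood is found. Parameter names and the random-scan acceptance rule are simplified assumptions, not the reference implementation.

```python
import numpy as np

def direct_sampling_1d(train, n_sim, n_neigh=5, threshold=0.05,
                       max_scan=2000, seed=0):
    rng = np.random.default_rng(seed)
    sim = list(train[:n_neigh])            # seed the simulation
    for _ in range(n_sim):
        pattern = np.array(sim[-n_neigh:])
        best_j, best_d = None, np.inf
        for _ in range(max_scan):          # random scan of the training data
            j = rng.integers(n_neigh, len(train) - 1)
            d = np.mean(np.abs(train[j - n_neigh:j] - pattern))
            if d < best_d:
                best_j, best_d = j, d
            if best_d <= threshold:        # accept the first good-enough match
                break
        sim.append(train[best_j])          # copy the value following the match
    return np.array(sim[n_neigh:])

train = np.sin(np.linspace(0, 60, 1000))   # toy training series
simulated = direct_sampling_1d(train, n_sim=200)
```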
A novel image registration approach via combining local features and geometric invariants
Lu, Yan; Gao, Kun; Zhang, Tinghua; Xu, Tingfa
2018-01-01
Image registration is widely used in many fields, but the adaptability of existing methods is limited. This work proposes a novel image registration method with high precision for various complex applications. In this framework, the registration problem is divided into two stages. First, we detect and describe scale-invariant feature points using a modified Oriented FAST and Rotated BRIEF (ORB) algorithm, and a simple method to increase the performance of feature point matching is proposed. Second, we develop a new local constraint of rough selection according to the feature distances. Evidence shows that the existing matching techniques based on image features are insufficient for images with sparse image details. Then, we propose a novel matching algorithm via geometric constraints, and establish local feature descriptions based on geometric invariances for the selected feature points. Subsequently, a new cost function is constructed to evaluate the similarities between points and obtain exact matching pairs. Finally, we employ the progressive sample consensus method to remove wrong matches and calculate the space transform parameters. Experimental results on various complex image datasets verify that the proposed method is more robust and significantly reduces the rate of false matches while retaining more high-quality feature points. PMID:29293595
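A hedged OpenCV sketch of the feature-matching stage described above; OpenCV's RANSAC (or cv2.USAC_PROSAC in recent builds) stands in for the progressive sample consensus step, and the file names are hypothetical.

```python
import cv2
import numpy as np

img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
img2 = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
        if m.distance < 0.75 * n.distance]                 # Lowe ratio test

src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
# RANSAC used here as a stand-in for the paper's progressive sample consensus
H, inliers = cv2.findHomography(dst, src, cv2.RANSAC, 3.0)
registered = cv2.warpPerspective(img2, H, img1.shape[::-1])
```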
Statistical aspects of point count sampling
Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
Current trends in sample preparation for cosmetic analysis.
Zhong, Zhixiong; Li, Gongke
2017-01-01
The widespread applications of cosmetics in modern life make their analysis particularly important from a safety point of view. There is a wide variety of restricted ingredients and prohibited substances that primarily influence the safety of cosmetics. Sample preparation for cosmetic analysis is a crucial step as the complex matrices may seriously interfere with the determination of target analytes. In this review, some new developments (2010-2016) in sample preparation techniques for cosmetic analysis, including liquid-phase microextraction, solid-phase microextraction, matrix solid-phase dispersion, pressurized liquid extraction, cloud point extraction, ultrasound-assisted extraction, and microwave digestion, are presented. Furthermore, the research and progress in sample preparation techniques and their applications in the separation and purification of allowed ingredients and prohibited substances are reviewed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Schünemann, Adriano Luis; Inácio Fernandes Filho, Elpídio; Rocha Francelino, Marcio; Rodrigues Santos, Gérson; Thomazini, Andre; Batista Pereira, Antônio; Gonçalves Reynaud Schaefer, Carlos Ernesto
2017-04-01
Values of environmental variables at non-sampled sites can be estimated from a minimum data set through interpolation techniques. Kriging and the Random Forest algorithm are examples of predictors with this aim. The objective of this work was to compare methods of soil attribute spatialization in a recently deglaciated environment with complex landforms. Prediction of the selected soil attributes (potassium, calcium and magnesium) from ice-free areas was tested using morphometric covariables, and using geostatistical models without these covariables. For this, 106 soil samples were collected at 0-10 cm depth in Keller Peninsula, King George Island, Maritime Antarctica. Soil chemical analysis was performed by the gravimetric method, determining values of potassium, calcium and magnesium for each sampled point. Digital terrain models (DTMs) were obtained by using a Terrestrial Laser Scanner. DTMs were generated from a cloud of points with spatial resolutions of 1, 5, 10, 20 and 30 m, and 40 morphometric covariates were derived from them. Simple Kriging was performed using the R software. The same data set, coupled with the morphometric covariates, was used to predict values of the studied attributes at non-sampled sites with the Random Forest interpolator. Little difference was observed between the maps generated by the Simple Kriging and Random Forest interpolators, and DTMs with better spatial resolution did not improve the quality of soil attribute prediction. Results revealed that Simple Kriging can be used as an interpolator when morphometric covariates are not available, with little impact on quality. Further work on prediction techniques for soil chemical attributes is needed, especially in periglacial areas with complex landforms.
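A sketch of the covariate-based prediction compared in the study, assuming scikit-learn's RandomForestRegressor and synthetic stand-in data; the column layout and hyperparameters are illustrative, not the authors' settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# hypothetical training data: morphometric covariates (e.g. elevation,
# slope, curvature) at the 106 sampled points
X_train = rng.normal(size=(106, 5))
y_train = rng.gamma(2.0, 1.0, size=106)    # stand-in for potassium values

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)

X_grid = rng.normal(size=(10000, 5))       # covariates at unsampled grid cells
soil_map = rf.predict(X_grid)              # predicted attribute surface
```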
Rubínová, Eva; Nikolai, Tomáš; Marková, Hana; Siffelová, Kamila; Laczó, Jan; Hort, Jakub; Vyhnálek, Martin
2014-01-01
The Clock Drawing Test is a frequently used cognitive screening test with several scoring systems in elderly populations. We compare simple and complex scoring systems and evaluate the usefulness of the combination of the Clock Drawing Test with the Mini-Mental State Examination to detect patients with mild cognitive impairment. Patients with amnestic mild cognitive impairment (n = 48) and age- and education-matched controls (n = 48) underwent neuropsychological examinations, including the Clock Drawing Test and the Mini-Mental State Examination. Clock drawings were scored by three blinded raters using one simple (6-point scale) and two complex (17- and 18-point scales) systems. The sensitivity and specificity of these scoring systems used alone and in combination with the Mini-Mental State Examination were determined. Complex scoring systems, but not the simple scoring system, were significant predictors of the amnestic mild cognitive impairment diagnosis in logistic regression analysis. At equal levels of sensitivity (87.5%), the Mini-Mental State Examination showed higher specificity (31.3%, compared with 12.5% for the 17-point Clock Drawing Test scoring scale). The combination of Clock Drawing Test and Mini-Mental State Examination scores increased the area under the curve (0.72; p < .001) and increased specificity (43.8%), but did not increase sensitivity, which remained high (85.4%). A simple 6-point scoring system for the Clock Drawing Test did not differentiate between healthy elderly and patients with amnestic mild cognitive impairment in our sample. Complex scoring systems were slightly more efficient, yet still were characterized by high rates of false-positive results. We found psychometric improvement using combined scores from the Mini-Mental State Examination and the Clock Drawing Test when complex scoring systems were used. The results of this study support the benefit of using combined scores from simple methods.
Composite analysis for Escherichia coli at coastal beaches
Bertke, E.E.
2007-01-01
At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often used to compare to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield equally accurate data as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in a significant cost savings.
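The paired comparison reported above can be sketched as follows, with synthetic log-normal concentrations standing in for the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
avg_of_points = rng.lognormal(mean=4.0, sigma=0.8, size=116)
composite = avg_of_points * rng.lognormal(mean=0.0, sigma=0.1, size=116)

# paired t-test on log10 E. coli concentrations, as in the comparison above
t, p = stats.ttest_rel(np.log10(composite), np.log10(avg_of_points))
print(f"t = {t:.2f}, p = {p:.4f}")   # the study reported t = 1.59, p = 0.1139
```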
Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger
2013-01-01
A common approach for high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum-order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor is used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and fixed-point number representation are evaluated in detail on a variety of processing platforms, enabling on-board processing on wearable sensor platforms.
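A minimal illustration of the fixed-point arithmetic the paper explores, here a Q15 multiply such as would replace a floating-point Kalman gain update on an integer-only microcontroller; the bit-width choice is an example, not the paper's result.

```python
Q = 15                      # fractional bits (Q15 format)

def to_q15(x: float) -> int:
    return int(round(x * (1 << Q)))

def q15_mul(a: int, b: int) -> int:
    return (a * b) >> Q     # wide product, renormalized back to Q15

gain = to_q15(0.3048)       # hypothetical Kalman gain
innov = to_q15(-0.125)      # hypothetical innovation
update = q15_mul(gain, innov)
print(update / (1 << Q))    # compare with 0.3048 * -0.125 = -0.0381
```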
Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud
NASA Astrophysics Data System (ADS)
Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.
2018-04-01
Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has been gradually applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have different characteristics (such as point density, distribution and complexity). Filtering algorithms designed for airborne LiDAR data have been applied directly to mobile LiDAR point clouds, but did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm to handle mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of this algorithm, which respectively yields total errors of 0.44%, 0.77% and 1.20%. Additionally, a large-area dataset is also tested to further validate the effectiveness of this algorithm, and results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.
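CSF drops a simulated cloth onto the inverted point cloud and classifies points by their distance to the settled cloth. The sketch below is a much cruder grid-minimum stand-in for that classification step, not the CSF algorithm itself; the cell size and height threshold are assumptions.

```python
import numpy as np

def grid_min_ground(xyz, cell=1.0, dz=0.25):
    """Keep points close to the lowest return in their grid cell
    (a crude stand-in for CSF's settled-cloth surface)."""
    ij = np.floor((xyz[:, :2] - xyz[:, :2].min(axis=0)) / cell).astype(int)
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]   # flatten cell index
    zmin = np.full(keys.max() + 1, np.inf)
    np.minimum.at(zmin, keys, xyz[:, 2])                # per-cell minimum z
    return xyz[:, 2] <= zmin[keys] + dz                 # True = ground

# toy scene: near-flat ground with 100 points raised 2 m (e.g. a roof)
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, (1000, 3))
pts[:, 2] *= 0.02
pts[:100, 2] += 2.0
print(grid_min_ground(pts).sum(), "ground points")      # ~900
```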
Goebel, L; Zurakowski, D; Müller, A; Pape, D; Cucchiarini, M; Madry, H
2014-10-01
To compare the 2D and 3D MOCART systems obtained with 9.4 T high-field magnetic resonance imaging (MRI) for the ex vivo analysis of osteochondral repair in a translational model and to correlate the data with semiquantitative histological analysis. Osteochondral samples representing all levels of repair (sheep medial femoral condyles; n = 38) were scanned in a 9.4 T high-field MRI. The 2D and adapted 3D MOCART systems were used for grading after point allocation to each category. Each score was correlated with corresponding reconstructions between both MOCART systems. Data were next correlated with corresponding categories of an elementary (Wakitani) and a complex (Sellers) histological scoring system as gold standards. Correlations between most 2D and 3D MOCART score categories were high, while mean total point values of the 3D MOCART scores tended to be 15.8-16.1 points higher than the 2D MOCART scores based on a Bland-Altman analysis. "Defect fill" and "total points" of both MOCART scores correlated with corresponding categories of the Wakitani and Sellers scores (all P ≤ 0.05). "Subchondral bone plate" also correlated between the 3D MOCART and Sellers scores (P < 0.001). Most categories of the 2D and 3D MOCART systems correlate, while total scores were generally higher using the 3D MOCART system. The structural categories "total points" and "defect fill" can reliably be assessed by 9.4 T MRI evaluation using either system, and "subchondral bone plate" using the 3D MOCART score. High-field MRI is valuable to objectively evaluate osteochondral repair in translational settings. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.
Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun
2016-06-17
Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, the isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between the trajectory data and the road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment and dimensionality feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data.
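As a toy illustration of the intensity smoothing and edge pairing stages described above, the sketch below median-filters one scan line's intensity profile (a fixed window standing in for the paper's dynamic window) and pairs rising and falling intensity edges into candidate marking segments. The window size and jump threshold are assumptions.

```python
import numpy as np

def marking_candidates(intensity, win=7, jump=12.0):
    """Median-smooth a scan line's intensities, then pair rising/falling
    edges into candidate road-marking runs (hypothetical EDEC stand-in)."""
    half = win // 2
    padded = np.pad(intensity.astype(float), half, mode="edge")
    smooth = np.array([np.median(padded[i:i + win])
                       for i in range(len(intensity))])
    d = np.diff(smooth)
    rises, falls = np.where(d > jump)[0], np.where(d < -jump)[0]
    mask = np.zeros(len(intensity), dtype=bool)
    for r in rises:                      # pair each rise with the next fall
        f = falls[falls > r]
        if f.size:
            mask[r + 1:f[0] + 1] = True  # points between the two edges
    return mask
```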
Determination of total selenium in food samples by d-CPE and HG-AFS.
Wang, Mei; Zhong, Yizhou; Qin, Jinpeng; Zhang, Zehua; Li, Shan; Yang, Bingyi
2017-07-15
A dual-cloud point extraction (d-CPE) procedure was developed for the simultaneous preconcentration and determination of trace level Se in food samples by hydride generation-atomic fluorescence spectrometry (HG-AFS). The Se(IV) was complexed with ammonium pyrrolidinedithiocarbamate (APDC) in a Triton X-114 surfactant-rich phase, which was then treated with a mixture of 16% (v/v) HCl and 20% (v/v) H2O2. This converted the Se(IV)-APDC into free Se(IV), which was back extracted into an aqueous phase at the second cloud point extraction stage. This aqueous phase was analyzed directly by HG-AFS. Optimization of the experimental conditions gave a limit of detection of 0.023 μg/L with an enhancement factor of 11.8 when 50 mL of sample solution was preconcentrated to 3 mL. The relative standard deviation was 4.04% (c = 6.0 μg/L, n = 10). The proposed method was applied to determine the Se contents in twelve food samples with satisfactory recoveries of 95.6-105.2%. Copyright © 2016 Elsevier Ltd. All rights reserved.
Multibeam 3D Underwater SLAM with Probabilistic Registration.
Palomer, Albert; Ridao, Pere; Ribas, David
2016-04-20
This paper describes a pose-based underwater 3D Simultaneous Localization and Mapping (SLAM) approach using a multibeam echosounder to produce highly consistent underwater maps. The proposed algorithm compounds swath profiles of the seafloor with dead reckoning localization to build surface patches (i.e., point clouds). An Iterative Closest Point (ICP) algorithm with a probabilistic implementation is then used to register the point clouds, taking into account their uncertainties. The registration process is divided into two steps: (1) point-to-point association for coarse registration and (2) point-to-plane association for fine registration. The point clouds of the surfaces to be registered are sub-sampled in order to decrease both the computation time and the potential of falling into local minima during the registration. In addition, a heuristic is used to decrease the complexity of the association step of the ICP from O(n²) to O(n). The performance of the SLAM framework is tested using two real-world datasets: first, a 2.5D bathymetric dataset obtained with the usual down-looking multibeam sonar configuration, and second, a full 3D underwater dataset acquired with a multibeam sonar mounted on a pan and tilt unit.
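For readers unfamiliar with the registration loop, the following is a plain rigid point-to-point ICP baseline with SVD alignment and a k-d tree for association; the paper's probabilistic weighting, point-to-plane stage and O(n) association heuristic are deliberately omitted, so this is an illustrative baseline only.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, tgt, iters=20):
    """Rigid ICP: repeatedly match nearest neighbours and solve the
    best-fit rotation/translation in closed form (Kabsch/SVD)."""
    src = src.copy()
    tree = cKDTree(tgt)                       # O(n log n) association
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)              # nearest-neighbour matches
        p = src - src.mean(axis=0)
        q = tgt[idx] - tgt[idx].mean(axis=0)
        U, _, Vt = np.linalg.svd(p.T @ q)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt[idx].mean(axis=0) - R @ src.mean(axis=0)
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

rng = np.random.default_rng(0)
tgt = rng.uniform(size=(200, 3))
src = tgt @ np.array([[0.995, -0.0998, 0],    # small rotation + offset
                      [0.0998, 0.995, 0],
                      [0, 0, 1]]) + 0.01
R, t = icp_point_to_point(src, tgt)           # roughly undoes the perturbation
```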
Ontsira Ngoyi, E N; Obengui; Taty Taty, R; Koumba, E L; Ngala, P; Ossibi Ibara, R B
2014-12-01
The aim of the present work was to describe the mycobacterial species isolated in the antituberculosis center of Pointe-Noire city in Congo Brazzaville. It was a descriptive transversal study, conducted between September 2008 and April 2009 (7 months). A simple random sample was established from patients who came to the antituberculosis center of Pointe-Noire city (reference center for the diagnosis and treatment of tuberculosis). For those patients consulting with symptoms suggesting pulmonary tuberculosis, sputum sampling in three sessions was conducted. Ziehl-Neelsen and auramine staining techniques were performed in Pointe-Noire. Culture, molecular hybridization and antibiotic susceptibility testing to first-line antituberculosis drugs (isoniazid, rifampicin, ethambutol, pyrazinamide or streptomycin) using the agar diffusion method were performed in the Cerba Pasteur laboratory in France. Of 77 patients, 24 sputum samples (31.20%) were positive on microscopic examination and 45 (58.44%) on culture and identification by molecular hybridization. The Mycobacterium tuberculosis complex species isolated were M. tuberculosis in 31 cases (68.9%) and M. africanum in 3 cases (6.67%). Non-tuberculous mycobacteria (NTM) were isolated, in association or not with M. tuberculosis, in 9 cases (20%), and the most common species was M. intracellulare. Among the M. tuberculosis isolates, 7 strains (41.20%) were sensitive to the first-line antituberculosis drugs, 8 (47%) showed monoresistance, and 2 (12%) were multidrug-resistant (MDR) to both isoniazid and rifampicin. This study showed the importance of M. tuberculosis complex and non-tuberculous mycobacterial species in pulmonary tuberculosis. The data on resistance can help physicians in the treatment of pulmonary tuberculosis. Another study with a larger population is required to confirm these data.
Scalable boson sampling with time-bin encoding using a loop-based architecture.
Motes, Keith R; Gilchrist, Alexei; Dowling, Jonathan P; Rohde, Peter P
2014-09-19
We present an architecture for arbitrarily scalable boson sampling using two nested fiber loops. The architecture has fixed experimental complexity, irrespective of the size of the desired interferometer, whose scale is limited only by fiber and switch loss rates. The architecture employs time-bin encoding, whereby the incident photons form a pulse train, which enters the loops. Dynamically controlled loop coupling ratios allow the construction of the arbitrary linear optics interferometers required for boson sampling. The architecture employs only a single point of interference and may thus be easier to stabilize than other approaches. The scheme has polynomial complexity and could be realized using demonstrated present-day technologies.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
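The MCMC idea above can be caricatured with a random-walk Metropolis sampler whose stationary density is proportional to a design-efficiency function of the candidate sampling time; a central interval of the draws then plays the role of the window. The efficiency function, step size and interval level below are invented for the illustration, not taken from the paper.

```python
import numpy as np

def mh_window(eff, t0, n=20000, step=0.3, seed=1):
    """Random-walk Metropolis over one sampling time; stationary density
    is proportional to eff(t). Returns a 90% central interval of draws."""
    rng = np.random.default_rng(seed)
    t, draws = t0, []
    for _ in range(n):
        prop = t + step * rng.standard_normal()
        if rng.random() < min(1.0, eff(prop) / eff(t)):
            t = prop
        draws.append(t)
    return np.percentile(draws, [5, 95])

# hypothetical efficiency surface peaked at the optimal time t = 2 h
eff = lambda t: np.exp(-0.5 * ((t - 2.0) / 0.5) ** 2) if t > 0 else 1e-12
print(mh_window(eff, t0=2.0))        # ~[1.2, 2.8] h sampling window
```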
NASA Astrophysics Data System (ADS)
Walicka, A.; Jóźków, G.; Borkowski, A.
2018-05-01
Fluvial transport is an important aspect of hydrological and geomorphologic studies. Knowledge about the movement parameters of different-size fractions is essential in many applications, such as the exploration of watercourse changes, the calculation of river bed parameters or the investigation of the frequency and nature of weather events. Traditional techniques used for fluvial transport investigations do not provide any information about the long-term horizontal movement of the rocks. This information can be gained by means of terrestrial laser scanning (TLS). However, this is a complex issue consisting of several stages of data processing. In this study a methodology for segmenting individual rocks from a TLS point cloud is proposed, which is the first step of a semi-automatic algorithm for movement detection of individual rocks. The proposed algorithm is executed in two steps. Firstly, the point cloud is classified into rocks or background using only geometrical information. Secondly, the DBSCAN algorithm is executed iteratively on points classified as rocks until only one stone is detected in each segment. The number of rocks in each segment is determined using principal component analysis (PCA) and a simple derivative method for peak detection. As a result, several segments that correspond to individual rocks are formed. Numerical tests were executed on two test samples. The results of the semi-automatic segmentation were compared to results acquired by manual segmentation. The proposed methodology successfully segmented 76% and 72% of the rocks in test sample 1 and test sample 2, respectively.
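The iterative DBSCAN stage lends itself to a compact sketch: cluster, test each segment for the single-rock condition, and re-cluster oversized segments with a tighter radius. The single-rock test below is a simple size check, a hypothetical stand-in for the paper's PCA and derivative peak-detection criterion, and all parameter values are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_rocks(points, eps=0.05, min_samples=10, max_iter=5):
    """Iteratively re-run DBSCAN with a shrinking eps until each
    segment is judged to hold a single rock."""
    segments, queue = [], [points]
    for _ in range(max_iter):
        next_queue = []
        for pts in queue:
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
            for lab in set(labels) - {-1}:          # -1 marks noise points
                seg = pts[labels == lab]
                extent = (seg.max(axis=0) - seg.min(axis=0)).max()
                # assume single rocks are under 0.3 m across
                (segments if extent < 0.3 else next_queue).append(seg)
        queue = next_queue
        if not queue:
            break
        eps *= 0.5                                  # tighten and re-split
    return segments + queue      # leftovers returned unsplit after max_iter
```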
NASA Astrophysics Data System (ADS)
Zeng, Yayun; Wang, Jun; Xu, Kaixuan
2017-04-01
A new financial agent-based time series model is developed and investigated using a multiscale-continuum percolation system, which can be viewed as an extended version of the continuum percolation system. In this financial model, for different parameters of proportion and density, two Poisson point processes (where the radii of points represent the ability to receive or transmit information among investors) are applied to model a random stock price process, in an attempt to investigate the fluctuation dynamics of the financial market. To validate its effectiveness and rationality, we compare the statistical behaviors and the multifractal behaviors of the simulated data derived from the proposed model with those of real stock markets. Further, multiscale sample entropy analysis is employed to study the complexity of the returns, and cross-sample entropy analysis is applied to measure the degree of asynchrony of return autocorrelation time series. The empirical results indicate that the proposed financial model can simulate and reproduce some significant characteristics of real stock markets to a certain extent.
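Sample entropy, used above to quantify the complexity of the returns, is compact enough to sketch: count template matches of length m and m + 1 within a tolerance and take the negative log of their ratio. The variant below (Chebyshev distance, tolerance as a fraction of the standard deviation) follows a common convention and is an assumption, not the paper's exact implementation.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): regularity of a series; higher = less regular."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=-1)
        return ((d <= tol).sum() - len(emb)) / 2    # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(500)))     # white noise: high value
```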
Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2018-06-04
The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits the application of this beamformer in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L³) to O(L²). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
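The inversion-free idea can be sketched as projected gradient descent on the constrained problem min w^H R w subject to w^H a = 1, warm-started from the neighbouring point's weights as the abstract describes. The specific update rule, step size and iteration count below are assumptions, not the paper's algorithm.

```python
import numpy as np

def mv_weights(R, a, w0=None, mu=0.05, iters=60):
    """Projected-gradient sketch of minimum-variance weights, avoiding
    the O(L^3) matrix inverse; w0 warm-starts from a neighbouring point."""
    aHa = np.vdot(a, a)
    w = (a / aHa) if w0 is None else w0.astype(complex).copy()
    for _ in range(iters):
        w = w - mu * (R @ w)                     # descend on w^H R w
        w = w + a * (1 - np.vdot(a, w)) / aHa    # re-project onto w^H a = 1
    return w

L = 16
a = np.ones(L, dtype=complex)                    # broadside steering vector
R = (np.eye(L) + 0.1 * np.ones((L, L))).astype(complex)
w = mv_weights(R, a)
print(abs(np.vdot(a, w)))                        # ~1.0: constraint holds
```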
Gasco, Jaime; Braun, Jonathan D; McCutcheon, Ian E; Black, Peter M
2011-01-01
To objectively compare the complexity and diversity of the certification process in neurological surgery in member societies of the World Federation of Neurosurgical Societies. This study centers on continental Asia. We provide here an analysis based on the responses provided to a 13-item survey. The data received were analyzed, and three Regional Complexity Scores (RCS) were designed. To compare national board experience, eligibility requirements for access to the certification process, and the obligatory nature of the examinations, an RCS-Organizational score was created (20 points maximum). To analyze the complexity of the examination, an RCS-Components score was designed (20 points maximum). The sum of both is presented as a Global RCS score. Only those countries that responded to the survey and presented nationwide homogeneity in the conduct of neurosurgery examinations could be included within the scoring system. In addition, a descriptive summary of the certification process per responding society is also provided. On the basis of the data provided by our RCS system, the highest global RCS was achieved by South Korea and Malaysia (21/40 points), followed by the joint examination of Singapore and Hong Kong (FRCS-Ed) (20/40 points), Japan (17/40 points), the Philippines (15/40 points), and Taiwan (13/40 points). The experience of these leading countries should be of value to all countries within Asia. Copyright © 2011 Elsevier Inc. All rights reserved.
Pendleton, G.W.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation in detection probabilities and lack of independence among sample points can bias estimates and measures of precision. All of these factors should be considered when using point count methods.
A temperature characteristic research and compensation design for micro-machined gyroscope
NASA Astrophysics Data System (ADS)
Fu, Qiang; di, Xin-Peng; Chen, Wei-Ping; Yin, Liang; Liu, Xiao-Wei
2017-02-01
Stability over the full temperature range is the most important requirement for MEMS angular velocity sensors based on the principle of capacitive detection. The correlation between the driving force and the zero-point of the sensor is summarized from the temperature characteristics of the air damping and the resonant frequency of the sensor header. A constant-transconductance, high-linearity amplifier is designed to realize a low phase-drift and low amplitude-drift interface circuit over the full temperature range. The chip is fabricated in a standard 0.5 μm CMOS process. Compensation by the driving force is adopted for the zero-point drift caused by the stiffness of the physical construction and the air damping. Moreover, the driving force can be obtained from the drive circuit, avoiding complex sampling. The test results show that the zero-point drift is lower than 30°/h (1-sigma) over the temperature range from -40 °C to 60 °C after third-order compensation by the driving force.
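The third-order compensation described above amounts to fitting a cubic zero-point-drift-versus-temperature model during calibration and subtracting it at run time. The calibration numbers below are invented for the illustration.

```python
import numpy as np

# Hypothetical calibration: zero-rate output (deg/h) vs temperature (°C).
temps = np.array([-40, -20, 0, 20, 40, 60], dtype=float)
drift = np.array([95.0, 52.0, 18.0, 0.0, -21.0, -55.0])

coeffs = np.polyfit(temps, drift, deg=3)     # third-order model, as in text

def compensate(raw_rate, temperature):
    """Subtract the modelled zero-point drift at the current temperature."""
    return raw_rate - np.polyval(coeffs, temperature)

print(compensate(100.0, -40.0))              # roughly 100 - 95 = 5 deg/h
```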
A Modular Low-Complexity ECG Delineation Algorithm for Real-Time Embedded Systems.
Bote, Jose Manuel; Recas, Joaquin; Rincon, Francisco; Atienza, David; Hermida, Roman
2018-03-01
This work presents a new modular and low-complexity algorithm for the delineation of the different ECG waves (QRS, P and T peaks, onsets, and ends). Involving a reduced number of operations per second and having a small memory footprint, this algorithm is intended to perform real-time delineation on resource-constrained embedded systems. The modular design allows the algorithm to automatically adjust the delineation quality at runtime over a wide range of modes and sampling rates, from an ultralow-power mode when no arrhythmia is detected, in which the ECG is sampled at low frequency, to a complete high-accuracy delineation mode in the case of arrhythmia, in which the ECG is sampled at high frequency and all the ECG fiducial points are detected. The delineation algorithm has been adjusted using the QT database, providing very high sensitivity and positive predictivity, and validated with the MIT database. The errors in the delineation of all the fiducial points are below the tolerances given by the Common Standards for Electrocardiography Committee in the high-accuracy mode, except for the P wave onset, for which the algorithm exceeds the agreed tolerances by only a fraction of the sample duration. The computational load for the ultralow-power 8-MHz TI MSP430 series microcontroller ranges from 0.2% to 8.5% according to the mode used.
How Do Academic Disciplines Use PowerPoint?
ERIC Educational Resources Information Center
Garrett, Nathan
2016-01-01
How do academic disciplines use PowerPoint? This project analyzed PowerPoint files created by an academic publisher to supplement textbooks. An automated analysis of 30,263 files revealed clear differences by disciplines. Single-paradigm "hard" disciplines used less complex writing but had more words than multi-paradigm "soft"…
Point of Injury Sampling Technology for Battlefield Molecular Diagnostics
2011-11-14
Injury" Sampling Technology for Battlefield Molecular Diagnostics November 14, 2011 Sponsored by Defense Advanced Research Projects Agency (DOD...Date of Contract: April 25, 2011 Short Title of Work: "Point of Injury" Sampling Technology for Battlefield Molecular Diagnostics " Contract...PHASE I FINAL REPORT: Point of Injury, Sampling Technology for Battlefield Molecular Diagnostics . W31P4Q-11-C-0222 (UNCLASSIFIED) P.I: Bernardo
Combining SVM and flame radiation to forecast BOF end-point
NASA Astrophysics Data System (ADS)
Wen, Hongyuan; Zhao, Qi; Xu, Lingfei; Zhou, Munchun; Chen, Yanru
2009-05-01
Because of the complex reactions occurring in the Basic Oxygen Furnace (BOF) during steelmaking, the main end-point control methods face insurmountable difficulties. Aiming at these problems, a support vector machine (SVM) method for forecasting the BOF steelmaking end-point is presented based on flame radiation information. The basis is that the furnace flame reflects the carbon-oxygen reaction, which is the major reaction in the steelmaking furnace. The system can acquire spectrum and image data quickly in the adverse steelmaking environment. The structure of the SVM is similar to that of a multilayer feed-forward neural network, but the SVM model overcomes the inherent defects of the latter. The model is trained and used for forecasting with appropriate variables derived from light and image characteristic information. The model training process follows the structural risk minimization (SRM) criterion, and the design parameters can be adjusted automatically according to the sampled data during training. Experimental results indicate that the prediction precision of the SVM model and the execution time both meet the requirements of online end-point judgment.
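A minimal sketch of the forecasting idea follows, assuming hypothetical flame features; the real model uses spectral and image descriptors of the furnace flame, and the feature construction and labels below are simulated, not the paper's data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# rows = heats, columns = hypothetical flame descriptors (radiance bands,
# flicker statistics, ...); label 1 = end-point reached
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.3 * rng.normal(size=200) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:150], y[:150])
print("hold-out accuracy:", clf.score(X[150:], y[150:]))
```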
Mueller, Silke C; Drewelow, Bernd
2013-05-01
The area under the concentration-time curve (AUC) after oral midazolam administration is commonly used for cytochrome P450 (CYP) 3A phenotyping studies. The aim of this investigation was to evaluate a limited sampling strategy for the prediction of the AUC with oral midazolam. A total of 288 concentration-time profiles from 123 healthy volunteers who participated in four previously performed drug interaction studies with intense sampling after a single oral dose of 7.5 mg midazolam were available for evaluation. Of these, 45 profiles served for model building, which was performed by stepwise multiple linear regression, and the remaining 243 datasets served for validation. Mean prediction error (MPE), mean absolute error (MAE) and root mean squared error (RMSE) were calculated to determine bias and precision. The one- to four-sampling point models with the best coefficients of correlation were the one-sampling point model (8 h; r² = 0.84), the two-sampling point model (0.5 and 8 h; r² = 0.93), the three-sampling point model (0.5, 2, and 8 h; r² = 0.96), and the four-sampling point model (0.5, 1, 2, and 8 h; r² = 0.97). However, the one- and two-sampling point models were unable to predict the midazolam AUC due to unacceptable bias and precision. Only the four-sampling point model predicted the very low and very high midazolam AUCs of the validation dataset with acceptable precision and bias. The four-sampling point model was also able to predict the geometric mean ratio of the treatment phase over the baseline (with 90% confidence interval) results of three drug interaction studies in the categories of strong, moderate, and mild induction, as well as no interaction. A four-sampling point limited sampling strategy to predict the oral midazolam AUC for CYP3A phenotyping is proposed. The one-, two- and three-sampling point models were not able to predict the midazolam AUC accurately.
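A limited sampling model of this form is just a multiple linear regression from a few concentrations to the AUC. The sketch below mirrors the four-point model's structure on simulated data; the coefficients and concentrations are invented, not the study's values.

```python
import numpy as np

# 45 fake training profiles: concentrations at 0.5, 1, 2 and 8 h -> AUC
rng = np.random.default_rng(2)
C = rng.lognormal(mean=1.0, sigma=0.4, size=(45, 4))
auc = C @ np.array([0.8, 1.5, 2.5, 6.0]) + rng.normal(0, 1, size=45)

X = np.column_stack([np.ones(len(C)), C])
beta, *_ = np.linalg.lstsq(X, auc, rcond=None)   # ordinary least squares

new_profile = np.array([2.1, 2.9, 2.4, 0.9])     # hypothetical new subject
print("predicted AUC:", beta[0] + beta[1:] @ new_profile)
```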
NASA Astrophysics Data System (ADS)
Zeng, Zhenxiang; Zheng, Huadong; Yu, Yingjie; Asundi, Anand K.
2017-06-01
A method for calculating off-axis phase-only holograms of a three-dimensional (3D) object using an accelerated point-based Fresnel diffraction algorithm (PB-FDA) is proposed. The complex amplitudes in the hologram plane of the object points on the z-axis, called the principal complex amplitudes (PCAs), are calculated using the Fresnel diffraction formula. The complex amplitudes of off-axis object points at the same depth can then be obtained by 2D shifting of the PCAs. In order to improve the calculation speed of the PB-FDA, a convolution operation based on the fast Fourier transform (FFT) is used to calculate the holograms rather than point-by-point spatial 2D shifting of the PCAs. The shortest recording distance of the PB-FDA is analyzed in order to remove the influence of multiple-order images in reconstructed images. The optimal recording distance of the PB-FDA is also analyzed to improve the quality of reconstructed images. Numerical reconstructions and optical reconstructions with a phase-only spatial light modulator (SLM) show that holographic 3D display is feasible with the proposed algorithm. The proposed PB-FDA can also avoid the influence of the zero-order image introduced by the SLM in optically reconstructed images.
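The FFT-based convolution at the heart of the speed-up is the standard transfer-function form of Fresnel propagation. The sketch below is a generic single-plane propagation, not the paper's full PB-FDA pipeline; grid size, pixel pitch and wavelength are assumptions.

```python
import numpy as np

def fresnel_fft(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the Fresnel
    transfer function, evaluated as an FFT-based convolution."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * 2 * np.pi * z / wavelength) * \
        np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# point source at the grid centre propagated 0.2 m at 532 nm, 8 um pitch
u0 = np.zeros((512, 512), dtype=complex)
u0[256, 256] = 1.0
u1 = fresnel_fft(u0, 532e-9, 8e-6, 0.2)
phase_hologram = np.angle(u1)       # keep phase only, as for a phase SLM
```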
Definition of NASTRAN sets by use of parametric geometry
NASA Technical Reports Server (NTRS)
Baughn, Terry V.; Tiv, Mehran
1989-01-01
Many finite element preprocessors describe finite element model geometry with points, lines, surfaces and volumes. One method for describing these basic geometric entities is by use of parametric cubics, which are useful for representing complex shapes. The lines, surfaces and volumes may be discretized for follow-on finite element analysis. The ability to limit or selectively recover results from the finite element model is extremely important to the analyst. Equally important is the ability to easily apply boundary conditions. Although graphical preprocessors have made these tasks easier, model complexity may not lend itself to easy identification of a group of grid points desired for data recovery or application of constraints. A methodology is presented which makes use of the assignment of grid point locations in parametric coordinates. The parametric coordinates provide a convenient ordering of the grid point locations and a method for retrieving the grid point IDs from the parent geometry. The selected grid points may then be used for the generation of the appropriate set and constraint cards.
’Point of Injury’ Sampling Technology for Battlefield Molecular Diagnostics
2012-03-17
Injury" Sampling Technology for Battlefield Molecular Diagnostics March 17,2012 Sponsored by Defense Advanced Research Projects Agency (DOD) Defense...Contract: April 25, 2011 Short Title of Work: "Point of Injury" Sampling Technology for Battlefield Molecular Diagnostics " Contract Expiration Date...SBIR PHASE I OPTION REPORT: Point of Injury, Sampling Technology for Battlefield Molecular Diagnostics . W31P4Q-1 l-C-0222 (UNCLASSIFIED) P.I
Boiling point measurement of a small amount of brake fluid by thermocouple and its application.
Mogami, Kazunari
2002-09-01
This study describes a new method for measuring the boiling point of a small amount of brake fluid using a thermocouple and a pear-shaped flask. The boiling point of brake fluid was directly measured with an accuracy within approximately 3 °C of that determined by the Japanese Industrial Standards method, even though the sample volume was only a few milliliters. The method was applied to measure the boiling points of brake fluid samples from automobiles. It was clear that the boiling points of brake fluid from some automobiles dropped to approximately 140 °C from about 230 °C, and that one of the samples from the wheel cylinder was approximately 45 °C lower than brake fluid from the reserve tank. It is essential to take samples from the wheel cylinder, as this is most easily subjected to heating.
Toogood, Helen S; Leys, David; Scrutton, Nigel S
2007-11-01
Electron transferring flavoproteins (ETFs) are soluble heterodimeric FAD-containing proteins that function primarily as soluble electron carriers between various flavoprotein dehydrogenases. ETF is positioned at a key metabolic branch point, responsible for transferring electrons from up to 10 primary dehydrogenases to the membrane-bound respiratory chain. Clinical mutations of ETF result in the often fatal disease glutaric aciduria type II. Structural and biophysical studies of ETF in complex with partner proteins have shown that ETF partitions the functions of partner binding and electron transfer between (a) a 'recognition loop', which acts as a static anchor at the ETF-partner interface, and (b) a highly mobile redox-active FAD domain. Together, this enables the FAD domain of ETF to sample a range of conformations, some compatible with fast interprotein electron transfer. This 'conformational sampling' enables ETF to recognize structurally distinct partners, whilst also maintaining a degree of specificity. Complex formation triggers mobility of the FAD domain, an 'induced disorder' mechanism contrasting with the more generally accepted models of protein-protein interaction by induced fit mechanisms. We discuss the implications of the highly dynamic nature of ETFs in biological interprotein electron transfer. ETF complexes point to mechanisms of electron transfer in which 'dynamics drive function', a feature that is probably widespread in biology given the modular assembly and flexible nature of biological electron transfer systems.
Szydzik, C; Gavela, A F; Herranz, S; Roccisano, J; Knoerzer, M; Thurgood, P; Khoshmanesh, K; Mitchell, A; Lechuga, L M
2017-08-08
A primary limitation preventing practical implementation of photonic biosensors within point-of-care platforms is their integration with fluidic automation subsystems. For most diagnostic applications, photonic biosensors require complex fluid handling protocols; this is especially prominent in the case of competitive immunoassays, commonly used for detection of low-concentration, low-molecular weight biomarkers. For this reason, complex automated microfluidic systems are needed to realise the full point-of-care potential of photonic biosensors. To fulfil this requirement, we propose an on-chip valve-based microfluidic automation module, capable of automating such complex fluid handling. This module is realised through application of a PDMS injection moulding fabrication technique, recently described in our previous work, which enables practical fabrication of normally closed pneumatically actuated elastomeric valves. In this work, these valves are configured to achieve multiplexed reagent addressing for an on-chip diaphragm pump, providing the sample and reagent processing capabilities required for automation of cyclic competitive immunoassays. Application of this technique simplifies fabrication and introduces the potential for mass production, bringing point-of-care integration of complex automated microfluidics into the realm of practicality. This module is integrated with a highly sensitive, label-free bimodal waveguide photonic biosensor, and is demonstrated in the context of a proof-of-concept biosensing assay, detecting the low-molecular weight antibiotic tetracycline.
NASA Astrophysics Data System (ADS)
Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.
2017-01-01
Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
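The TSA ranking itself is easy to state: compute each site's relative difference from the spatial mean at every time step, then prefer sites whose mean relative difference is near zero and stable over time. A sketch under those standard definitions follows; the combined site-selection heuristic in the last line is an assumption.

```python
import numpy as np

def temporal_stability(theta):
    """theta: (times x sites) soil moisture. Returns each site's mean
    relative difference (MRD) and its standard deviation over time."""
    spatial_mean = theta.mean(axis=1, keepdims=True)
    rel_diff = (theta - spatial_mean) / spatial_mean
    return rel_diff.mean(axis=0), rel_diff.std(axis=0)

rng = np.random.default_rng(3)
theta = 0.25 + 0.05 * rng.standard_normal((60, 12))   # fake 60-day, 12-site set
mrd, sd = temporal_stability(theta)
print("most representative site:", np.argmin(np.abs(mrd) + sd))
```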
Hoang, Phuong Le; Ahn, Sanghoon; Kim, Jeng-o; Kang, Heeshin; Noh, Jiwhan
2017-01-01
In modern high-intensity ultrafast laser processing, detecting the focal position of the working laser beam, at which the intensity is highest and the beam diameter is smallest, and immediately locating the target sample at that point are challenging tasks. A system that allows in-situ real-time focus determination and fabrication using a high-power laser has been in high demand among both engineers and scientists. Conventional techniques require the complicated mathematical theory of wave optics, employing interference as well as diffraction phenomena to detect the focal position; however, these methods are ineffective and expensive for industrial application. Moreover, these techniques cannot perform detection and fabrication simultaneously. In this paper, we propose an optical design capable of detecting the focal point and fabricating complex patterns on a planar sample surface simultaneously. In-situ real-time focus detection is performed using a bandpass filter, which only allows for the detection of laser transmission. The technique enables rapid, non-destructive, and precise detection of the focal point. Furthermore, it is sufficiently simple for application in both science and industry for mass production, and it is expected to contribute to the next generation of laser equipment, which can be used to fabricate micro-patterns with high complexity. PMID:28671566
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
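As a stripped-down illustration of RSS-based localization of this kind, the sketch below turns a handful of (position, RSS) samples into a proximity-weighted centroid estimate of the access point's location. The log-distance path-loss model, its exponent and the reference power are assumptions, and the swarm's collaborative sampling is abstracted to a fixed list of samples.

```python
import numpy as np

def weighted_centroid(positions, rss_dbm, n=2.0, p0=-30.0):
    """Convert RSS to pseudo-distances with a log-distance path-loss
    model (p0 dBm at 1 m, exponent n), then weight by proximity."""
    d = 10 ** ((p0 - rss_dbm) / (10 * n))
    w = 1.0 / np.maximum(d, 1e-3) ** 2
    return (w[:, None] * positions).sum(axis=0) / w.sum()

pos = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], dtype=float)
rss = np.array([-42.0, -55.0, -54.0, -62.0])   # strongest near the origin
print(weighted_centroid(pos, rss))             # estimate pulled toward [0, 0]
```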
Challenges in early clinical development of adjuvanted vaccines.
Della Cioppa, Giovanni; Jonsdottir, Ingileif; Lewis, David
2015-06-08
A three-step approach to the early development of adjuvanted vaccine candidates is proposed, the goal of which is to allow ample space for exploratory and hypothesis-generating human experiments and to select dose(s) and dosing schedule(s) to bring into full development. Although the proposed approach is more extensive than the traditional early development program, the authors suggest that by addressing key questions upfront the overall time, size and cost of development will be reduced and the probability of public health advancement enhanced. The immunogenicity end-points chosen for early development should be critically selected: an established immunological parameter with a well-characterized assay should be selected as the primary end-point for dose and schedule finding; exploratory information-rich end-points should be limited in number and based on pre-defined hypothesis-generating plans, including systems biology and pathway analyses. Building a pharmacodynamic profile is an important aspect of early development: to this end, multiple early (within 24 h) and late (up to one year) sampling is necessary, which can be accomplished by sampling subgroups of subjects at different time points. In most cases the final target population, even if vulnerable, should be considered for inclusion in early development. In order to obtain the multiple formulations necessary for dose and schedule finding, "bed-side mixing" of various components of the vaccine is often necessary: this is a complex and underestimated area that deserves serious research and logistical support. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sampling saddle points on a free energy surface
NASA Astrophysics Data System (ADS)
Samanta, Amit; Chen, Ming; Yu, Tang-Qing; Tuckerman, Mark; E, Weinan
2014-04-01
Many problems in biology, chemistry, and materials science require knowledge of saddle points on free energy surfaces. These saddle points act as transition states and are the bottlenecks for transitions of the system between different metastable states. For simple systems in which the free energy depends on a few variables, the free energy surface can be precomputed, and saddle points can then be found using existing techniques. For complex systems, where the free energy depends on many degrees of freedom, this is not feasible. In this paper, we develop an algorithm for finding the saddle points on a high-dimensional free energy surface "on-the-fly" without requiring a priori knowledge of the free energy function itself. This is done using the general strategy of the heterogeneous multi-scale method: a macro-scale solver, here the gentlest ascent dynamics algorithm, is applied with the needed force and Hessian values computed on-the-fly using a micro-scale model such as molecular dynamics. The algorithm is capable of dealing with problems involving many coarse-grained variables. The utility of the algorithm is illustrated by studying the saddle points associated with (a) the isomerization transition of the alanine dipeptide using two coarse-grained variables, specifically the Ramachandran dihedral angles, and (b) the beta-hairpin structure of the alanine decamer using 20 coarse-grained variables, specifically the full set of Ramachandran angle pairs associated with each residue. For the alanine decamer, we obtain a detailed network showing the connectivity of the minima obtained and the saddle-point structures that connect them, which provides a way to visualize the gross features of the high-dimensional surface.
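Gentlest ascent dynamics can be demonstrated on a toy analytic surface: the walker follows the reversed-gradient flow with the component along its direction vector flipped, while the direction vector relaxes toward the softest Hessian mode. In the paper the force and Hessian come from on-the-fly microscale sampling; here they are analytic, and the potential, step size and iteration count are assumptions.

```python
import numpy as np

def grad_V(p):                        # toy double well: V = (x^2-1)^2 + y^2
    x, y = p
    return np.array([4 * x * (x**2 - 1), 2 * y])

def hess_V(p):
    x, _ = p
    return np.array([[12 * x**2 - 4, 0.0], [0.0, 2.0]])

def gad(p, n, dt=1e-3, steps=20000):
    """Gentlest ascent dynamics: converges to a saddle point of V."""
    for _ in range(steps):
        g, H = grad_V(p), hess_V(p)
        p = p - dt * (g - 2 * np.dot(n, g) * n)       # climb along n only
        n = n - dt * (H @ n - np.dot(n, H @ n) * n)   # rotate toward soft mode
        n = n / np.linalg.norm(n)
    return p

print(gad(np.array([0.8, 0.3]), np.array([1.0, 0.0])))  # -> saddle at (0, 0)
```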
THE POPULATION OF COMPACT RADIO SOURCES IN THE ORION NEBULA CLUSTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forbrich, J.; Meingast, S.; Rivilla, V. M.
We present a deep centimeter-wavelength catalog of the Orion Nebula Cluster (ONC), based on a 30 hr single-pointing observation with the Karl G. Jansky Very Large Array in its high-resolution A-configuration using two 1 GHz bands centered at 4.7 and 7.3 GHz. A total of 556 compact sources were detected in a map with a nominal rms noise of 3 μJy bm-1, limited by complex source structure and the primary beam response. Compared to previous catalogs, our detections increase the sample of known compact radio sources in the ONC by more than a factor of seven. The new data show complex emission on a wide range of spatial scales. Following a preliminary correction for the wideband primary-beam response, we determine radio spectral indices for 170 sources whose index uncertainties are less than ±0.5. We compare the radio to the X-ray and near-infrared point-source populations, noting similarities and differences.
DNA-cisplatin binding mechanism peculiarities studied with single molecule stretching experiments
NASA Astrophysics Data System (ADS)
Crisafuli, F. A. P.; Cesconetto, E. C.; Ramos, E. B.; Rocha, M. S.
2012-02-01
We propose a method to determine the DNA-cisplatin binding mechanism peculiarities by monitoring the mechanical properties of these complexes. To accomplish this task, we have performed single molecule stretching experiments by using optical tweezers, from which the persistence and contour lengths of the complexes can be promptly measured. The persistence length of the complexes as a function of the drug total concentration in the sample was used to deduce the binding data, from which we show that cisplatin binds cooperatively to the DNA molecule, a point which so far has not been stressed in binding equilibrium studies of this ligand.
Sensor-Topology Based Simplicial Complex Reconstruction from Mobile Laser Scanning
NASA Astrophysics Data System (ADS)
Guinard, S.; Vallet, B.
2018-05-01
We propose a new method for the reconstruction of simplicial complexes (combining points, edges and triangles) from 3D point clouds from Mobile Laser Scanning (MLS). Our main goal is to produce a reconstruction of a scene that is adapted to the local geometry of objects. Our method uses the inherent topology of the MLS sensor to define a spatial adjacency relationship between points. We then investigate each possible connection between adjacent points and filter them by searching for collinear structures in the scene, or structures perpendicular to the laser beams. Next, we create triangles for each triplet of self-connected edges. Last, we improve this method with a regularization based on the co-planarity of triangles and the collinearity of remaining edges. We compare our results to a naive simplicial complex reconstruction based on edge length.
CePt2In7: Shubnikov-de Haas measurements on micro-structured samples under high pressures
NASA Astrophysics Data System (ADS)
Kanter, J.; Moll, P.; Friedemann, S.; Alireza, P.; Sutherland, M.; Goh, S.; Ronning, F.; Bauer, E. D.; Batlogg, B.
2014-03-01
CePt2In7 belongs to the CemMnIn3m+2n heavy-fermion family but, compared to the CeMIn5 members of this group, exhibits a more two-dimensional electronic structure. At zero pressure the ground state is antiferromagnetically ordered. Under pressure the antiferromagnetic order is suppressed and a superconducting phase is induced, with a maximum Tc above a quantum critical point around 31 kbar. To investigate the changes in the Fermi surface and effective electron masses around the quantum critical point, Shubnikov-de Haas measurements were conducted under high pressures in an anvil cell. The samples were micro-structured and contacted using a Focused Ion Beam (FIB). The Focused Ion Beam enables sample contacting and structuring down to a sub-micrometer scale, making the measurement of several samples with complex shapes and multiple contacts on a single anvil feasible.
Sur, Maitreyi; Belthoff, James R.; Bjerre, Emily R.; Millsap, Brian A.; Katzner, Todd
2018-01-01
Wind energy development is rapidly expanding in North America, often accompanied by requirements to survey potential facility locations for existing wildlife. Within the USA, golden eagles (Aquila chrysaetos) are among the most high-profile species of birds that are at risk from wind turbines. To minimize golden eagle fatalities in areas proposed for wind development, modified point count surveys are usually conducted to estimate use by these birds. However, it is not always clear what drives variation in the relationship between on-site point count data and actual use by eagles of a wind energy project footprint. We used existing GPS-GSM telemetry data, collected at 15 min intervals from 13 golden eagles in 2012 and 2013, to explore the relationship between point count data and eagle use of an entire project footprint. To do this, we overlaid the telemetry data on hypothetical project footprints and simulated a variety of point count sampling strategies for those footprints. We compared the time an eagle was found in the sample plots with the time it was found in the project footprint using a metric we called “error due to sampling”. Error due to sampling for individual eagles appeared to be influenced by interactions between the size of the project footprint (20, 40, 90 or 180 km2) and the sampling type (random, systematic or stratified) and was greatest on 90 km2 plots. However, random sampling resulted in the lowest error due to sampling within intermediate-sized plots. In addition, sampling intensity and sampling frequency both influenced the effectiveness of point count sampling. Although our work focuses on individual eagles (not the eagle populations typically surveyed in the field), our analysis shows both the utility of simulations to identify specific influences on error and also potential improvements to sampling that consider the context-specific manner in which point counts are laid out on the landscape.
Duester, Lars; Fabricius, Anne-Lena; Jakobtorweihen, Sven; Philippe, Allan; Weigl, Florian; Wimmer, Andreas; Schuster, Michael; Nazar, Muhammad Faizan
2016-11-01
Coacervate-based techniques are intensively used in environmental analytical chemistry to enrich and extract different kinds of analytes. Most methods focus on the total content or the speciation of inorganic and organic substances. Size fractionation is less commonly addressed. Within coacervate-based techniques, cloud point extraction (CPE) is characterized by a phase separation of non-ionic surfactants dispersed in an aqueous solution when the respective cloud point temperature is exceeded. In this context, the feature article raises the following question: May CPE in future studies serve as a key tool (i) to enrich and extract nanoparticles (NPs) from complex environmental matrices prior to analyses and (ii) to preserve the colloidal status of unstable environmental samples? With respect to engineered NPs, a significant gap between environmental concentrations and size- and element-specific analytical capabilities is still visible. CPE may support efforts to overcome this "concentration gap" via the analyte enrichment. In addition, most environmental colloidal systems are known to be unstable, dynamic, and sensitive to changes of the environmental conditions during sampling and sample preparation. This delivers a so far unsolved "sample preparation dilemma" in the analytical process. The authors are of the opinion that CPE-based methods have the potential to preserve the colloidal status of these instable samples. Focusing on NPs, this feature article aims to support the discussion on the creation of a convention called the "CPE extractable fraction" by connecting current knowledge on CPE mechanisms and on available applications, via the uncertainties visible and modeling approaches available, with potential future benefits from CPE protocols.
Molecular diagnosis of α-thalassemia in a multiethnic population.
Gilad, Oded; Shemer, Orna Steinberg; Dgany, Orly; Krasnov, Tanya; Nevo, Michal; Noy-Lotan, Sharon; Rabinowicz, Ron; Amitai, Nofar; Ben-Dor, Shifra; Yaniv, Isaac; Yacobovich, Joanne; Tamary, Hannah
2017-06-01
α-Thalassemia, one of the most common genetic diseases, is caused by deletions or point mutations affecting one to four α-globin genes. Molecular diagnosis is important to prevent the most severe forms of the disease. However, the diagnosis of α-thalassemia is complex due to a high variability of the genetic defects involved, with over 250 described mutations. We summarize herein the findings of genetic analyses of DNA samples referred to our laboratory for the molecular diagnosis of α-thalassemia, along with a detailed clinical description. We utilized a diagnostic algorithm including Gap-PCR, to detect known deletions, followed by sequencing of the α-globin gene, to identify known and novel point mutations, and multiplex ligation-dependent probe amplification (MLPA) for the diagnosis of rare or novel deletions. α-Thalassemia was diagnosed in 662 of 975 samples referred to our laboratory. Most commonly found were deletions (75.3%, including two novel deletions previously described by us); point mutations comprised 25.4% of the cases, including five novel mutations. Our population included mostly Jews (of Ashkenazi and Sephardic origin) and Muslim Arabs, who presented with a higher rate of point mutations and hemoglobin H disease. Overall, we detected 53 different genotype combinations causing a spectrum of clinical phenotypes, from asymptomatic to severe anemia. Our work constitutes the largest group of patients with α-thalassemia originating in the Mediterranean whose clinical characteristics and molecular basis have been determined. We suggest a diagnostic algorithm that leads to an accurate molecular diagnosis in multiethnic populations. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Genova, Alessandro; Pavanello, Michele
2015-12-16
In order to approximately satisfy the Bloch theorem, simulations of complex materials involving periodic systems are made n(k) times more complex by the need to sample the first Brillouin zone at n(k) points. By combining ideas from Kohn-Sham density-functional theory (DFT) and orbital-free DFT, for which no sampling is needed due to the absence of waves, subsystem DFT offers an interesting middle ground capable of sizable theoretical speedups against Kohn-Sham DFT. By splitting the supersystem into interacting subsystems, and mapping their quantum problem onto separate auxiliary Kohn-Sham systems, subsystem DFT allows an optimal topical sampling of the Brillouin zone. We elucidate this concept with two proof-of-principle simulations: a water bilayer on Pt[1 1 1]; and a complex system relevant to catalysis: a thiophene molecule physisorbed on a molybdenum sulfide monolayer deposited on top of an α-alumina support. For the latter system, a speedup of 300% is achieved against the subsystem DFT reference by using an optimized Brillouin zone sampling (600% against KS-DFT).
Mapping of bird distributions from point count surveys
Sauer, J.R.; Pendleton, G.W.; Orsillo, Sandra; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes in proportion counted as a function of observer or habitat differences. Large-scale surveys also generally suffer from regional and temporal variation in sampling intensity. A simulated surface is used to demonstrate sampling principles for maps.
Coarse Point Cloud Registration by Egi Matching of Voxel Clusters
NASA Astrophysics Data System (ADS)
Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo
2016-06-01
Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptor of those voxel clusters are constructed using significant eigenvectors of each voxel in the cluster. Correspondences between clusters in source and target data are obtained according to the similarity between their EGI descriptors. The random sampling consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point to point distance between the two input point clouds. The presented two tests resulted in mean distances of 7.6 mm and 9.5 mm respectively, which are adequate for fine registration.
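The dimensionality features used to cluster voxels are typically derived from the eigenvalues of a local covariance matrix. The sketch below shows one common linearity/planarity/scatter formulation, under the assumption that it matches the paper's PCA step.

```python
import numpy as np

def dimensionality(points):
    """Label a point set as linear, planar or scattered from the PCA
    eigenvalues of its covariance matrix."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(points.T)))[::-1]
    s = np.sqrt(np.maximum(lam, 0))                 # s1 >= s2 >= s3
    a1d = (s[0] - s[1]) / s[0]                      # linearity
    a2d = (s[1] - s[2]) / s[0]                      # planarity
    a3d = s[2] / s[0]                               # scatter
    return ["linear", "planar", "scattered"][int(np.argmax([a1d, a2d, a3d]))]

rng = np.random.default_rng(4)
line = np.outer(np.linspace(0, 1, 100), [1, 0, 0]) \
       + 0.01 * rng.standard_normal((100, 3))
print(dimensionality(line))                         # -> "linear"
```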
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs.
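The core resampling idea can be sketched as a weighted Bayesian bootstrap: perturb the survey weights with a Dirichlet draw and resample units proportionally, so the resulting synthetic population can be analyzed as a simple random sample. This is a simplification of the finite population Bayesian bootstrap developed in the paper; sizes and weights below are illustrative (Python/NumPy):

import numpy as np

def synthetic_population(y, weights, N, rng):
    # One synthetic population of size N from a weighted Bayesian bootstrap:
    # units are resampled with probabilities proportional to
    # Dirichlet-perturbed survey weights, approximately inverting the design.
    w = np.asarray(weights, dtype=float)
    probs = w * rng.dirichlet(np.ones(len(y)))
    probs /= probs.sum()
    return rng.choice(y, size=N, replace=True, p=probs)

rng = np.random.default_rng(1)
y = rng.normal(size=500)                 # observed sample
w = rng.uniform(1, 10, size=500)         # unequal selection weights
pop = synthetic_population(y, w, N=10_000, rng=rng)
print(pop.mean())                        # analyze as a simple random sample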
Biomechanics of the incudo-malleolar-joint - Experimental investigations for quasi-static loads.
Ihrle, S; Gerig, R; Dobrev, I; Röösli, C; Sim, J H; Huber, A M; Eiber, A
2016-10-01
Under large quasi-static loads, the incudo-malleolar joint (IMJ), connecting the malleus and the incus, is highly mobile. It can be classified as a mechanical filter that decouples large quasi-static motions while transferring small dynamic excitations. This is presumed to be due to the complex geometry of the joint, which induces a spatial decoupling between the malleus and incus under large quasi-static loads. Spatial Laser Doppler Vibrometer (LDV) displacement measurements on isolated malleus-incus complexes (MICs) were performed. With the malleus firmly attached to a probe holder, the incus was excited by applying quasi-static forces at different points. For each force application point, the resulting displacement was measured subsequently at different points on the incus. The locations of the force application point and the LDV measurement points were calculated in a post-processing step combining the positions of the LDV points with geometric data of the MIC. The rigid body motion of the incus was then calculated from the multiple displacement measurements for each force application point. The contact regions of the articular surfaces for different load configurations were determined by applying the reconstructed motion to the geometry model of the MIC and calculating the minimal distance between the articular surfaces. The reconstructed motion has a complex spatial characteristic and varies for different force application points. The motion changed with increasing load, caused by the kinematic guidance of the articular surfaces of the joint. The IMJ permits a relatively large rotation around the anterior-posterior axis through the joint when a force is applied at the lenticularis in the lateral direction, before the motion is impeded. This is part of the decoupling of the malleus motion from the incus motion in the case of large quasi-static loads. Copyright © 2015 Elsevier B.V. All rights reserved.
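The rigid body motion reconstruction from multiple measured point displacements can be performed with the standard Kabsch/Procrustes least-squares solution; the sketch below is that generic fit, not necessarily the authors' exact post-processing pipeline (Python/NumPy):

import numpy as np

def fit_rigid_motion(P, Q):
    # Least-squares rotation R and translation t with Q ~ P @ R.T + t
    # (Kabsch algorithm), e.g. from point positions before/after loading.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(2)
P = rng.normal(size=(6, 3))                       # measurement points on the incus
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.1, 0.0, 0.02])     # displaced positions
R, t = fit_rigid_motion(P, Q)
print(np.allclose(R, R_true), t)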
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need to search for an optimal solution against reducing metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and are restricted to one sampling point per iteration. To exploit multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, an earlier objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identifies the robust design solution more efficiently than the single-point sequential sampling approach.
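A toy illustration of objective-oriented sequential sampling: fit a metamodel (here a SciPy RBF interpolant) and, in each iteration, add several infill points that trade off a low predicted objective against distance from the existing samples. The infill criterion and its weight are illustrative stand-ins, not the paper's robust-design criterion:

import numpy as np
from scipy.interpolate import RBFInterpolator

def f(x):                                   # expensive simulation stand-in
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(6, 1))         # initial design
y = f(X)
for it in range(5):
    model = RBFInterpolator(X, y)           # metamodel of the response
    cand = rng.uniform(-2, 2, size=(256, 1))
    dist = np.min(np.abs(cand - X.T), axis=1)     # distance to existing samples
    score = model(cand) - 1.0 * dist        # low prediction, far from old points
    new = cand[np.argsort(score)[:2]]       # two infill points per iteration
    X, y = np.vstack([X, new]), np.concatenate([y, f(new)])
print(X[np.argmin(y)], y.min())             # best design found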
Exploring revictimization risk in a community sample of sexual assault survivors.
Chu, Ann T; Deprince, Anne P; Mauss, Iris B
2014-01-01
Previous research points to links between risk detection (the ability to detect danger cues in various situations) and sexual revictimization in college women. Given important differences between college and community samples that may be relevant to revictimization risk (e.g., the complexity of trauma histories), the current study explored the link between risk detection and revictimization in a community sample of women. Community-recruited women (N = 94) reported on their trauma histories in a semistructured interview. In a laboratory session, participants listened to a dating scenario involving a woman and a man that culminated in sexual assault. Participants were instructed to press a button "when the man had gone too far." Unlike in college samples, revictimized community women (n = 47) did not differ in terms of risk detection response times from women with histories of no victimization (n = 10) or single victimization (n = 15). Data from this study point to the importance of examining revictimization in heterogeneous community samples where risk mechanisms may differ from college samples.
Speeding up Coarse Point Cloud Registration by Threshold-Independent BaySAC Match Selection
NASA Astrophysics Data System (ADS)
Kang, Z.; Lindenbergh, R.; Pu, S.
2016-06-01
This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of average point-to-surface residual to reduce the random measurement error and thus approach the real registration error. BaySAC and other basic sampling algorithms usually need an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function that is used to determine the optimum model, reducing the influence of human factors and improving the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, the point-to-point error in general consists of at least two components: random measurement error and systematic error resulting from a remaining error in the estimated rigid body transformation. Thus, we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and the quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers.
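The threshold-free idea can be illustrated on a simple line-fitting problem: instead of counting inliers against an artificial threshold, each hypothesis is scored by its least median of squares (LMedS) residual. A sketch with arbitrarily chosen hypothesis counts and noise levels, not the paper's registration pipeline:

import numpy as np

def lmeds_line(points, n_hypotheses=500, rng=None):
    # Pick the 2-point line hypothesis minimizing the median squared
    # residual: no inlier threshold is needed, unlike plain RANSAC.
    rng = rng or np.random.default_rng()
    best, best_cost = None, np.inf
    for _ in range(n_hypotheses):
        (x1, y1), (x2, y2) = points[rng.choice(len(points), 2, replace=False)]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        cost = np.median((points[:, 1] - (a * points[:, 0] + b)) ** 2)
        if cost < best_cost:
            best, best_cost = (a, b), cost
    return best

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 200)
y[:60] = rng.uniform(0, 30, 60)          # 30% outliers
print(lmeds_line(np.column_stack([x, y]), rng=rng))   # close to (2.0, 1.0)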
NASA Astrophysics Data System (ADS)
Oba, Masaki; Miyabe, Masabumi; Akaoka, Katsuaki; Wakaida, Ikuo
2016-02-01
We used laser-induced fluorescence imaging with a varying beam focal point to observe ablation plumes from metal and oxide samples of gadolinium. The plumes expand vertically when the focal point is far from the sample surface. In contrast, the plume becomes hemispherical when the focal point is on the sample surface. In addition, the internal plume structure and the composition of the ablated atomic and ionic particles also vary significantly. The fluorescence intensity of a plume from a metal sample is greater than that from an oxide sample, which suggests that the number of monatomic species produced in each plume differs. For both the metal and oxide samples, the most intense fluorescence from atomic (ionic) species is observed with the beam focal point at 3-4 mm (2 mm) from the sample surface.
Leherte, Laurence; Vercauteren, Daniel P
2014-02-01
Reduced point charge models of amino acids are designed (i) from local extrema positions in charge density distribution functions built from the Poisson equation applied to smoothed molecular electrostatic potential (MEP) functions, and (ii) from local maxima positions in promolecular electron density distribution functions. The corresponding charge values are fitted against all-atom Amber99 MEPs. To easily generate reduced point charge models for protein structures, libraries of amino acid templates are built. The program GROMACS is used to generate stable Molecular Dynamics trajectories of an Ubiquitin-ligand complex (PDB: 1Q0W) under various implementation schemes, solvation, and temperature conditions. Point charges that are not located on atoms are treated as virtual sites with a null mass and radius. The results illustrate how the intra- and inter-molecular H-bond interactions are affected by the degree of reduction of the point charge models and give directions for their implementation; special attention to the atoms selected to locate the virtual sites and to the Coulomb-14 interactions is needed. Results obtained at various temperatures suggest that the use of reduced point charge models allows probing of local potential hyper-surface minima that are similar to the all-atom ones but are characterized by lower energy barriers. This makes it possible to generate various conformations of the protein complex more rapidly than with the all-atom point charge representation. Copyright © 2013 Elsevier Inc. All rights reserved.
Rajaram, Kaushik; Losada-Pérez, Patricia; Vermeeren, Veronique; Hosseinkhani, Baharak; Wagner, Patrick; Somers, Veerle; Michiels, Luc
2015-01-01
Over the last three decades, phage display technology has been used for the display of target-specific biomarkers, peptides, antibodies, etc. Phage display-based assays are mostly limited to phage ELISA, which is notorious for its high background signal and laborious methodology. These problems have recently been overcome by designing a dual-display phage with two different end functionalities, namely a streptavidin (STV)-binding protein at one end and a rheumatoid arthritis-specific autoantigenic target at the other. Using this dual-display phage, a much higher sensitivity in screening the specificities of autoantibodies in complex serum samples was achieved compared to the single-display phage system in phage ELISA. Herein, we aimed to develop a novel, rapid, and sensitive dual-display phage assay to detect the presence of autoantibodies in serum samples using quartz crystal microbalance with dissipation monitoring as the sensing platform. The vertical functionalization of the phage over the STV-modified surfaces resulted in clear frequency and dissipation shifts revealing a well-defined viscoelastic signature. Screening for autoantibodies using antihuman IgG-modified surfaces and the dual-display phage with STV magnetic bead complexes made it possible to isolate the target entities from complex mixtures and to achieve a large response compared to negative control samples. This novel dual-display strategy can be a potential alternative to the time-consuming phage ELISA protocols for the qualitative analysis of serum autoantibodies and can be taken as a departure point toward a point-of-care diagnostic system.
NASA Astrophysics Data System (ADS)
Kujawinski, E. B.; Longnecker, K.; Alexander, H.; Dyhrman, S.; Jenkins, B. D.; Rynearson, T. A.
2016-02-01
Phytoplankton blooms in coastal areas contribute a large fraction of primary production to the global oceans. Despite their central importance, there are fundamental unknowns in phytoplankton community metabolism, which limit the development of a more complete understanding of the carbon cycle. Within this complex setting, the tools of systems biology hold immense potential for profiling community metabolism and exploring links to the carbon cycle, but have rarely been applied together in this context. Here we focus on phytoplankton community samples collected from a model coastal system over a three-week period. At each sampling point, we combined two assessments of metabolic function: the meta-transcriptome, or the genes that are expressed by all organisms at each sampling point, and the metabolome, or the intracellular molecules produced during the community's metabolism. These datasets are inherently complementary, with gene expression likely to vary in concert with the concentrations of metabolic intermediates. Indeed, preliminary data show coherence in transcripts and metabolites associated with nutrient stress response and with fixed carbon oxidation. To date, these datasets are rarely integrated across their full complexity but together they provide unequivocal evidence of specific metabolic pathways by individual phytoplankton taxa, allowing a more comprehensive systems view of this dynamic environment. Future application of multi-omic profiling will facilitate a more complete understanding of metabolic reactions at the foundation of the carbon cycle.
NASA Astrophysics Data System (ADS)
Skala, Vaclav
2016-06-01
There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to actually speed up processing. In the case of a convex polygon in E2, a simple Point-in-Polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex-Polygon and Point-in-Convex-Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved similarly.
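For context, the classical O(log N) Point-in-Convex-Polygon test binary-searches the triangle fan around one vertex; the paper's contribution is a preprocessing subdivision that removes even this logarithmic factor. A sketch of the O(log N) baseline (Python):

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def in_convex_polygon(poly, p):
    # O(log N) test for a convex polygon in counter-clockwise order:
    # binary-search the fan wedge around poly[0], then one edge test.
    n = len(poly)
    if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n-1], p) > 0:
        return False
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[hi], p) >= 0

square = [(0, 0), (4, 0), (4, 4), (0, 4)]   # CCW order
print(in_convex_polygon(square, (2, 2)), in_convex_polygon(square, (5, 1)))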
Landgraf, M N; Biebl, J T; Langhagen, T; Hannibal, I; Eggert, T; Vill, K; Gerstl, L; Albers, L; von Kries, R; Straube, A; Heinen, F
2018-02-01
The objective was to evaluate a supposed clinical interdependency of myofascial trigger points and migraine in children. Such an interdependency would support an interaction of spinal and trigeminal afferences in the trigemino-cervical complex as a contributing factor in migraine. Children ≤18 years with a confirmed diagnosis of migraine were prospectively investigated. Comprehensive data on medical history and clinical neurological and psychological status were gathered. Trigger points in the trapezius muscle were identified by palpation, and the threshold of pressure pain at these points was measured. Manual pressure was applied to the trigger points, and the occurrence and duration of induced headache were recorded. At a second consultation (4 weeks after the first), manual pressure at the detected pressure threshold was applied to non-trigger points within the same trapezius muscle (control). Headache and related parameters were again recorded and compared to the results of the first consultation. A total of 13 girls and 13 boys with migraine and a median age of 14.5 (range 6.3-17.8) years took part in the study. Manual pressure to trigger points in the trapezius muscle led to headache lasting beyond the termination of the manual pressure in 13 patients, while no patient experienced headache when manual pressure was applied to non-trigger points at the control visit (p < 0.001). Headache was induced significantly more often in children ≥12 years and in those with an internalizing behavioural disorder. We found an association between trapezius muscle myofascial trigger points and migraine, which might underline the concept of the trigemino-cervical complex, especially in adolescents. In children with migraine, headache can often be induced by pressure to myofascial trigger points, but not by pressure to non-trigger points in the trapezius muscle. This supports the hypothesis of a trigemino-cervical complex in the pathophysiology of migraine, which might have implications for innovative therapies in children with migraine. © 2017 European Pain Federation - EFIC®.
NASA Astrophysics Data System (ADS)
Kazami, Sou; Tsunogae, Toshiaki; Santosh, M.; Tsutsumi, Yukiyasu; Takamura, Yusuke
2016-11-01
The Lützow-Holm Complex (LHC) of East Antarctica forms part of a complex subduction-collision orogen related to the amalgamation of the Neoproterozoic supercontinent Gondwana. Here we report new petrological, geochemical, and geochronological data from a metamorphosed and disrupted layered igneous complex from Akarui Point in the LHC which provide new insights into the evolution of the complex. The complex is composed of mafic orthogneiss (edenite/pargasite + plagioclase ± clinopyroxene ± orthopyroxene ± spinel ± sapphirine ± K-feldspar), meta-ultramafic rock (pargasite + olivine + spinel + orthopyroxene), and felsic orthogneiss (plagioclase + quartz + pargasite + biotite ± garnet). The rocks show obvious compositional layering reflecting the chemical variation possibly through magmatic differentiation. The metamorphic conditions of the rocks were estimated using hornblende-plagioclase geothermometry which yielded temperatures of 720-840 °C. The geochemical data of the orthogneisses indicate fractional crystallization possibly related to differentiation within a magma chamber. Most of the mafic-ultramafic samples show enrichment of LILE, negative Nb, Ta, P and Ti anomalies, and constant HFSE contents in primitive-mantle normalized trace element plots suggesting volcanic arc affinity probably related to subduction. The enrichment of LREE and flat HREE patterns in chondrite-normalized REE plot, with the Nb-Zr-Y, Y-La-Nb, and Th/Yb-Nb/Yb plots also suggest volcanic arc affinity. The felsic orthogneiss plotted on Nb/Zr-Zr diagram (low Nb/Zr ratio) and spider diagrams (enrichment of LILE, negative Nb, Ta, P and Ti anomalies) also show magmatic arc origin. The morphology, internal structure, and high Th/U ratio of zircon grains in felsic orthogneiss are consistent with magmatic origin for most of these grains. Zircon U-Pb analyses suggest Early Neoproterozoic (847.4 ± 8.0 Ma) magmatism and protolith formation. Some older grains (1026-882 Ma) are regarded as xenocrysts from basement entrained in the magma through limited crustal reworking. The younger ages (807-667 Ma) might represent subsequent thermal events. The results of this study suggest that the ca. 850 Ma layered igneous complex in Akarui Point was derived from a magma chamber constructed through arc-related magmatism which included components from ca. 1.0 Ga felsic continental crustal basement. The geochemical characteristics and the timing of protolith emplacement from this complex are broadly identical to those of similar orthogneisses from Kasumi Rock and Tama Point in the LHC and the Kadugannawa Complex in Sri Lanka, which record Early Neoproterozoic (ca. 1.0 Ga) arc magmatism. Although the magmatic event in Akarui Point is slightly younger, the thermal event probably continued from ca. 1.0 Ga to ca. 850 Ma or even to ca. 670 Ma. We therefore correlate the Akarui Point igneous complex with those in the LHC and Kadugannawa Complex formed under similar Early Neoproterozoic arc magmatic events during the convergent margin processes prior to the assembly of the Gondwana supercontinent.
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity, using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on those of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability distribution of the largest knee point and the other parameters are updated before solving the optimization problem in each step. An example of source detection for DNS DoS flooding attacks is provided to illustrate the application of the proposed algorithm.
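A sketch of the two ingredients, partial top-k selection and knee location. Here the knee is taken as the point of maximum gap below the chord joining the ends of the sorted curve, a common heuristic; the paper's probabilistic choice of k and the cascading steps are not reproduced:

import heapq
import numpy as np

def knee_of_topk(values, k):
    # Partial-sort the k largest values in O(n log k), then locate the knee
    # as the index of maximum vertical distance below the chord joining the
    # first and last points of the sorted (descending, convex) curve.
    top = np.array(heapq.nlargest(k, values))
    x = np.arange(k, dtype=float)
    chord = top[0] + (top[-1] - top[0]) * x / (k - 1)
    return int(np.argmax(chord - top)), top

values = np.random.default_rng(5).pareto(2.0, 10_000).tolist()
idx, top = knee_of_topk(values, k=100)
print(idx, top[idx])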
Sm-Nd isotopic systematics of the ancient Gneiss complex, southern Africa
NASA Technical Reports Server (NTRS)
Carlson, R. W.; Hunter, D. R.; Barker, F.
1983-01-01
In order to shed new light on the question of the absolute and relative ages of the Ancient Gneiss Complex (AGC) and the Onverwacht Group, a Sm-Nd whole-rock and mineral isochron study of the AGC was begun. At this point, the whole-rock study of samples from the Bimodal Suite, selected from those studied for their geochemical characteristics by Hunter et al., is complete. These results and their implications for the chronologic evolution of the Kaapvaal craton and the sources of these ancient rocks are discussed.
ERIC Educational Resources Information Center
Harris, David; Gomez Zwiep, Susan
2013-01-01
Graphs represent complex information. They show relationships and help students see patterns and compare data. Students often do not appreciate the illuminating power of graphs, interpreting them literally rather than as symbolic representations (Leinhardt, Zaslavsky, and Stein 1990). Students often read graphs point by point instead of seeing…
Inhomogeneous point-process entropy: An instantaneous measure of complexity in discrete systems
NASA Astrophysics Data System (ADS)
Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2014-05-01
Measures of entropy have been widely used to characterize complexity, particularly in physiological dynamical systems modeled in discrete time. Current approaches associate these measures with finite single values within an observation window, and thus cannot characterize the system evolution at each moment in time. Here, we propose a new definition of approximate and sample entropy based on inhomogeneous point-process theory. The discrete time series is modeled through probability density functions, which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through probability functions, the novel indices are able to provide instantaneous tracking of the system complexity. The new measures are tested on synthetic data, as well as on real data gathered from the heartbeat dynamics of healthy subjects and patients with heart failure, and from gait recordings of short walks by young and elderly subjects. Results show that instantaneous complexity is able to effectively track the system dynamics and is not affected by statistical noise properties.
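For contrast with the instantaneous indices proposed here, a minimal implementation of the classical window-based sample entropy, i.e. the single-value measure the paper generalizes (standard textbook formulation, Python/NumPy):

import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # Classic SampEn: -log of the ratio of (m+1)-point to m-point template
    # matches within tolerance r * std(x), self-matches excluded.
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count(m):
        templ = np.array([x[i:i + m] for i in range(len(x) - m)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return (np.sum(d <= tol) - len(templ)) / 2
    return -np.log(count(m + 1) / count(m))

rng = np.random.default_rng(6)
print(sample_entropy(rng.normal(size=300)))      # white noise: high entropy
print(sample_entropy(np.sin(np.arange(300)/5)))  # regular signal: low entropy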
The screening and ranking algorithm for change-points detection in multiple samples
Song, Chi; Min, Xiaoyi; Zhang, Heping
2016-01-01
Chromosome copy number variation (CNV) is the deviation of genomic regions from their normal copy number states, which may be associated with many human diseases. Current genetic studies usually collect hundreds to thousands of samples to study the association between CNV and diseases. CNVs can be called by detecting change-points in the means of sequences of array-based intensity measurements. Although multiple samples are of interest, the majority of available CNV calling methods are single-sample based. Only a few multiple-sample methods have been proposed, using scan statistics that are computationally intensive and designed to detect either common or rare change-points. In this paper, we propose a novel multiple-sample method that adaptively combines the scan statistics of the screening and ranking algorithm (SaRa); it is computationally efficient and able to detect both common and rare change-points. We prove that asymptotically this method finds the true change-points with almost certainty, and we show in theory that multiple-sample methods are superior to single-sample methods when shared change-points are of interest. Additionally, we report extensive simulation studies examining the performance of the proposed method. Finally, using our proposed method as well as two competing approaches, we attempt to detect CNVs in data from the Primary Open-Angle Glaucoma Genes and Environment study, and conclude that our method is faster and requires less information while its ability to detect CNVs is comparable or better. PMID:28090239
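The single-sample screening statistic underlying SaRa is simple: at each position, compare the means of the h points to the left and to the right; pronounced local maxima suggest change-points. A sketch of that statistic (the multiple-sample adaptive combination is the paper's contribution and is not shown):

import numpy as np

def sara_statistic(y, h):
    # |mean of h points to the right - mean of h points to the left|
    # at each interior position, computed via cumulative sums.
    c = np.concatenate([[0.0], np.cumsum(y)])
    left = (c[h:-h] - c[:-2*h]) / h
    right = (c[2*h:] - c[h:-h]) / h
    return np.abs(right - left)          # entry i refers to position i + h

rng = np.random.default_rng(7)
y = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.5, 1, 100),
                    rng.normal(0, 1, 300)])
D = sara_statistic(y, h=30)
print(np.argmax(D) + 30)                 # near a true change-point (300 or 400)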
Dynamical analysis of a fractional SIR model with birth and death on heterogeneous complex networks
NASA Astrophysics Data System (ADS)
Huo, Jingjing; Zhao, Hongyong
2016-04-01
In this paper, a fractional SIR model with birth and death rates on heterogeneous complex networks is proposed. Firstly, we obtain a threshold value R0 based on the existence of the endemic equilibrium point E∗, which completely determines the dynamics of the model. Secondly, by using a Lyapunov function and Kirchhoff's matrix tree theorem, the global asymptotic stability of the disease-free equilibrium point E0 and the endemic equilibrium point E∗ of the model is investigated. That is, when R0 < 1, the disease-free equilibrium point E0 is globally asymptotically stable and the disease always dies out; when R0 > 1, the disease-free equilibrium point E0 becomes unstable and there exists a unique endemic equilibrium point E∗, which is globally asymptotically stable, and the disease is uniformly persistent. Finally, the effects of various immunization schemes are studied and compared. Numerical simulations are given to demonstrate the main results.
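For intuition about the threshold behavior described above, a classical (integer-order) discrete-time SIR simulation on a heterogeneous scale-free network, using networkx; the fractional-order dynamics, the birth/death terms, and the Lyapunov analysis are beyond this toy sketch, and all rates are illustrative:

import numpy as np
import networkx as nx

def sir_on_network(G, beta, mu, steps=200, seed=0):
    # Each infected node transmits to each susceptible neighbour with
    # probability beta and recovers with probability mu per step.
    rng = np.random.default_rng(seed)
    state = {n: "S" for n in G}
    state[next(iter(G))] = "I"                     # one initial infection
    for _ in range(steps):
        new = state.copy()
        for n, s in state.items():
            if s == "I":
                for nb in G[n]:
                    if state[nb] == "S" and rng.random() < beta:
                        new[nb] = "I"
                if rng.random() < mu:
                    new[n] = "R"
        state = new
    return sum(s == "R" for s in state.values()) / len(G)

G = nx.barabasi_albert_graph(2000, 3, seed=1)      # heterogeneous degrees
for beta in (0.01, 0.05, 0.2):                     # crossing the threshold
    print(beta, sir_on_network(G, beta, mu=0.1))   # final epidemic size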
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-01
Localization of access points has become an important research problem due to the wide range of applications it addresses, such as dismantling critical security threats caused by rogue access points or optimizing the wireless coverage of access points within a service area. Existing solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate their efficiency. Techniques that estimate distance from samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point’s received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. Experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner. PMID:29385042
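If one does assume a propagation model (which, notably, the paper's method avoids), access point localization from received signal strength samples reduces to nonlinear least squares. A sketch with an assumed log-distance path-loss model and illustrative parameters:

import numpy as np
from scipy.optimize import least_squares

def rss_model(ap, positions, p0=-30.0, n=2.5):
    # Log-distance path loss: RSS = p0 - 10 n log10(d); p0 and n assumed.
    d = np.linalg.norm(positions - ap, axis=1)
    return p0 - 10.0 * n * np.log10(d)

rng = np.random.default_rng(8)
ap_true = np.array([12.0, 7.0])
robots = rng.uniform(0, 20, size=(25, 2))                  # sample locations
rss = rss_model(ap_true, robots) + rng.normal(0, 1.0, 25)  # noisy measurements

fit = least_squares(lambda ap: rss_model(ap, robots) - rss, x0=[10.0, 10.0])
print(fit.x)        # close to ap_true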
NASA Astrophysics Data System (ADS)
Ge, Xuming
2017-08-01
The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.
NASA Astrophysics Data System (ADS)
Jorge-Villar, Susana E.; Edwards, Howell G. M.
2013-03-01
Raman spectroscopy is a valuable analytical technique for the identification of biomolecules and minerals in natural samples, involving little or minimal sample manipulation. In this paper, we evaluate the advantages and disadvantages of this technique applied to the study of extremophiles. Furthermore, we review the results published to date on the bio- and geo-strategies adopted by different types of extremophile colonies of microorganisms. We also show the characteristic Raman signatures for the identification of pigments and minerals that appear in these complex samples.
Nitric Oxide Measurement Study. Volume II. Probe Methods,
1980-05-01
case of the Task I study, it should be pointed out that at lower gas temperatures where much of the study was performed, the mass flow through the...third body as pointed out by Matthews, et al. (1977) but also dependent on the viscosity of the sampled gas for standard commercial units (Folsom and...substantially above the dew point (based on the maximum pressure in the sampling system and the initial water concentration) or (2) sample line and
Dobson, Ian; Carreras, Benjamin A; Lynch, Vickie E; Newman, David E
2007-06-01
We give an overview of a complex systems approach to large blackouts of electric power transmission systems caused by cascading failure. Instead of looking at the details of particular blackouts, we study the statistics and dynamics of series of blackouts with approximate global models. Blackout data from several countries suggest that the frequency of large blackouts is governed by a power law. The power law makes the risk of large blackouts consequential and is consistent with the power system being a complex system designed and operated near a critical point. Power system overall loading or stress relative to operating limits is a key factor affecting the risk of cascading failure. Power system blackout models and abstract models of cascading failure show critical points with power law behavior as load is increased. To explain why the power system is operated near these critical points and inspired by concepts from self-organized criticality, we suggest that power system operating margins evolve slowly to near a critical point and confirm this idea using a power system model. The slow evolution of the power system is driven by a steady increase in electric loading, economic pressures to maximize the use of the grid, and the engineering responses to blackouts that upgrade the system. Mitigation of blackout risk should account for dynamical effects in complex self-organized critical systems. For example, some methods of suppressing small blackouts could ultimately increase the risk of large blackouts.
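A toy model that reproduces the heavy-tailed size statistics discussed above is a near-critical branching process: each failure triggers a Poisson number of further failures with mean close to one. This is a standard abstraction of cascading failure, not the authors' power-system model; the parameters are illustrative:

import numpy as np

def cascade_size(lam, rng, cap=10**6):
    # Total failures in a Galton-Watson cascade: each failure triggers
    # Poisson(lam) further failures. Near lam = 1 sizes are power-law-like.
    total, active = 1, 1
    while active and total < cap:
        children = rng.poisson(lam, size=active).sum()
        total += children
        active = children
    return total

rng = np.random.default_rng(9)
sizes = np.array([cascade_size(0.95, rng) for _ in range(20_000)])
hist, edges = np.histogram(sizes, bins=np.logspace(0, 4, 20))
print(np.column_stack([edges[:-1].astype(int), hist])[:8])   # heavy tail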
Donnenwerth, Michael P; Roukis, Thomas S
2013-04-01
Failed total ankle replacement is a complex problem that should only be treated by experienced foot and ankle surgeons. Significant bone loss can preclude revision total ankle replacement and obligate revision through a complex tibio-talo-calcaneal arthrodesis. A systematic review of the world literature reveals a nonunion rate of 24.2%. A weighted mean of the modified American Orthopaedic Foot and Ankle Society Ankle and Hindfoot Scale demonstrated fair patient outcomes of 58.1 points on an 86-point scale (67.6 points on a 100-point scale). Complications were observed in 38 of 62 (62.3%) patients reviewed, the most common complication being nonunion. Copyright © 2013 Elsevier Inc. All rights reserved.
Vanishing Point Extraction and Refinement for Robust Camera Calibration
Tsai, Fuan
2017-01-01
This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experiment results indicate that the vanishing point refinement process can significantly improve camera calibration parameters and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966
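A sketch of the basic estimation step: each line segment defines a homogeneous line by a cross product, and the vanishing point is their least-squares intersection (the smallest singular vector of the stacked line matrix). The paper's cascaded Hough detection and refinement stages are not reproduced, and the segment coordinates are made up:

import numpy as np

def vanishing_point(segments):
    # Each segment (p, q) gives a homogeneous line l = p x q; the vanishing
    # point v minimizes sum (l . v)^2 over normalized lines.
    lines = [np.cross([*p, 1.0], [*q, 1.0]) for p, q in segments]
    L = np.array(lines)
    L /= np.linalg.norm(L[:, :2], axis=1, keepdims=True)
    _, _, Vt = np.linalg.svd(L)
    v = Vt[-1]
    return v[:2] / v[2]

# Segments of lines that all pass (noisily) near the point (100, 50):
segs = [((0, 0), (100, 50)), ((0, 20), (100, 50.5)), ((0, -30), (99.5, 50))]
print(vanishing_point(segs))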
Symmetric and Asymmetric Tendencies in Stable Complex Systems
Tan, James P. L.
2016-01-01
A commonly used approach to study stability in a complex system is by analyzing the Jacobian matrix at an equilibrium point of a dynamical system. The equilibrium point is stable if all eigenvalues have negative real parts. Here, by obtaining eigenvalue bounds of the Jacobian, we show that stable complex systems will favor mutualistic and competitive relationships that are asymmetrical (non-reciprocative) and trophic relationships that are symmetrical (reciprocative). Additionally, we define a measure called the interdependence diversity that quantifies how distributed the dependencies are between the dynamical variables in the system. We find that increasing interdependence diversity has a destabilizing effect on the equilibrium point, and the effect is greater for trophic relationships than for mutualistic and competitive relationships. These predictions are consistent with empirical observations in ecology. More importantly, our findings suggest stabilization algorithms that can apply very generally to a variety of complex systems. PMID:27545722
NASA Astrophysics Data System (ADS)
Eduardo Virgilio Silva, Luiz; Otavio Murta, Luiz
2012-12-01
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by the Tsallis generalized entropy, as a function of the q parameter (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) recordings, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series of stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. The values of q at which the maximum point occurs and at which qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10×10-3, 1.11×10-7, and 5.50×10-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistent with the concept of physiologic complexity, which suggests a potential use for chaotic system analysis.
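A rough sketch of the construction: replace the natural logarithm in sample entropy with the Tsallis q-logarithm and take the difference against a shuffled surrogate. This is a simplified reading of qSampEn/qSDiff, not the paper's exact definitions:

import numpy as np

def q_log(x, q):
    # Tsallis q-logarithm; recovers ln(x) in the limit q -> 1.
    return np.log(x) if q == 1 else (x ** (1 - q) - 1) / (1 - q)

def match_ratio(x, m, tol):
    # Count of length-m template matches under the Chebyshev metric.
    t = np.array([x[i:i + m] for i in range(len(x) - m)])
    d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
    return (np.sum(d <= tol) - len(t)) / 2

def q_sampen(x, q, m=2, r=0.2):
    tol = r * np.std(x)
    return -q_log(match_ratio(x, m + 1, tol) / match_ratio(x, m, tol), q)

rng = np.random.default_rng(10)
x = np.cumsum(rng.normal(size=400))          # correlated (random-walk) series
surrogate = rng.permutation(x)               # destroys temporal structure
qs = np.linspace(0.2, 2.0, 10)
qsdiff = [q_sampen(x, q) - q_sampen(surrogate, q) for q in qs]
print(qs[int(np.argmax(np.abs(qsdiff)))])    # q at the qSDiff extremum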
NASA Astrophysics Data System (ADS)
Kang, Zhizhong
2013-10-01
This paper presents a new approach to the automatic registration of terrestrial laser scanning (TLS) point clouds using a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from the 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D correspondences onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probability of each data point using a simplified Bayes' rule, for the purpose of improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with the performance of RANSAC, BaySAC leads to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm achieves high registration accuracy on all experimental datasets.
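A simplified, stochastic BaySAC-style sketch on a toy robust-mean problem: hypothesis sets are drawn according to the current inlier probabilities, which are then updated by a Bayes-rule step with an assumed Gaussian inlier likelihood. BaySAC proper selects the top-n points deterministically and uses a different update; this variant is only meant to convey the idea:

import numpy as np

def baysac(data, fit, residual, n, iters=200, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    p = np.full(len(data), 0.5)                    # prior inlier probabilities
    best, best_score = None, -np.inf
    for _ in range(iters):
        idx = rng.choice(len(data), n, replace=False, p=p / p.sum())
        model = fit(data[idx])
        like = np.exp(-0.5 * (residual(model, data) / sigma) ** 2)
        p = like * p / (like * p + 0.1 * (1 - p))  # 0.1 ~ outlier likelihood
        p = np.clip(p, 1e-6, 1 - 1e-6)             # keep probabilities proper
        if like.sum() > best_score:                # total-likelihood score
            best, best_score = model, like.sum()
    return best

rng = np.random.default_rng(11)
data = np.concatenate([rng.normal(5, 0.5, 80), rng.uniform(-50, 50, 20)])
print(baysac(data, fit=np.mean, residual=lambda m, d: d - m, n=10, rng=rng))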
Ting, Tan Xue; Hashim, Rohaidah; Ahmad, Norazah; Abdullah, Khairul Hafizi
2013-01-01
Pertussis, or whooping cough, is a highly infectious respiratory disease caused by Bordetella pertussis. In vaccinating countries, infants, adolescents, and adults are the relevant patient groups. A total of 707 clinical specimens were received from major hospitals in Malaysia in 2011. These specimens were cultured on Regan-Lowe charcoal agar and subjected to end-point PCR, which amplified the repetitive insertion sequence IS481 and the pertussis toxin promoter gene. Of these specimens, 275 were positive: 4 by culture only, 6 by both end-point PCR and culture, and 265 by end-point PCR only. The majority of the positive cases were from patients ≤3 months old (77.1%) (P < 0.001). There was no significant association between the type of sample collected and end-point PCR results (P > 0.05). Our study showed that the end-point PCR technique was able to pick up more positive cases than the culture method.
Pointo - a Low Cost Solution to Point Cloud Processing
NASA Astrophysics Data System (ADS)
Houshiar, H.; Winkler, S.
2017-11-01
With advances in technology, access to data, especially 3D point cloud data, is more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners, or by very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes as very large packages containing a variety of methods and tools. This results in software that is expensive to acquire and difficult to use, the difficulty being caused by the complicated user interfaces required to accommodate a large list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists, yet they are not necessarily required by the majority of up-and-coming average users of point clouds. In addition to their complexity and high cost, these packages generally rely on expensive, modern hardware and are often compatible with only one specific operating system. Many point cloud customers are not point cloud processing experts and are not willing to bear the high acquisition costs of such software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce cost and complexity, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at the same time. A simple, user-oriented design improves the user experience and lets us optimize our methods for the creation of efficient software. In this paper we introduce the Pointo family, a series of connected software tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotation and documentation to the point clouds.
NASA Astrophysics Data System (ADS)
Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce
2015-09-01
The hybrid point-source/wave-field method is a newly proposed approach for Computer-Generated Hologram (CGH) calculation, based on the slicing of the scene into several depth layers parallel to the hologram plane. The complex wave scattered by each depth layer is then computed using either a wave-field or a point-source approach according to a threshold criterion on the number of points within the layer. Finally, the complex waves scattered by all the depth layers are summed up in order to obtain the final CGH. Although outperforming both point-source and wave-field methods without producing any visible artifact, this approach has not yet been used for animated holograms, and the possible exploitation of temporal redundancies has not been studied. In this paper, we propose a fast computation of video holograms by taking into account those redundancies. Our algorithm consists of three steps. First, intensity and depth data of the current 3D video frame are extracted and compared with those of the previous frame in order to remove temporally redundant data. Then the CGH pattern for this compressed frame is generated using the hybrid point-source/wave-field approach. The resulting CGH pattern is finally transmitted to the video output and stored in the previous frame buffer. Experimental results reveal that our proposed method is able to produce video holograms at interactive rates without producing any visible artifact.
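The temporal-redundancy step can be sketched as a per-pixel change mask on the intensity and depth data: only points flagged as changed need their hologram contribution recomputed. The thresholds and array shapes below are illustrative, not taken from the paper:

import numpy as np

def changed_points(intensity, depth, prev_intensity, prev_depth,
                   tol_i=2.0, tol_d=0.005):
    # Mask of pixels whose intensity or depth changed since the previous
    # frame; only these scene points are passed to the CGH update.
    return (np.abs(intensity - prev_intensity) > tol_i) | \
           (np.abs(depth - prev_depth) > tol_d)

rng = np.random.default_rng(12)
i0, d0 = rng.uniform(0, 255, (240, 320)), rng.uniform(0.5, 2.0, (240, 320))
i1, d1 = i0.copy(), d0.copy()
i1[100:120, 150:180] += 30.0                     # a small moving object
mask = changed_points(i1, d1, i0, d0)
print(mask.sum(), "of", mask.size, "points recomputed")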
Effect of point defects on the amorphization of metallic alloys during ion implantation. [NiTi
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedraza, D.F.; Mansur, L.K.
1985-01-01
A theoretical model of radiation-induced amorphization of ordered intermetallic compounds is developed. The mechanism is proposed to be the buildup of lattice defects to very high concentrations, which destabilizes the crystalline structure. Because simple point defects do not normally reach such levels during irradiation, a new defect complex containing a vacancy and an interstitial is hypothesized. Crucial properties of the complex are that the interstitial sees a local chemical environment similar to that of an atom in the ordered lattice, that the formation of the complex prevents mutual recombination, and that the complex is immobile. The evolution of disorder based on complexes is not accompanied by like-point-defect aggregation. The latter leads to the development of a sink microstructure in alloys that do not become amorphous. For electron irradiation, the complexes form by diffusional encounters. For ion irradiation, complexes are also formed directly in cascades. The possibility of direct amorphization in cascades is also included. Calculations for the compound NiTi show reasonable agreement with measured amorphization kinetics.
NASA Astrophysics Data System (ADS)
Vázquez Tarrío, Daniel; Borgniet, Laurent; Recking, Alain; Liebault, Frédéric; Vivier, Marie
2016-04-01
The present research focuses on the Vénéon River at Plan du Lac (Massif des Écrins, France), an alpine braided gravel-bed stream with a glacio-nival hydrological regime draining a catchment area of 316 km2. The study reach is a 2.5 km braided section located immediately upstream of a small hydropower dam. An airborne LIDAR survey was carried out in October 2014 by EDF (the company managing the dam), and data from this survey were available for the present research. The point density of the LIDAR-derived 3D point cloud was 20-50 points/m2, with a vertical precision of 2-3 cm over flat surfaces. Between April and June 2015, we carried out a photogrammetric campaign based on aerial images taken with a UAV. The UAV-derived point cloud has a point density of 200-300 points/m2 and a vertical precision over flat control surfaces comparable to that of the LIDAR point cloud (2-3 cm). Simultaneously with the UAV campaign, we took several Wolman samples with the aim of characterizing the grain size distribution of the bed sediment. Wolman samples were taken following a geomorphological criterion (unit bars, head/tail of compound bars). Furthermore, some of the Wolman samples were repeated in order to define the uncertainty of our sampling protocol. The LIDAR and UAV-derived point clouds were processed to check whether they were correctly co-aligned. After that, we estimated bed roughness using the detrended standard deviation of heights in a 40 cm window. All of this data treatment was done in CloudCompare. We then measured the distribution of roughness in the same geomorphological units where we took the Wolman samples and compared it with the grain size distributions measured in the field: differences between the UAV point cloud roughness distributions and the measured grain size distributions (~1-2 cm) are of the same order of magnitude as the differences found between the repeated Wolman samples (~0.5-1.5 cm). Differences with the LIDAR-derived roughness distributions are only slightly higher, which could be due to the lower point density of the LIDAR point clouds.
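The roughness metric is straightforward to sketch: within each window, detrend the heights (here with a local plane fit) and take the standard deviation of the residuals. A simplified gridded version with illustrative parameters, not the exact CloudCompare computation:

import numpy as np

def detrended_roughness(xyz, window=0.4):
    # Std-dev of plane-detrended heights of the points falling in a square
    # window around each grid cell centre (window in the same units as xyz).
    x, y, z = xyz.T
    out = {}
    for cx in np.arange(x.min(), x.max(), window):
        for cy in np.arange(y.min(), y.max(), window):
            m = (np.abs(x - cx) < window / 2) & (np.abs(y - cy) < window / 2)
            if m.sum() < 10:
                continue
            A = np.column_stack([x[m], y[m], np.ones(m.sum())])
            coef, *_ = np.linalg.lstsq(A, z[m], rcond=None)   # local plane
            out[(round(cx, 2), round(cy, 2))] = np.std(z[m] - A @ coef)
    return out

rng = np.random.default_rng(13)
pts = np.column_stack([rng.uniform(0, 2, 20_000), rng.uniform(0, 2, 20_000),
                       rng.normal(0, 0.02, 20_000)])          # ~2 cm relief
print(list(detrended_roughness(pts).items())[:3])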
Alberti, Giancarla; Biesuz, Raffaela; Pesavento, Maria
2008-12-01
Different natural water samples were investigated to determine the total concentration and the distribution of species for Cu(II), Pb(II), Al(III), and U(VI). The proposed method, named resin titration (RT), was developed in our laboratory to investigate the distribution of species of metal ions in complex matrices. It is a competition method in which a complexing resin competes with the natural ligands present in the sample to bind the metal ions. In the present paper, river, estuarine, and seawater samples, collected during a cruise in the Adriatic Sea, were investigated. For each sample, two RTs were performed using different complexing resins: the iminodiacetic Chelex 100 and the carboxylic Amberlite CG50. In this way, it was possible to detect different classes of ligands. Satisfactory results have been obtained and are commented on critically. They were summarized by principal component analysis (PCA), and correlations with physicochemical parameters allowed the evolution of the metals along the considered transect to be followed. It should be pointed out that, according to our findings, the ligands responsible for metal ion complexation are not the major components of the water system, since they form considerably weaker complexes.
Monte Carlo approaches to sampling forested tracts with lines or points
Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire
2001-01-01
Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...
Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations
NASA Astrophysics Data System (ADS)
Mirloo, Mahsa; Ebrahimnezhad, Hosein
2018-03-01
In this paper, a novel method is proposed to detect salient points of a 3D object that are robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points of the object's protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance from several random points. Then, in each iteration, a new point is added to the set according to the previously selected salient points. After each salient point is added, the decision function is updated; this constrains the next point not to be extracted from the same protrusion part, guaranteeing that a representative point is drawn from every protrusion part. The method is stable against model variations under isometric transformations, scaling, and noise of different strengths, because it uses a feature robust to isometric variations and considers the relations between the salient points. In addition, the number of points used in the averaging process is decreased, which leads to lower computational complexity in comparison with other salient point detection algorithms.
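The first step can be sketched as follows: build a k-nearest-neighbour graph over the vertices, approximate the average geodesic distance with graph shortest paths from a few random sources, and take the maximizer as the first salient point. Parameter values and the graph construction are illustrative (Python/SciPy):

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def first_salient_point(verts, k=8, n_sources=50, seed=0):
    # Vertex with maximum average geodesic distance, approximated by graph
    # shortest paths from a small random subset of source vertices.
    tree = cKDTree(verts)
    dist, idx = tree.query(verts, k + 1)              # k-NN graph (col 0 = self)
    rows = np.repeat(np.arange(len(verts)), k)
    graph = csr_matrix((dist[:, 1:].ravel(), (rows, idx[:, 1:].ravel())),
                       shape=(len(verts), len(verts)))
    rng = np.random.default_rng(seed)
    src = rng.choice(len(verts), n_sources, replace=False)
    D = dijkstra(graph, directed=False, indices=src)  # (n_sources, n_verts)
    return int(np.argmax(D.mean(axis=0)))             # tip of a protrusion

verts = np.random.default_rng(14).normal(size=(1000, 3))
print(first_salient_point(verts))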
Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location
NASA Astrophysics Data System (ADS)
Zhao, A. H.
2014-12-01
Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method of calculating them numerically is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely, the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci with satisfying completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from the nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all focal locus segments are then calculated with the minimum traveltime tree ray-tracing algorithm by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.
Grey W. Pendleton
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation...
Yang, Xiupei; Jia, Zhihui; Yang, Xiaocui; Li, Gu; Liao, Xiangjun
2017-03-01
A cloud point extraction (CPE) method was used as a pre-concentration strategy prior to the determination of trace levels of silver in water by flame atomic absorption spectrometry (FAAS). The pre-concentration is based on the cloud-point behaviour of the non-ionic surfactant Triton X-114 with Ag(I)/diethyldithiocarbamate (DDTC) complexes, the latter being soluble in the micellar phase formed by the former. When the temperature increases above the cloud point, the Ag(I)/DDTC complexes are extracted into the surfactant-rich phase. The factors affecting the extraction efficiency, including the pH of the aqueous solution, the concentration of DDTC, the amount of surfactant, and the incubation temperature and time, were investigated and optimized. Under the optimal experimental conditions, no interference was observed for the determination of 100 ng·mL-1 Ag+ in the presence of various cations below their maximum concentrations allowed in this method, for instance, 50 μg·mL-1 for both Zn2+ and Cu2+, 80 μg·mL-1 for Pb2+, 1000 μg·mL-1 for Mn2+, and 100 μg·mL-1 for both Cd2+ and Ni2+. The calibration curve was linear in the range of 1-500 ng·mL-1 with a limit of detection (LOD) of 0.3 ng·mL-1. The developed method was successfully applied to the determination of trace levels of silver in water samples such as river water and tap water.
López-García, Ignacio; Vicente-Martínez, Yesica; Hernández-Córdoba, Manuel
2015-01-01
The cloud point extraction (CPE) of silver nanoparticles (AgNPs) by Triton X-114 allows chromium(III) ions to be transferred to the surfactant-rich phase, where they can be measured by electrothermal atomic absorption spectrometry. Using a 20 mL sample and 50 μL of Triton X-114 (30% w/v), the enrichment factor was 1150, and calibration graphs were obtained in the 5-100 ng L(-1) chromium range in the presence of 5 µg L(-1) AgNPs. Speciation of trivalent and hexavalent chromium was achieved by carrying out two CPE experiments, one of them in the presence of ethylenediaminetetraacetate. While the first experiment, in the absence of the complexing agent, gave the total chromium concentration, the analytical signal measured in the presence of this chemical allowed the chromium(VI) concentration to be measured, that of chromium(III) being calculated by difference. The reliability of the procedure was verified using three standard reference materials before it was applied to water, beer and wine samples. Copyright © 2014 Elsevier B.V. All rights reserved.
DUCTILE-PHASE TOUGHENED TUNGSTEN FOR PLASMA-FACING MATERIALS IN FUSION REACTORS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henager, Charles H.; Setyawan, Wahyu; Roosendaal, Timothy J.
2017-05-01
Tungsten (W) and W-alloys are the leading candidates for plasma-facing components in nuclear fusion reactor designs because of their high melting point, strength retention at high temperatures, high thermal conductivity, and low sputtering yield. However, tungsten is brittle and does not exhibit the fracture toughness required for licensing in nuclear applications. A promising approach to increasing the fracture toughness of W-alloys is ductile-phase toughening (DPT). In this method, a ductile phase is included in a brittle matrix to prevent or inhibit crack propagation by crack blunting, crack bridging, crack deflection, and crack branching. Model examples of DPT tungsten are explored in this study, including W-Cu and W-Ni-Fe powder product composites. Three-point and four-point notched and/or pre-cracked bend samples were tested at several strain rates and temperatures to help understand deformation, cracking, and toughening in these materials. Data from these tests are used for developing and calibrating crack-bridging models. Finite element damage mechanics models are introduced as a modeling method that appears to capture the complexity of crack growth in these materials.
"Paper Machine" for Molecular Diagnostics.
Connelly, John T; Rolland, Jason P; Whitesides, George M
2015-08-04
Clinical tests based on primer-initiated amplification of specific nucleic acid sequences achieve high levels of sensitivity and specificity. Despite these desirable characteristics, these tests have not reached their full potential because their complexity and expense limit their usefulness to centralized laboratories. This paper describes a device that integrates sample preparation and loop-mediated isothermal amplification (LAMP) with end point detection using a hand-held UV source and camera phone. The prototype integrates paper microfluidics (to enable fluid handling) and a multilayer structure, or "paper machine", that allows a central patterned paper strip to slide in and out of the fluidic path, and thus allows introduction of sample, wash buffers, amplification master mix, and detection reagents with minimal pipetting, in a hand-held, disposable device intended for point-of-care use in resource-limited environments. This device creates a dynamic seal that prevents evaporation during incubation at 65 °C for 1 h. This interval is sufficient to allow a LAMP reaction for the Escherichia coli malB gene to proceed with an analytical sensitivity of 1 double-stranded DNA target copy. Starting with human plasma spiked with whole, live E. coli cells, this paper demonstrates full integration of sample preparation with LAMP amplification and end point detection, with a limit of detection of 5 cells. Further, it shows that the sample preparation method enables concentration of DNA from sample volumes commonly available from a fingerstick blood draw.
Castor, José Martín Rosas; Portugal, Lindomar; Ferrer, Laura; Hinojosa-Reyes, Laura; Guzmán-Mar, Jorge Luis; Hernández-Ramírez, Aracely; Cerdà, Víctor
2016-08-01
A simple, inexpensive and rapid method was proposed for the determination of bioaccessible arsenic in corn and rice samples using an in vitro bioaccessibility assay. The method was based on the preconcentration of arsenic by cloud point extraction (CPE) of its o,o-diethyldithiophosphate (DDTP) complex, generated from an in vitro extract, using polyethylene glycol tert-octylphenyl ether (Triton X-114) as the surfactant, prior to detection by atomic fluorescence spectrometry with a hydride generation system (HG-AFS). The CPE method was optimized by a multivariate approach (two-level full factorial and Doehlert designs). A photo-oxidation step for the organic species prior to HG-AFS detection was included for accurate quantification of total As. The limit of detection was 1.34 μg kg(-1) and 1.90 μg kg(-1) for rice and corn samples, respectively. The accuracy of the method was confirmed by analyzing the certified reference material ERM BC-211 (rice powder). The corn and rice samples analyzed showed a high bioaccessible arsenic content (72-88% and 54-96%, respectively), indicating a potential human health risk. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
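A rough sketch of the gridding idea follows, assuming unit-amplitude emitters and angular-spectrum propagation per depth layer; the paper's exact diffraction kernel, depth-camera pipeline and sampling parameters are not specified here.

```python
import numpy as np

def cgh_from_point_cloud(points, wavelength=633e-9, pitch=8e-6,
                         shape=(512, 512), n_layers=32):
    """Layer-based CGH sketch: bin cloud points into depth grids, rasterize
    each grid, and propagate it to the hologram plane with the angular
    spectrum method -- one FFT pair per layer instead of per-point sums."""
    h, w = shape
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = np.clip(((x - x.min()) / (x.max() - x.min() + 1e-12) * (w - 1)).astype(int), 0, w - 1)
    iy = np.clip(((y - y.min()) / (y.max() - y.min() + 1e-12) * (h - 1)).astype(int), 0, h - 1)
    edges = np.linspace(z.min(), z.max() + 1e-12, n_layers + 1)
    fx = np.fft.fftfreq(w, d=pitch)
    fy = np.fft.fftfreq(h, d=pitch)
    # propagating part of the angular-spectrum transfer function
    kz = 2 * np.pi * np.sqrt(np.maximum(
        (1.0 / wavelength) ** 2 - fx[None, :] ** 2 - fy[:, None] ** 2, 0.0))
    hologram = np.zeros(shape, dtype=complex)
    for i in range(n_layers):
        sel = (z >= edges[i]) & (z < edges[i + 1])
        if not sel.any():
            continue
        layer = np.zeros(shape, dtype=complex)
        layer[iy[sel], ix[sel]] = 1.0          # unit-amplitude emitters
        z_mid = 0.5 * (edges[i] + edges[i + 1])
        hologram += np.fft.ifft2(np.fft.fft2(layer) * np.exp(1j * kz * z_mid))
    return hologram
```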
GENERAL: Bursting Ca2+ Oscillations and Synchronization in Coupled Cells
NASA Astrophysics Data System (ADS)
Ji, Quan-Bao; Lu, Qi-Shao; Yang, Zhuo-Qin; Duan, Li-Xia
2008-11-01
A mathematical model proposed by Grubelnik et al. [Biophys. Chem. 94 (2001) 59] is employed to study the physiological role of mitochondria and the cytosolic proteins in generating complex Ca2+ oscillations. Intracellular bursting calcium oscillations of point-point, point-cycle and two-folded limit cycle types are observed, and explanations are given based on fast/slow dynamical analysis, especially for the point-cycle and two-folded limit cycle types, which have not been reported before. Furthermore, synchronization of coupled bursters of Ca2+ oscillations via gap junctions and the effect of bursting type on the synchronization of coupled cells are studied. It is argued that bursting oscillations of point-point type may achieve synchronization more readily than those of point-cycle type.
Adams, David T.; Langer, William H.; Hoefen, Todd M.; Van Gosen, Bradley S.; Meeker, Gregory P.
2010-01-01
Natural background levels of Libby-type amphibole in the sediment of the Libby valley in Montana have not, up to this point, been determined. The purpose of this report is to provide the preliminary findings of a study designed jointly by the U.S. Geological Survey and the U.S. Environmental Protection Agency and performed by the U.S. Geological Survey. The study worked to constrain the natural background levels of fibrous amphiboles potentially derived from the nearby Rainy Creek Complex. The material selected for this study was sampled from three localities, two of which are active open-pit sand and gravel mines. Seventy samples were collected in total and examined using a scanning electron microscope equipped with an energy-dispersive x-ray spectrometer. All samples contained varying amounts of feldspars, ilmenite, magnetite, quartz, clay minerals, pyroxene minerals, and non-fibrous amphiboles such as tremolite, actinolite, and magnesiohornblende. Of the 70 samples collected, only three had detectable levels of fibrous amphiboles compatible with those found in the Rainy Creek Complex. The maximum concentration identified here of amphiboles potentially derived from the Rainy Creek Complex is 0.083 percent by weight.
Wilms, M; Werner, R; Blendowski, M; Ortmüller, J; Handels, H
2014-01-01
A major problem associated with the irradiation of thoracic and abdominal tumors is respiratory motion. In clinical practice, motion compensation approaches are frequently steered by low-dimensional breathing signals (e.g., spirometry) and patient-specific correspondence models, which are used to estimate the sought internal motion given a signal measurement. Recently, the use of multidimensional signals derived from range images of the moving skin surface has been proposed to better account for complex motion patterns. In this work, a simulation study is carried out to investigate the motion estimation accuracy of such multidimensional signals and the influence of noise, the signal dimensionality, and different sampling patterns (points, lines, regions). A diffeomorphic correspondence modeling framework is employed to relate multidimensional breathing signals derived from simulated range images to internal motion patterns represented by diffeomorphic non-linear transformations. Furthermore, an automatic approach for the selection of optimal signal combinations/patterns within this framework is presented. This simulation study focuses on lung motion estimation and is based on 28 4D CT data sets. The results show that the use of multidimensional signals instead of one-dimensional signals significantly improves the motion estimation accuracy, which is, however, highly affected by noise. Only small differences exist between different multidimensional sampling patterns (lines and regions). Automatically determined optimal combinations of points and lines do not lead to accuracy improvements compared to results obtained by using all points or lines. Our results show the potential of multidimensional breathing signals derived from range images for the model-based estimation of respiratory motion in radiation therapy.
Comparison of Point Matching Techniques for Road Network Matching
NASA Astrophysics Data System (ADS)
Hackeloeer, A.; Klasing, K.; Krisp, J. M.; Meng, L.
2013-05-01
Map conflation investigates the unique identification of geographical entities across different maps depicting the same geographic region. It involves a matching process which aims to find commonalities between geographic features. A specific subdomain of conflation called Road Network Matching establishes correspondences between road networks of different maps on multiple layers of abstraction, ranging from elementary point locations to high-level structures such as road segments or even subgraphs derived from the induced graph of a road network. The process of identifying points located on different maps by means of geometrical, topological and semantical information is called point matching. This paper provides an overview of various techniques for point matching, which is a fundamental requirement for subsequent matching steps focusing on complex high-level entities in geospatial networks. Common point matching approaches as well as certain combinations of these are described, classified and evaluated. Furthermore, a novel similarity metric called the Exact Angular Index is introduced, which considers both topological and geometrical aspects. The results offer a basis for further research on a bottom-up matching process for complex map features, which must rely upon findings derived from suitable point matching algorithms. In the context of Road Network Matching, reliable point matches provide an immediate starting point for finding matches between line segments describing the geometry and topology of road networks, which may in turn be used for performing a structural high-level matching on the network level.
A generalized approach to computer synthesis of digital holograms
NASA Technical Reports Server (NTRS)
Hopper, W. A.
1973-01-01
A hologram is constructed by taking a number of digitized sample points and blending them together to form a ''continuous'' picture. The new system selects a better set of sample points, resulting in an improved hologram from the same amount of information.
Altunay, Nail; Gürkan, Ramazan
2015-05-15
A new cloud-point extraction (CPE) method for the determination of antimony species in biological and beverage samples has been established with flame atomic absorption spectrometry (FAAS). The method is based on the formation of competitive ion-pairing complexes of Sb(III) and Sb(V) with Victoria Pure Blue BO (VPB(+)) at pH 10. The antimony species were individually detected by FAAS. Under the optimized conditions, the calibration range for Sb(V) is 1-250 μg L(-1) with a detection limit of 0.25 μg L(-1) and a sensitivity enhancement factor of 76.3, while the calibration range for Sb(III) is 10-400 μg L(-1) with a detection limit of 5.15 μg L(-1) and a sensitivity enhancement factor of 48.3. The precision, as relative standard deviation, is in the range 0.24-2.35%. The method was successfully applied to the speciation-based determination of antimony in the samples, and its validity was verified by analysis of certified reference materials (CRMs). Copyright © 2014 Elsevier Ltd. All rights reserved.
Gürkan, Ramazan; Kır, Ufuk; Altunay, Nail
2015-08-01
The determination of inorganic arsenic species in water, beverages and foods has become crucial in recent years, because arsenic species are considered carcinogenic and are found at high concentrations in these samples. This communication describes a new cloud-point extraction (CPE) method for the determination of low quantities of arsenic species, detected by UV-Visible spectrophotometry (UV-Vis), in samples purchased from the local market. The method is based on the selective ternary complex of As(V) with acridine orange (AOH(+)), a versatile cationic fluorescent dye, in the presence of tartaric acid and polyethylene glycol tert-octylphenyl ether (Triton X-114) at pH 5.0. Under the optimized conditions, a preconcentration factor of 65 and a detection limit (3S(blank)/m) of 1.14 μg L(-1) were obtained from the calibration curve constructed in the range of 4-450 μg L(-1), with a correlation coefficient of 0.9932 for As(V). The method was validated by the analysis of certified reference materials (CRMs). Copyright © 2015 Elsevier Ltd. All rights reserved.
Higher order correlations of IRAS galaxies
NASA Technical Reports Server (NTRS)
Meiksin, Avery; Szapudi, Istvan; Szalay, Alexander
1992-01-01
The higher order irreducible angular correlation functions are derived, up to the eight-point function, for a sample of 4654 IRAS galaxies, flux-limited at 1.2 Jy in the 60 micron band. The correlations are generally found to be somewhat weaker than those for optically selected galaxies, consistent with the visual impression of looser clusters in the IRAS sample. It is found that the N-point correlation functions can be expressed as the symmetric sum of products of N - 1 two-point functions, although the correlations above the four-point function are consistent with zero. The coefficients are consistent with the hierarchical clustering scenario as modeled by Hamilton and by Schaeffer.
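For reference, the hierarchical form invoked above can be written out for the lowest non-trivial case (three points), with ξ the two-point function and Q a constant amplitude; this is a sketch in terms of spatial correlations, whereas the paper works with the angular analogues.

```latex
% Hierarchical ansatz: the three-point function as the symmetric sum of
% products of two two-point functions, with constant amplitude Q.
\zeta(r_{12}, r_{23}, r_{31}) =
  Q\,\bigl[\, \xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31})
            + \xi(r_{31})\,\xi(r_{12}) \,\bigr]
```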
Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-07-28
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected with the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
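At its core, the voxel-solid idea reduces to occupancy gridding; a minimal sketch follows (the voxel size is arbitrary, and the meshing/export to an FE solver is not shown).

```python
import numpy as np

def voxelize_point_cloud(points, voxel_size=0.05):
    """Sketch of the cloud-to-solid step: every voxel that contains at
    least one point becomes a brick (hexahedral) element."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel_size).astype(int)
    cells = np.unique(idx, axis=0)            # one element per occupied cell
    origins = mins + cells * voxel_size       # lower corner of each element
    return cells, origins

# cells, origins = voxelize_point_cloud(np.random.rand(10000, 3))
# each row of `origins` plus the 8 corner offsets gives an element's nodes
```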
Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation
NASA Astrophysics Data System (ADS)
Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.
2017-05-01
In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. Image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters. Therefore, both the relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is influenced by several factors, such as their multiplicity, measurement precision, and distribution in the 2D images as well as in the 3D scene. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited when only image information can be exploited. Hence, we propose a method that improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of the first iteration is used as a priori information to guide the extraction of new tie points of better quality. Evaluated on multiple case studies, the proposed method shows its validity and high potential for precision improvement.
Abbasi Tarighat, Maryam; Nabavi, Masoume; Mohammadizadeh, Mohammad Reza
2015-06-15
A new multi-component analysis method based on zero-crossing point continuous wavelet transformation (CWT) was developed for the simultaneous spectrophotometric determination of Cu(2+) and Pb(2+) ions, based on complex formation with 2-benzyl espiro[isoindoline-1,5 oxasolidine]-2,3,4 trione (BSIIOT). The absorption spectra were evaluated with respect to synthetic ligand concentration, complexation time and pH. According to the absorbance values, 0.015 mmol L(-1) BSIIOT, 10 min after mixing, and pH 8.0 were used as the optimum values. Complex formation between the BSIIOT ligand and the cations Cu(2+) and Pb(2+) was investigated by application of rank annihilation factor analysis (RAFA). Among the wavelet families, the Daubechies-4 (db4), discrete Meyer (dmey), Morlet (morl) and Symlet-8 (sym8) continuous wavelet transforms were found to be suitable for signal treatment. The new synthetic ligand and the selected mother wavelets enabled the simultaneous determination of the strongly overlapping spectra of the species without any chemical pre-treatment: CWT signals together with the zero-crossing technique were applied directly to the overlapping absorption spectra of Cu(2+) and Pb(2+). Calibration graphs for the estimation of each ion were obtained by measuring the CWT amplitudes at the zero-crossing points of the other ion in the wavelet domain. The proposed method was validated by simultaneous determination of Cu(2+) and Pb(2+) ions in red beans, walnut, rice, tea and soil samples, and the results were compared with those predicted by partial least squares (PLS) and flame atomic absorption spectrophotometry (FAAS). Copyright © 2015 Elsevier B.V. All rights reserved.
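A brief sketch of the zero-crossing CWT step using PyWavelets follows; the scale and mother wavelet are placeholders for the db4/dmey/morl/sym8 choices screened in the paper.

```python
import numpy as np
import pywt

def cwt_amplitudes(spectra, scale=32, wavelet="morl"):
    """Transform each absorption spectrum with a continuous wavelet at one
    scale; quantification then reads these amplitudes at the zero-crossing
    wavelengths of the interfering component's transform."""
    return np.array([pywt.cwt(s, [scale], wavelet)[0][0] for s in spectra])

# zero crossings of a pure Pb(2+) transform mark the channels where Cu(2+)
# can be calibrated free of Pb interference (and vice versa):
# zc = np.where(np.diff(np.sign(cwt_amplitudes([pure_pb])[0])))[0]
```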
NASA Astrophysics Data System (ADS)
Nieuwoudt, Michel K.; Holroyd, Steve E.; McGoverin, Cushla M.; Simpson, M. Cather; Williams, David E.
2017-02-01
Point-of-care diagnostics are of interest in the medical, security and food industries, the latter particularly for screening food adulterated for economic gain. Milk adulteration continues to be a major problem worldwide, and different methods to detect fraudulent additives have been investigated for over a century. Laboratory-based methods are limited in their application to point-of-collection diagnosis and also require expensive instrumentation, chemicals and skilled technicians. This has encouraged exploration of spectroscopic methods as more rapid and inexpensive alternatives. Raman spectroscopy has excellent potential for screening of milk because of the rich complexity inherent in its signals. Rapid advances in photonic technologies and fabrication methods are enabling increasingly sensitive portable mini-Raman systems to be placed on the market that are both affordable and feasible for point-of-care and point-of-collection applications. We have developed a powerful spectroscopic method for rapidly screening liquid milk for sucrose and four nitrogen-rich adulterants (dicyandiamide (DCD), ammonium sulphate, melamine, urea), using a combined system: a small, portable Raman spectrometer with a focusing fibre-optic probe and optimized reflective focusing wells, simply fabricated in aluminium. The reliable sample presentation of this system enabled high reproducibility of 8% RSD (relative standard deviation) within four minutes. Limit-of-detection intervals for PLS calibrations ranged between 140-520 ppm for the four N-rich compounds and between 0.7-3.6% for sucrose. The portability of the system and the reliability and reproducibility of this technique open opportunities for general, reagentless adulteration screening of biological fluids as well as milk at point-of-collection.
Communicating to Learn: Infants' Pointing Gestures Result in Optimal Learning
ERIC Educational Resources Information Center
Lucca, Kelsey; Wilbourn, Makeba Parramore
2018-01-01
Infants' pointing gestures are a critical predictor of early vocabulary size. However, it remains unknown precisely how pointing relates to word learning. The current study addressed this question in a sample of 108 infants, testing one mechanism by which infants' pointing may influence their learning. In Study 1, 18-month-olds, but not…
Zhang, Bin Bin; Shi, Yi; Chen, Hui; Zhu, Qing Xia; Lu, Feng; Li, Ying Wei
2018-01-02
By coupling surface-enhanced Raman spectroscopy (SERS) with thin-layer chromatography (TLC), a powerful method for detecting complex samples was successfully developed. However, in the TLC-SERS method, the metal nanoparticles serving as the SERS-active substrate are likely to disturb the detection of target compounds, particularly of compounds that overlap after TLC development. In addition, SERS detection of compounds that are invisible under both visible light and UV 254/365 after TLC development is still a significant challenge. In this study, we demonstrated a facile strategy to fabricate a TLC plate with metal-organic framework-modified gold nanoparticles as a separable SERS substrate, on which all separated components, including overlapping and invisible compounds, could be detected by a point-by-point SERS scan along the development direction. Rhodamine 6G (R6G) was used as a probe to evaluate the performance of the substrate. The results indicated that the substrate provided good sensitivity and reproducibility, and optimal SERS signals could be collected in 5 s. Furthermore, this new substrate exhibited a long shelf life. Thus, our method has great potential for the sensitive and rapid detection of overlapping and invisible compounds in complex samples after TLC development. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
A new mosaic method for three-dimensional surface
NASA Astrophysics Data System (ADS)
Yuan, Yun; Zhu, Zhaokun; Ding, Yongjun
2011-08-01
Three-dimensional (3-D) data mosaicking is an indispensable step in surface measurement and digital terrain map generation. To address the problem of mosaicking locally unorganized point clouds with only coarse registration and many mismatched points, a new RANSAC-based mosaic method for 3-D surfaces is proposed. Each iteration of the method proceeds through random sampling with an additional shape constraint, data normalization of the point clouds, absolute orientation, data denormalization, inlier counting, etc. After N random sampling trials the largest consensus set is selected, and finally the model is re-estimated using all the points in the selected subset. The minimal subset is composed of three non-collinear points forming a triangle, and the shape of the triangle is taken into account during random sampling to make the selection reasonable. A new coordinate system transformation algorithm presented in this paper is used to avoid singularity: the whole rotation between the two coordinate systems is solved by two successive rotations expressed by Euler angle vectors, each with an explicit physical meaning. Both simulated and real data are used to demonstrate the correctness and validity of the method. The method has good noise immunity due to its robust estimation property, and high accuracy, since the shape constraint is added to the random sampling and data normalization to the absolute orientation. It is applicable to high-precision measurement of three-dimensional surfaces and to 3-D terrain mosaicking.
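A condensed sketch of the RANSAC loop described above, assuming putative correspondences between the two clouds are already given; the shape constraint is implemented as a minimum-angle test on the sampled triangle, and the paper's normalization and Euler-angle orientation steps are folded into a standard SVD (Kabsch) solve.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def ransac_mosaic(P, Q, trials=500, tol=0.05, min_angle_deg=20):
    """RANSAC surface-mosaic sketch over corresponding points P[i] <-> Q[i]:
    sample well-shaped triples, solve absolute orientation, keep the
    largest consensus set, then re-estimate from all its inliers."""
    rng = np.random.default_rng(0)
    best_count, best_inliers = 0, None
    for _ in range(trials):
        i = rng.choice(len(P), 3, replace=False)
        a, b, c = P[i]
        v1, v2 = b - a, c - a
        # shape constraint: reject nearly collinear (thin) triangles
        cosang = abs(v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
        if cosang > np.cos(np.radians(min_angle_deg)):
            continue
        R, t = kabsch(P[i], Q[i])
        inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < tol
        if inliers.sum() > best_count:
            best_count, best_inliers = inliers.sum(), inliers
    return kabsch(P[best_inliers], Q[best_inliers])
```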
Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings
NASA Astrophysics Data System (ADS)
Hodgkinson, P.; Holmes, K. J.; Hore, P. J.
Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times, with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J, by model fitting, turns out to involve just a few key time points, for example, at the first node (t = 1/J) of the sin(πJt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial, especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
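A minimal illustration of the Cramér-Rao reasoning for a one-parameter anti-phase model s(t) = sin(πJt); the J value, noise level and acquisition window are arbitrary, and the paper's full treatment (signal averaging, additional parameters, relaxation) is omitted.

```python
import numpy as np

def crlb_J(t, J=7.0, sigma=1.0):
    """Cramer-Rao lower bound on std(J_hat) for the model
    s(t) = sin(pi*J*t) sampled at times t with white noise of std sigma."""
    ds_dJ = np.pi * t * np.cos(np.pi * J * t)   # sensitivity of the model to J
    fisher = (ds_dJ ** 2).sum() / sigma ** 2
    return 1.0 / np.sqrt(fisher)

# 64 uniform samples vs. 64 samples clustered around the first node t = 1/J:
print(crlb_J(np.linspace(0, 0.2, 64)))                 # linear sampling
print(crlb_J(1 / 7.0 + np.linspace(-0.02, 0.02, 64)))  # clustered: smaller bound
```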
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Edith E., E-mail: ed.mueller@salk.at; Mayr, Johannes A., E-mail: h.mayr@salk.at; Zimmermann, Franz A., E-mail: f.zimmermann@salk.at
2012-01-20
Highlights: ► We examined OXPHOS and citrate synthase enzyme activities in HEK293 cells devoid of mtDNA. ► Enzymes partially encoded by mtDNA show reduced activities. ► The entirely nuclear encoded complex II and citrate synthase also exhibit reduced activities. ► Loss of mtDNA induces a feedback mechanism that downregulates complex II and citrate synthase. -- Abstract: Mitochondrial DNA (mtDNA) depletion syndromes are generally associated with reduced activities of oxidative phosphorylation (OXPHOS) enzymes that contain subunits encoded by mtDNA. Conversely, entirely nuclear encoded mitochondrial enzymes in these syndromes, such as the tricarboxylic acid cycle enzyme citrate synthase (CS) and OXPHOS complex II, usually exhibit normal or compensatorily enhanced activities. Here we report that a human cell line devoid of mtDNA (HEK293 ρ0 cells) has diminished activities of both complex II and CS. This finding indicates the existence of a feedback mechanism in ρ0 cells that downregulates the expression of entirely nuclear encoded components of mitochondrial energy metabolism.
Chaos and complexity by design
Roberts, Daniel A.; Yoshida, Beni
2017-04-20
We study the relationship between quantum chaos and pseudorandomness by developing probes of unitary design. A natural probe of randomness is the "frame potential," which is minimized by unitary k-designs and measures the 2-norm distance between the Haar random unitary ensemble and another ensemble. A natural probe of quantum chaos is out-of-time-order (OTO) four-point correlation functions. We also show that the norm squared of a generalization of out-of-time-order 2k-point correlators is proportional to the kth frame potential, providing a quantitative connection between chaos and pseudorandomness. In addition, we prove that these 2k-point correlators for Pauli operators completely determine the k-fold channel of an ensemble of unitary operators. Finally, we use a counting argument to obtain a lower bound on the quantum circuit complexity in terms of the frame potential. This provides a direct link between chaos, complexity, and randomness.
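For reference, a sketch of the frame potential in standard notation; it is minimized exactly by unitary k-designs, and for the Haar ensemble it evaluates to k! when the Hilbert-space dimension is sufficiently large.

```latex
% k-th frame potential of an ensemble E of unitaries; minimized
% exactly when E is a unitary k-design.
F^{(k)}_{\mathcal{E}} \;=\; \int_{U,V\in\mathcal{E}} dU\,dV\;
  \bigl|\,\mathrm{Tr}\!\left(U^{\dagger}V\right)\bigr|^{2k}
```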
10. Photocopy of photograph (original photograph in possession of the ...
10. Photocopy of photograph (original photograph in possession of the Ralph M. Parsons Company, Los Angeles California). Photography by the United States Air Force, May 4, 1960. VIEW OF SOUTH FACE OF POINT ARGUELLO LAUNCH COMPLEX 1, PAD 1 (SLC-3) FROM TOP OF CONTROL CENTER (BLDG. 763). ATLAS D BOOSTER FOR THE FIRST SAMOS LAUNCH FROM POINT ARGUELLO LAUNCH COMPLEX 1 (SLC-3) ERECT IN THE SERVICE TOWER. - Vandenberg Air Force Base, Space Launch Complex 3, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
NASA Astrophysics Data System (ADS)
McGuire, N. D.; Ewen, R. J.; de Lacy Costello, B.; Garner, C. E.; Probert, C. S. J.; Vaughan, K.; Ratcliffe, N. M.
2014-06-01
Rapid volatile profiling of stool sample headspace was achieved using a combination of short multi-capillary chromatography column (SMCC), highly sensitive heated metal oxide semiconductor sensor and artificial neural network software. For direct analysis of biological samples this prototype offers alternatives to conventional gas chromatography (GC) detectors and electronic nose technology. The performance was compared to an identical instrument incorporating a long single capillary column (LSCC). The ability of the prototypes to separate complex mixtures was assessed using gas standards and homogenized in house ‘standard’ stool samples, with both capable of detecting more than 24 peaks per sample. The elution time was considerably faster with the SMCC resulting in a run time of 10 min compared to 30 min for the LSCC. The diagnostic potential of the prototypes was assessed using 50 C. difficile positive and 50 negative samples. The prototypes demonstrated similar capability of discriminating between positive and negative samples with sensitivity and specificity of 85% and 80% respectively. C. difficile is an important cause of hospital acquired diarrhoea, with significant morbidity and mortality around the world. A device capable of rapidly diagnosing the disease at the point of care would reduce cases, deaths and financial burden.
NASA Astrophysics Data System (ADS)
Liu, Xiaodong
2017-08-01
A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far-field operator, we establish an inf-criterion for the characterization of the underlying scatterers. This result is then used to give a lower bound on the proposed indicator functional for sampling points inside the scatterers, while for sampling points outside the scatterers we show that the indicator functional decays like Bessel functions as the sampling point moves away from the boundary. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like Bessel functions as the sampling points move away from the boundary. Numerical simulations also show that the proposed sampling method can deal with the multiple, multiscale case, even when the different components are close to each other.
An open-population hierarchical distance sampling model
Sollmann, Rachel; Beth Gardner,; Richard B Chandler,; Royle, J. Andrew; T Scott Sillett,
2015-01-01
Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying number of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
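A sketch of the data-generating process described above (illustrative parameters; half-normal detection and uniform distances are assumptions of this sketch, and the model-fitting step is not shown).

```python
import numpy as np

def simulate_counts(n_points=50, n_surveys=6, lam0=10.0, rate=0.95,
                    sigma=25.0, width=100.0, seed=1):
    """Markovian abundance N[t] ~ Poisson(rate * N[t-1]) per survey point,
    thinned by half-normal distance-based detection."""
    rng = np.random.default_rng(seed)
    N = rng.poisson(lam0, n_points)
    counts = []
    for t in range(n_surveys):
        if t > 0:
            N = rng.poisson(rate * N)               # population dynamics
        d = rng.uniform(0, width, (n_points, N.max()))
        p = np.exp(-d ** 2 / (2 * sigma ** 2))      # half-normal detection
        exists = np.arange(N.max()) < N[:, None]    # mask of real individuals
        counts.append(((rng.random(p.shape) < p) & exists).sum(1))
    return np.array(counts)                         # surveys x points
```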
Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris
Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey; Mark J. Ducey
2005-01-01
Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...
NASA Technical Reports Server (NTRS)
Hada, M.; Saganti, P. B.; Gersey, B.; Wilkins, R.; Cucinotta, F. A.; Wu, H.
2007-01-01
Most reported studies of break point distributions on chromosomes damaged by radiation exposure were carried out with the G-banding technique or were based on the relative lengths of the broken chromosomal fragments. However, these techniques lack accuracy in comparison with the later-developed multicolor banding in situ hybridization (mBAND) technique that is generally used for the analysis of intrachromosomal aberrations such as inversions. Using mBAND, we studied chromosome aberrations in human epithelial cells exposed in vitro to low or high dose rate gamma rays in Houston, low dose rate secondary neutrons at Los Alamos National Laboratory, and high dose rate 600 MeV/u Fe ions at the NASA Space Radiation Laboratory. Detailed analysis of the inversion types revealed that all three radiation types induced a low incidence of simple inversions. Half of the inversions observed after neutron or Fe ion exposure, and the majority of inversions in gamma-irradiated samples, were accompanied by other types of intrachromosomal aberrations. In addition, neutrons and Fe ions induced a significant fraction of inversions that involved complex rearrangements of both inter- and intrachromosomal exchanges. We further compared the break point distributions on chromosome 3 for the three radiation types. The break points were found to be randomly distributed on chromosome 3 after neutron or Fe ion exposure, whereas a non-random distribution with clustered break points was observed for gamma rays. The break point distribution may serve as a potential fingerprint of high-LET radiation exposure.
Meshless Geometric Subdivision
2004-10-01
[Extraction fragments; only partial content is recoverable: a geodesic distance function dM with boundary condition dM(q, q) = 0 is approximated by an eikonal-type equation |∇d(p, ·)| = F̃(p); the meshless subdivision operator is applied to a base point set of 10088 points generated from the Michelangelo Youthful data set, with the aim of dealing with more complex geometry; the authors acknowledge the Stanford Computer Graphics group for permission to use the Michelangelo point sets.]
Bui, H N; Bogers, J P A M; Cohen, D; Njo, T; Herruer, M H
2016-12-01
We evaluated the performance of the HemoCue WBC DIFF, a point-of-care device for total and differential white cell counts, primarily to test its suitability for the mandatory white blood cell monitoring in clozapine use. Leukocyte counts and 5-part differentiation were performed by the point-of-care device and by the routine laboratory method in venous EDTA-blood samples from 20 clozapine users, 20 neutropenic patients, and 20 healthy volunteers. A capillary sample was also drawn from the volunteers. Intra-assay reproducibility and drop-to-drop variation were tested. The correlation between the two methods in venous samples was r > 0.95 for leukocyte, neutrophil, and lymphocyte counts. The correlations between the point-of-care (capillary sample) and routine (venous sample) methods for these cells were 0.772, 0.817 and 0.798, respectively. Only for leukocyte and neutrophil counts was the intra-assay reproducibility sufficient. The point-of-care device can be used to screen for leukocyte and neutrophil counts. Because of the relatively high measurement uncertainty and poor correlation with venous samples, we recommend repeating the measurement with a venous sample if cell counts are in the lower reference range. In the case of clozapine therapy, neutropenia can probably be excluded if high neutrophil counts are found, and patients can continue their therapy. © 2016 John Wiley & Sons Ltd.
Mapping of Bird Distributions from Point Count Surveys
John R. Sauer; Grey W. Pendleton; Sandra Orsillo
1995-01-01
Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes...
Band warping, band non-parabolicity, and Dirac points in electronic and lattice structures
NASA Astrophysics Data System (ADS)
Resca, Lorenzo; Mecholsky, Nicholas A.; Pegg, Ian L.
2017-10-01
We illustrate at a fundamental level the physical and mathematical origins of band warping and band non-parabolicity in electronic and vibrational structures. We point out a robust presence of pairs of topologically induced Dirac points in a primitive-rectangular lattice using a p-type tight-binding approximation. We analyze two-dimensional primitive-rectangular and square Bravais lattices with implications that are expected to generalize to more complex structures. Band warping is shown to arise at the onset of a singular transition to a crystal lattice with a larger symmetry group, which allows the possibility of irreducible representations of higher dimensions, hence band degeneracy, at special symmetry points in reciprocal space. Band warping is incompatible with a multi-dimensional Taylor series expansion, whereas band non-parabolicities are associated with multi-dimensional Taylor series expansions to all orders. Still band non-parabolicities may merge into band warping at the onset of a larger symmetry group. Remarkably, while still maintaining a clear connection with that merging, band non-parabolicities may produce pairs of conical intersections at relatively low-symmetry points. Apparently, such conical intersections are robustly maintained by global topology requirements, rather than any local symmetry protection. For two p-type tight-binding bands, we find such pairs of conical intersections drifting along the edges of restricted Brillouin zones of primitive-rectangular Bravais lattices as lattice constants vary relatively to each other, until these conical intersections merge into degenerate warped bands at high-symmetry points at the onset of a square lattice. The conical intersections that we found appear to have similar topological characteristics as Dirac points extensively studied in graphene and other topological insulators, even though our conical intersections have none of the symmetry complexity and protection afforded by the latter more complex structures.
Lakatos, Gabriella; Gácsi, Márta; Topál, József; Miklósi, Adám
2012-03-01
The aim of the present investigation was to study the visual communication between humans and dogs in relatively complex situations. In the present research, we have modelled more lifelike situations in contrast to previous studies which often relied on using only two potential hiding locations and direct association between the communicative signal and the signalled object. In Study 1, we have provided the dogs with four potential hiding locations, two on each side of the experimenter to see whether dogs are able to choose the correct location based on the pointing gesture. In Study 2, dogs had to rely on a sequence of pointing gestures displayed by two different experimenters. We have investigated whether dogs are able to recognise an 'indirect signal', that is, a pointing toward a pointer. In Study 3, we have examined whether dogs can understand indirect information about a hidden object and direct the owner to the particular location. Study 1 has revealed that dogs are unlikely to rely on extrapolating precise linear vectors along the pointing arm when relying on human pointing gestures. Instead, they rely on a simple rule of following the side of the human gesturing. If there were more targets on the same side of the human, they showed a preference for the targets closer to the human. Study 2 has shown that dogs are able to rely on indirect pointing gestures but the individual performances suggest that this skill may be restricted to a certain level of complexity. In Study 3, we have found that dogs are able to localise the hidden object by utilising indirect human signals, and they are able to convey this information to their owner.
Assessment of Response Surface Models using Independent Confirmation Point Analysis
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2010-01-01
This paper highlights various advantages that confirmation-point residuals have over conventional model design-point residuals in assessing the adequacy of a response surface model fitted by regression techniques to a sample of experimental data. Particular advantages are highlighted for the case of design matrices that may be ill-conditioned for a given sample of data. The impact of both aleatory and epistemological uncertainty in response model adequacy assessments is considered.
Robust non-rigid registration algorithm based on local affine registration
NASA Astrophysics Data System (ADS)
Wu, Liyang; Xiong, Lei; Du, Shaoyi; Bi, Duyan; Fang, Ting; Liu, Kun; Wu, Dongpeng
2018-04-01
To address the low precision and slow convergence of traditional point-set non-rigid registration algorithms on data with complex local deformations, this paper proposes a robust non-rigid registration algorithm based on local affine registration. The algorithm uses a hierarchical iterative method to complete the point-set non-rigid registration from coarse to fine. In each iteration, the sub data point sets and sub model point sets are divided and the shape control points of each sub point set are updated. A control-point-guided affine ICP algorithm then solves the local affine transformation between the corresponding sub point sets, and this transformation is used to update the sub data point sets and their shape control point sets. When the algorithm reaches the maximum iteration depth K, the loop ends and the updated sub data point sets are output. Experimental results demonstrate that the accuracy and convergence of our algorithm are greatly improved compared with traditional point-set non-rigid registration algorithms.
Belu, A; Schnitker, J; Bertazzo, S; Neumann, E; Mayer, D; Offenhäusser, A; Santoro, F
2016-07-01
The preparation of biological cells for either scanning or transmission electron microscopy requires a complex process of fixation, dehydration and drying. Critical point drying is commonly used for samples investigated with a scanning electron beam, whereas resin infiltration is typically used for transmission electron microscopy. Critical point drying may cause cracks at the cellular surface and a sponge-like morphology of indistinguishable intracellular compartments. Resin-infiltrated biological samples result in a solid block of resin, which can be further processed by mechanical sectioning; however, this does not allow top-view examination of small cell-cell and cell-surface contacts. Here, we propose a method for removing excess resin from biological samples before effective polymerization, so that the cells end up embedded in an ultra-thin layer of epoxy resin. In contrast to standard methods, this novel method enables the imaging of individual cells by scanning electron microscopy not only on nanostructured planar surfaces but also on topologically challenging substrates with high-aspect-ratio three-dimensional features. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Schwinger pair production by electric field coupled to inflaton
NASA Astrophysics Data System (ADS)
Geng, Jia-Jia; Li, Bao-Fei; Soda, Jiro; Wang, Anzhong; Wu, Qiang; Zhu, Tao
2018-02-01
We analytically investigate Schwinger pair production in the de Sitter background using the uniform asymptotic approximation method, and show that the equation of motion in general has two turning points, whose nature (single, double, real or complex) depends on the choice of the free parameters of the theory. Different natures of these points lead to different electric currents. In particular, when β ≡ m^2/H^2 - 9/4 is positive, both turning points are complex, and the electric current due to the Schwinger process is highly suppressed, where m and H denote, respectively, the mass of the particle and the Hubble parameter. For the turning points to be real, it is necessary to have β < 0, and the more negative β is, the more easily particles are produced. In addition, when β < 0, we also study particle production when the electric field E is very weak. We find that the electric current in this case is proportional to E^(1/2 - √|β|), which is strongly enhanced in the weak electric field limit when m < √2 H.
Yu, P.; Sun, J.; Wolz, R.; Stephenson, D.; Brewer, J.; Fox, N.C.; Cole, P.E.; Jack, C.R.; Hill, D.L.G.; Schwarz, A.J.
2014-01-01
Objective To evaluate the effect of computational algorithm, measurement variability and cut-point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). Methods We used normal control and amnestic MCI subjects from ADNI-1 as normative reference and screening cohorts. We evaluated the enrichment performance of four widely-used hippocampal segmentation algorithms (FreeSurfer, HMAPS, LEAP and NeuroQuant) in terms of two-year changes in MMSE, ADAS-Cog and CDR-SB. We modeled the effect of algorithm, test-retest variability and cut-point on sample size, screen fail rates and trial cost and duration. Results HCV-based patient selection yielded not only reduced sample sizes (by ~40–60%) but also lower trial costs (by ~30–40%) across a wide range of cut-points. Overall, the dependence on the cut-point value was similar for the three clinical instruments considered. Conclusion These results provide a guide to the choice of HCV cut-point for aMCI clinical trials, allowing an informed trade-off between statistical and practical considerations. PMID:24211008
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, the available fault data are generally sparse and undersampled. In this paper, we propose a fault modeling workflow that can integrate multi-source data to construct fault models. For the faults that cannot be modeled from these data, especially faults that are small-scale or approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault-cutting algorithm can supplement the available fault points at locations where faults cut each other. Adding fault points in poorly sampled areas not only makes fault model construction more efficient but also reduces manual intervention. By using fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that it can be applied to broad and complex geological areas.
Efficient terrestrial laser scan segmentation exploiting data structure
NASA Astrophysics Data System (ADS)
Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa
2016-09-01
New technologies such as lidar enable the rapid collection of massive datasets that model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increasing processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach to point cloud segmentation that uses computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be created quickly using the inherent neighborhood structure established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. The resulting segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. The approach does not depend on pre-defined mathematical models or, consequently, on setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs colorimetric and intensity data as additional sources of information. The algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), a common approach for point cloud segmentation.
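A compact sketch of the panoramic representation the method exploits, assuming the scan is delivered as unordered XYZ plus intensity and regridded at a fixed angular resolution (parameter values are illustrative).

```python
import numpy as np

def scan_to_panorama(xyz, intensity, az_res=0.25, el_res=0.25):
    """Map a TLS scan acquired at fixed angular increments onto a 2D image
    whose pixels hold per-point attributes (here range and intensity);
    the channels can then be fed to any 2D image segmentation algorithm."""
    x, y, z = xyz.T
    rng = np.linalg.norm(xyz, axis=1)
    az = np.degrees(np.arctan2(y, x)) % 360.0          # azimuth in [0, 360)
    el = np.degrees(np.arcsin(z / (rng + 1e-12)))      # elevation angle
    col = (az / az_res).astype(int)
    row = ((el - el.min()) / el_res).astype(int)
    pano = np.full((row.max() + 1, int(360 / az_res), 2), np.nan)
    pano[row, col, 0] = rng
    pano[row, col, 1] = intensity
    return pano   # the same (row, col) mapping carries segments back to 3D
```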
Spectrum of classes of point emitters of electromagnetic wave fields.
Castañeda, Román
2016-09-01
The spectrum of classes of point emitters has been introduced as a numerical tool suitable for the design, analysis, and synthesis of non-paraxial optical fields in arbitrary states of spatial coherence. In this paper, the polarization state of planar electromagnetic wave fields is included in the spectrum of classes, thus increasing its modeling capabilities. In this context, optical processing is realized as a filtering on the spectrum of classes of point emitters, performed by the complex degree of spatial coherence and the two-point correlation of polarization, which could be implemented dynamically by using programmable optical devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cornwell, Paris A; Bunn, Jeffrey R; Schmidlin, Joshua E
The December 2010 version of the guide, ORNL/TM-2008/159, by Jeff Bunn, Josh Schmidlin, Camden Hubbard, and Paris Cornwell, has been further revised due to a major change in the GeoMagic Studio software for constructing a surface model. The Studio software update also includes a plug-in module to operate the FARO Scan Arm. Other revisions for clarity were also made. The purpose of this revision document is to guide the reader through the process of laser alignment used by NRSF2 at HFIR and VULCAN at SNS. This system was created to increase the spatial accuracy of the measurement points in a sample, reduce the neutron time used for alignment, improve experiment planning, and reduce operator error. The need for spatial resolution has been driven by the reduction of gauge volumes to the sub-millimeter level, steep strain gradients in some samples, and requests to mount multiple samples within a few days while relating the data from each sample to a common sample coordinate system. The first step in this process involves mounting the sample on an indexer table in a laboratory set up for offline sample mounting and alignment in the same manner as it would be mounted at either instrument. In the shared laboratory, a FARO ScanArm is used to measure the coordinates of points on the sample surface (a 'point cloud'), specific features, and fiducial points. A Sample Coordinate System (SCS) needs to be established first. This is an advantage of the technique, because the SCS can be defined in such a way as to facilitate simple definition of measurement points within the sample. Next, samples are typically mounted to a frame of 80/20, and fiducial points are attached to the sample or frame and then measured in the established sample coordinate system. The laser scan probe on the ScanArm can then be used to scan in an 'as-is' model of the sample as well as the mounting hardware. GeoMagic Studio 12 is the software package used to construct the model from the point cloud the scan arm creates. Once the model, fiducial, and measurement files are created, a special program called SScanSS combines the information and, by simulating the sample on the diffractometer, can help plan the experiment before using neutron time. Finally, the sample is mounted on the relevant stress measurement instrument and the fiducial points are measured again. In the HFIR beam room, a laser tracker is used in conjunction with a program called CAM2 to measure the fiducial points in the NRSF2 instrument's sample positioner coordinate system. SScanSS is then used again to perform a coordinate system transformation of the measurement file locations into the sample positioner coordinate system. A procedure file is then written with the coordinates of the desired measurement locations in the sample positioner coordinate system. This file is often called a script or command file and can be further modified using Excel. It is very important to note that this process is not linear but rather iterative: many of the steps in this guide are interdependent. It is very important to discuss the process as it pertains to the specific sample being measured; what works for one sample may not necessarily work for another. This guide attempts to provide a typical workflow that has been successful in most cases.
Increasing point-count duration increases standard error
Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.
1998-01-01
We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased with point-count duration, for both the cumulative number of individuals and the cumulative number of species, at both locations. Although point counts appear to yield data with standard errors proportional to means, a square-root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
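A minimal numerical sketch of the variance-stabilization point above, using synthetic Poisson counts rather than the Tennessee/MAV data: the standard error of raw counts grows with the mean, while the standard error of square-root-transformed counts stays roughly constant.

```python
# Illustrative only: Poisson-like counts have variance proportional to the mean,
# so SE grows with count duration; sqrt(counts) stabilizes the variance.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mean cumulative counts for increasing count durations (minutes).
for minutes, mean_count in [(3, 5), (5, 9), (10, 18), (20, 35)]:
    counts = rng.poisson(mean_count, size=500)          # simulated point counts
    se_raw = counts.std(ddof=1) / np.sqrt(len(counts))  # SE on the raw scale
    transformed = np.sqrt(counts)                       # variance-stabilizing transform
    se_sqrt = transformed.std(ddof=1) / np.sqrt(len(transformed))
    print(f"{minutes:>2} min: SE(raw) = {se_raw:.3f}, SE(sqrt) = {se_sqrt:.3f}")
# SE(raw) increases with duration, while SE(sqrt) stays nearly flat.
```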
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Hristopulos, Dionissios T.
2009-02-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
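The conditional-simulation idea above can be sketched compactly. The toy below is not the authors' implementation (the field, the 20% sample fraction, and the greedy update rule are assumptions): binary 'spin' values are fixed at sampled grid nodes, and unsampled spins are flipped only when doing so moves the simulated nearest-neighbour correlation energy closer to the sample-derived target, i.e., it minimizes the deviation cost globally while respecting the data locally.

```python
# A minimal Ising-like conditional simulation on a regular grid (illustrative).
import numpy as np

rng = np.random.default_rng(0)
N = 32
truth = np.sign(rng.standard_normal((N, N)))          # stand-in "environmental" field
known = rng.random((N, N)) < 0.2                      # 20% of nodes are sampled

def nn_energy(s):
    """Normalized nearest-neighbour correlation energy of a spin grid."""
    return -(np.mean(s[:, :-1] * s[:, 1:]) + np.mean(s[:-1, :] * s[1:, :])) / 2.0

target = nn_energy(truth)                             # statistics to reproduce globally
sim = np.where(known, truth, np.sign(rng.standard_normal((N, N))))

for _ in range(10000):                                # greedy single-spin relaxation
    i, j = rng.integers(N, size=2)
    if known[i, j]:
        continue                                      # respect sample values locally
    cost_old = (nn_energy(sim) - target) ** 2
    sim[i, j] *= -1
    if (nn_energy(sim) - target) ** 2 > cost_old:
        sim[i, j] *= -1                               # reject flips that worsen the cost

print("target energy:", round(target, 4), "simulated:", round(nn_energy(sim), 4))
```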
A shape-based segmentation method for mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen
2013-07-01
Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, posing great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and computational efficiency, and that it segments pole-like objects particularly well.
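The first stage of the pipeline (per-point geometric features from a local neighbourhood, then SVM classification) can be sketched as follows. The fixed k, the toy plane-plus-pole scene, and the three eigenvalue-based features are assumptions standing in for the paper's optimal-neighbourhood features.

```python
# A minimal sketch: eigenvalue features per point, classified with an SVM.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.svm import SVC

def geometric_features(points, k=15):
    """Linearity/planarity/scattering from the covariance of each k-neighbourhood."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = []
    for nbrs in idx:
        cov = np.cov(points[nbrs].T)
        l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
        l1 = max(l1, 1e-12)
        feats.append([(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1])
    return np.array(feats)

# Toy scene: a plane (ground) and a vertical line (pole-like object).
rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(0, 10, 500), rng.uniform(0, 10, 500), rng.normal(0, .02, 500)]
pole = np.c_[rng.normal(5, .02, 200), rng.normal(5, .02, 200), rng.uniform(0, 5, 200)]
X = geometric_features(np.vstack([plane, pole]))
y = np.r_[np.zeros(500), np.ones(200)]               # 0 = planar, 1 = pole-like

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```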
Bailey, Beth A
2013-10-01
Measurement of carbon monoxide in expired air samples (ECO) is a non-invasive, cost-effective biochemical marker for smoking. Cut-points of 6-10 ppm have been established, though appropriate cut-points for pregnant women have been debated due to metabolic changes. This study assessed whether an ECO cut-point identifying at least 90% of pregnant smokers, and misidentifying fewer than 10% of non-smokers, could be established. Pregnant women (N=167) completed a validated self-report smoking assessment, a urine drug screen for cotinine (UDS), and provided an expired air sample twice during pregnancy. Half of the women reported non-smoking status early (51%) and late (53%) in pregnancy, confirmed by UDS. Using a traditional 8 ppm+ cut-point for the early pregnancy reading, only 1% of non-smokers were incorrectly identified as smokers, but only 56% of all smokers, and 67% of those who smoked 5+ cigarettes in the previous 24 h, were identified. However, at 4 ppm+, only 8% of non-smokers were misclassified as smokers, and 90% of all smokers and 96% of those who smoked 5+ cigarettes in the previous 24 h were identified. False positives were explained by heavy second-hand smoke exposure and marijuana use. Results were similar for late pregnancy ECO, with ROC analysis revealing an area under the curve of .95 for early pregnancy, and .94 for late pregnancy readings. A lower 4 ppm ECO cut-point may be necessary to identify pregnant smokers using expired air samples, and this cut-point appears valid throughout pregnancy. Work is ongoing to validate findings in larger samples, but it appears that, if an appropriate cut-point is used, ECO is a valid method for determining smoking status in pregnancy. Copyright © 2013 Elsevier Ltd. All rights reserved.
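A sketch of how such a cut-point can be derived from an ROC curve; the ECO distributions below are synthetic, not the study data, and the 90%/10% constraints are those quoted in the abstract.

```python
# Illustrative cut-point selection: >=90% sensitivity with <10% false positives.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)
# Hypothetical expired-air CO readings (ppm): non-smokers low, smokers higher.
eco = np.r_[rng.gamma(2.0, 1.0, 85), rng.gamma(6.0, 2.0, 82)]
smoker = np.r_[np.zeros(85), np.ones(82)]

fpr, tpr, thresholds = roc_curve(smoker, eco)
print("AUC:", round(roc_auc_score(smoker, eco), 3))
ok = (tpr >= 0.90) & (fpr < 0.10)                     # both constraints from the abstract
if ok.any():
    print("candidate cut-point (ppm):", round(thresholds[ok][0], 2))
```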
Novel method of realizing metal freezing points by induced solidification
NASA Astrophysics Data System (ADS)
Ma, C. K.
1997-07-01
The freezing point of a pure metal, tf, is the temperature at which the solid and liquid phases are in equilibrium. The purest metal available is actually a dilute alloy. Normally, the liquidus point of a sample, tl, at which the amount of the solid phase in equilibrium with the liquid phase is minute, provides the closest approximation to tf. Thus the experimental realization of tf is a matter of realizing tl. The common method is to cool a molten sample continuously so that it supercools and recalesces. The highest temperature after recalescence is normally the best experimental value of tl. In the realization, supercooling of the sample at the sample container and the thermometer well is desirable for the formation of dual solid-liquid interfaces to thermally isolate the sample and the thermometer. However, the subsequent recalescence of the supercooled sample requires the formation of a certain amount of solid, which is not minute. Obviously, the plateau temperature is not the liquidus point. In this article we describe a method that minimizes supercooling. The condition that provides tl is closely approached so that the latter may be measured. As the temperature of the molten sample approaches the anticipated value of tl, a small solid of the same alloy is introduced into the sample to induce solidification. In general, solidification does not occur as long as the temperature is above or at tl, and occurs as soon as the sample supercools minutely. Thus tl can be obtained, in principle, by observing the temperature at which induced solidification begins. In case the solid is introduced after the sample has supercooled slightly, a slight recalescence results and the subsequent maximum temperature is a close approximation to tl. We demonstrate that the principle of induced solidification is indeed applicable to freezing point measurements by applying it to the design of a copper-freezing-point cell for industrial applications, in which a supercooled sample is reheated and then induced to solidify by the solidification of an auxiliary sample. Further experimental studies are necessary to assess the practical advantages and disadvantages of the induction method.
Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests
Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
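For context, a sketch of the standard minimum-sample-size calculation that underlies such designs, with hypothetical pilot values rather than the variances estimated from the 82 counts.

```python
# Minimum number of point counts for a desired precision (illustrative inputs).
import math

s2 = 25.0            # pilot variance of birds detected per count (assumed)
E = 1.5              # desired half-width of the confidence interval (birds)
z = 1.96             # two-sided 95% confidence

n = math.ceil(z**2 * s2 / E**2)   # n = z^2 * s^2 / E^2
print("minimum number of point counts:", n)   # -> 43 with these inputs
```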
How PowerPoint Is Killing Education
ERIC Educational Resources Information Center
Isseks, Marc
2011-01-01
Although it is essential to incorporate new technologies into the classroom, says Isseks, one trend has negatively affected instruction--the misuse of PowerPoint presentations. The author describes how poorly designed PowerPoint presentations reduce complex thoughts to bullet points and reduce the act of learning to transferring text from slide to…
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
NASA Astrophysics Data System (ADS)
Molinario, G.; Hansen, M. C.; Potapov, P. V.; Tyukavina, A.; Stehman, S.; Barker, B.; Humber, M.
2017-10-01
The rural complex is the inhabited agricultural land cover mosaic found along the network of rivers and roads in the forest of the Democratic Republic of Congo. It is a product of traditional small-holder shifting cultivation. To date, thanks to its distinction from primary forest, this area has been mapped as relatively homogeneous, leaving the proportions of land cover heterogeneity within it unknown. However, the success of strategies for sustainable development, including land use planning and payment for ecosystem services, such as Reduced Emissions from Deforestation and Degradation, depends on the accurate characterization of the impacts of land use on natural resources, including within the rural complex. We photo-interpreted a simple random sample of 1000 points in the established rural complex, using 3106 high resolution satellite images obtained from the National Geospatial-Intelligence Agency, together with 406 images from Google Earth, spanning the period 2008-2016. Results indicate that nationally the established rural complex includes 5% clearings, 10% active fields, 26% fallows, 34% secondary forest, 2% wetland forest, 11% primary forest, 6% grasslands, 3% roads and settlements and 2% commercial plantations. Only a small proportion of sample points were plantations, while other commercial dynamics, such as logging and mining, were not detected in the sample. The area of current shifting cultivation accounts for 76% of the established rural complex. Added to primary forest (11%), this means that 87% of the rural complex is available for shifting cultivation. At the current clearing rate, it would take ~18 years for a complete rotation of the rural complex to occur. Additional pressure on land results in either the cultivation of non-preferred land types within the rural complex (such as wetland forest), or expansion of agriculture into nearby primary forests, with attendant impacts on emissions, habitat loss and other ecosystems services.
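A sketch of the estimation behind such a sample: under simple random sampling, each land-cover proportion carries a binomial standard error, and the quoted ~18-year rotation follows from dividing the available share by the annual clearing share. The proportions are those reported above; the 1.96 multiplier for a 95% interval is an assumption.

```python
# Class proportions +/- 95% margins from a simple random sample of n points,
# plus the rotation-time arithmetic quoted in the abstract.
proportions = {"clearings": 0.05, "active fields": 0.10, "fallows": 0.26,
               "secondary forest": 0.34, "primary forest": 0.11}
n = 1000
for name, p in proportions.items():
    se = (p * (1 - p) / n) ** 0.5                 # SE of a proportion under SRS
    print(f"{name:>16}: {100*p:.0f}% +/- {100*1.96*se:.1f}%")

# Land available for shifting cultivation divided by the share cleared per year.
print("rotation (years):", round(0.87 / 0.05, 1))   # -> 17.4, i.e. ~18 years
```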
Arain, Salma Aslam; Kazi, Tasneem Gul; Afridi, Hassan Imran; Arain, Mariam Shahzadi; Panhwar, Abdul Haleem; Khan, Naeemullah; Baig, Jameel Ahmed; Shah, Faheem
2016-04-01
A simple and rapid dispersive liquid-liquid microextraction procedure based on ionic liquid assisted microemulsion (IL-µE-DLLME) combined with cloud point extraction has been developed for the preconcentration of copper (Cu(2+)) in drinking water and in serum samples of adolescent female hepatitis C (HCV) patients. In this method, a ternary system was developed to form a microemulsion (µE) by the phase inversion method (PIM), using the ionic liquid 1-butyl-3-methylimidazolium hexafluorophosphate ([C4mim][PF6]) and the nonionic surfactant TX-100 (as a stabilizer in aqueous media). The ionic liquid microemulsion (IL-µE) was evaluated through visual assessment, optical light microscopy and spectrophotometry. The Cu(2+) in real water and acid-digested serum samples was complexed with 8-hydroxyquinoline (oxine) and extracted into the IL-µE medium. The phase separation of the stable IL-µE was carried out by the micellar cloud point extraction approach. The influence of different parameters such as pH, oxine concentration, and centrifugation time and rate was investigated. At optimized experimental conditions, the limit of detection and enhancement factor were found to be 0.132 µg/L and 70, respectively, with relative standard deviation <5%. To validate the developed method, certified reference materials (SLRS-4 Riverine water) and human serum (Sero-M10181) were analyzed. The resulting data indicated a non-significant difference between the obtained and certified values of Cu(2+). The developed procedure was successfully applied for the preconcentration and determination of trace levels of Cu(2+) in environmental and biological samples. Copyright © 2015 Elsevier Inc. All rights reserved.
Cost-Benefit Analysis of Computer Resources for Machine Learning
Champion, Richard A.
2007-01-01
Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
2. INTERIOR VIEW OF ENTRY CONTROL POINT (BLDG. 768) FROM ...
2. INTERIOR VIEW OF ENTRY CONTROL POINT (BLDG. 768) FROM SOUTHWEST CORNER - Vandenberg Air Force Base, Space Launch Complex 3, Entry Control Point, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
Farnsworth, G.L.; Nichols, J.D.; Sauer, J.R.; Fancy, S.G.; Pollock, K.H.; Shriner, S.A.; Simons, T.R.; Ralph, C. John; Rich, Terrell D.
2005-01-01
Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point counts in favor of more intensive approaches to counting. However, over the past few years a variety of statistical and methodological developments have begun to provide practical ways of overcoming some of the problems with point counts. We describe some of these approaches, and show how they can be integrated into standard point count protocols to greatly enhance the quality of the information. Several tools now exist for estimation of detection probability of birds during counts, including distance sampling, double observer methods, time-depletion (removal) methods, and hybrid methods that combine these approaches. Many counts are conducted in habitats that make auditory detection of birds much more likely than visual detection. As a framework for understanding detection probability during such counts, we propose separating two components of the probability a bird is detected during a count into (1) the probability a bird vocalizes during the count and (2) the probability this vocalization is detected by an observer. In addition, we propose that some measure of the area sampled during a count is necessary for valid inferences about bird populations. This can be done by employing fixed-radius counts or more sophisticated distance-sampling models. We recommend any studies employing point counts be designed to estimate detection probability and to include a measure of the area sampled.
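The proposed decomposition can be made concrete with a toy calculation (all rates assumed): if singing events are roughly Poisson, the probability of at least one vocalization during the count follows from the song rate, and multiplying by the probability that an observer hears a given song yields the overall detection probability; a fixed-radius count then supplies the area needed for density estimation.

```python
# p(detected) = p(vocalizes during count) * p(vocalization heard); toy numbers.
import math

count_minutes = 5.0
song_rate_per_min = 0.6                   # assumed singing rate
p_vocalizes = 1 - math.exp(-song_rate_per_min * count_minutes)  # P(>=1 song), Poisson
p_heard_given_song = 0.7                  # assumed observer detection probability

p_detect = p_vocalizes * p_heard_given_song
print(f"p(vocalizes) = {p_vocalizes:.3f}, p(detected) = {p_detect:.3f}")

# Density estimation also needs the area sampled, e.g. a fixed-radius count:
radius_m = 50.0
area_ha = math.pi * radius_m**2 / 10_000
print(f"area sampled per count: {area_ha:.2f} ha")
```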
Khan, Sumaira; Kazi, Tasneem G; Baig, Jameel A; Kolachi, Nida F; Afridi, Hassan I; Wadhwa, Sham Kumar; Shah, Abdul Q; Kandhro, Ghulam A; Shah, Faheem
2010-10-15
A cloud point extraction (CPE) method has been developed for the determination of trace quantities of vanadium ions in pharmaceutical formulations (PF), dialysate (DS) and parenteral solutions (PS). The CPE of vanadium (V) using 8-hydroxyquinoline (oxine) as complexing reagent, mediated by the nonionic surfactant Triton X-114, was investigated. The parameters that affect the extraction efficiency of CPE, such as pH of the sample solution, concentrations of oxine and Triton X-114, equilibration temperature and shaking time, were investigated in detail. The validity of the CPE of V was checked by the standard addition method in real samples. The extracted surfactant-rich phase was diluted with nitric acid in ethanol prior to analysis by electrothermal atomic absorption spectrometry. Under these conditions, the preconcentration of 50 mL sample solutions gave an enrichment factor of 125. The lower limit of detection obtained under the optimal conditions was 42 ng/L. The proposed method has been successfully applied to the determination of trace quantities of V in various pharmaceutical preparations with satisfactory results. The concentrations of V in the PF, DS and PS samples were found in the ranges of 10.5-15.2, 0.65-1.32 and 1.76-6.93 µg/L, respectively. 2010 Elsevier B.V. All rights reserved.
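For reference, the two figures of merit quoted above are conventionally computed as follows; the blank readings and calibration slopes in this sketch are invented, and only the 3-sigma LOD criterion and the slope-ratio enrichment factor are standard.

```python
# LOD = 3 * s(blank) / slope; enrichment factor = slope(with CPE) / slope(direct).
import statistics

blank_signals = [0.0021, 0.0024, 0.0019, 0.0023, 0.0020,
                 0.0022, 0.0025, 0.0018, 0.0021, 0.0022]   # 10 blank absorbances
slope_with_cpe = 0.00155        # absorbance per (ng/L), after preconcentration
slope_without = 0.0000124       # absorbance per (ng/L), direct measurement

lod = 3 * statistics.stdev(blank_signals) / slope_with_cpe   # 3-sigma criterion
ef = slope_with_cpe / slope_without                          # enrichment factor
print(f"LOD = {lod:.2f} ng/L, enrichment factor = {ef:.0f}")
```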
Method for visualization and presentation of priceless old prints based on precise 3D scan
NASA Astrophysics Data System (ADS)
Bunsch, Eryk; Sitnik, Robert
2014-02-01
Graphic prints and manuscripts constitute a major part of the cultural heritage objects created by most known civilizations. Their presentation has always been a problem due to their high sensitivity to light and to changes in external conditions (temperature, humidity). Today it is possible to use advanced digitization techniques for the documentation and visualization of such objects. When presentation of the original heritage object is impossible, there is a need for a method that allows documentation, and then presentation to the audience, of all the aesthetic features of the object. During the course of the project, scans of several pages of one of the most valuable books in the collection of the Museum of Warsaw Archdiocese were performed. The book, known as the "Great Dürer Trilogy," consists of three series of woodcuts by Albrecht Dürer. The measurement system used consists of a custom-designed, structured-light-based, high-resolution measurement head with an automated digitization system mounted on an industrial robot. This device was custom built to meet conservators' requirements, especially the absence of ultraviolet or infrared radiation emission in the direction of the measured object. Documentation of one page from the book requires about 380 directional measurements, which constitute about 3 billion sample points. The distance between the points in the cloud is 20 μm; a measurement with this MSD (measurement sampling density) of 2500 points/mm2 makes it possible to show the public the spatial structure of this graphic print. An important aspect is the complexity of the software environment created for data processing, in which massive data sets can be automatically processed and visualized. A very important advantage of the software, which operates directly on point clouds, is the ability to freely manipulate the virtual light source.
Expected antenna utilization and overload
NASA Technical Reports Server (NTRS)
Posner, Edward C.
1991-01-01
The trade-offs between the number of antennas at a Deep Space Network (DSN) Deep-Space Communications Complex and the fraction of continuous coverage provided to a set of hypothetical spacecraft are examined, assuming random placement of the spacecraft passes during the day. The trade-offs are fairly robust with respect to the randomness assumption. A sample result is that a three-antenna complex provides an average of 82.6 percent utilization of facilities and coverage of nine spacecraft that each have 8-hour passes, whereas perfect phasing of the passes would yield 100 percent utilization and coverage. One key point is that sometimes fewer than three spacecraft are visible, so an antenna is idle, while at other times there aren't enough antennas and some spacecraft do without service. This point of view may be useful in helping to size the network or to develop a normalization for a figure of merit of DSN coverage.
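The quoted figure can be approximated with a small Monte Carlo sketch. This is a simplification of the memo's analysis, with assumptions stated in the comments: one uniformly placed 8-hour pass per spacecraft per day, and service limited only by antenna capacity.

```python
# 9 spacecraft, random 8-hour passes, 3 antennas: fraction of demand served.
import numpy as np

rng = np.random.default_rng(3)
n_sc, pass_hours, n_antennas, trials = 9, 8.0, 3, 2000
grid = np.linspace(0, 24, 24 * 60, endpoint=False)     # 1-minute time grid

served = requested = 0.0
for _ in range(trials):
    starts = rng.uniform(0, 24, n_sc)                  # random pass placement
    # vis[s, t] is True while spacecraft s is in view (passes wrap midnight)
    vis = ((grid[None, :] - starts[:, None]) % 24) < pass_hours
    demand = vis.sum(axis=0)                           # spacecraft in view per minute
    served += np.minimum(demand, n_antennas).sum()     # at most 3 tracked at once
    requested += demand.sum()

print(f"utilization/coverage: {100 * served / requested:.1f}%")   # ~82%
```

Because the expected demand (9 spacecraft x 8/24 of the day = 3) equals the number of antennas, utilization and coverage coincide in this toy model, and the simulation lands close to the reported 82.6 percent.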
SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry
NASA Astrophysics Data System (ADS)
Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.
2007-03-01
A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach also is advantageous for generating initial structures for global optimizations via genetic algorithm and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.
Computed Potential Energy Surfaces and Minimum Energy Pathways for Chemical Reactions
NASA Technical Reports Server (NTRS)
Walch, Stephen P.; Langhoff, S. R. (Technical Monitor)
1994-01-01
Computed potential energy surfaces are often required for the computation of such parameters as rate constants as a function of temperature, product branching ratios, and other detailed properties. For some dynamics methods, global potential energy surfaces are required. In this case, it is necessary to obtain the energy at a complete sampling of all the possible arrangements of the nuclei that are energetically accessible, and then a fitting function must be obtained to interpolate between the computed points. In other cases, characterization of the stationary points and the reaction pathway connecting them is sufficient. These properties may be readily obtained using analytical derivative methods. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method to obtain accurate energetics, gives useful results for a number of chemically important systems. The talk will focus on a number of applications, including global potential energy surfaces, H + O2, H + N2, O(3P) + H2, and reaction pathways for complex reactions, including reactions leading to NO and soot formation in hydrocarbon combustion.
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
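A greedy sketch of D-optimality-based time-point selection for a decay model (not necessarily the authors' search strategy; the bi-exponential model and its parameter values are assumptions): at each step, add the time point whose sensitivity row most increases det(J^T J).

```python
# Greedy D-optimal reduction of a 90-point acquisition to 10 time points.
import numpy as np

t_full = np.linspace(0.1, 9.0, 90)                    # full 90-point acquisition
tau1, tau2, A = 0.4, 2.5, 0.6                         # assumed model parameters

def sensitivities(t):
    """d(signal)/d(parameter) rows for the model A*exp(-t/tau1)+(1-A)*exp(-t/tau2)."""
    dA = np.exp(-t / tau1) - np.exp(-t / tau2)
    dtau1 = A * t / tau1**2 * np.exp(-t / tau1)
    dtau2 = (1 - A) * t / tau2**2 * np.exp(-t / tau2)
    return np.c_[dA, dtau1, dtau2]

S = sensitivities(t_full)
chosen = []
for _ in range(10):                                   # reduced design of 10 points
    best, best_det = None, -np.inf
    for i in range(len(t_full)):
        if i in chosen:
            continue
        J = S[chosen + [i]]
        d = np.linalg.det(J.T @ J + 1e-12 * np.eye(3))  # regularized for small subsets
        if d > best_det:
            best, best_det = i, d
    chosen.append(best)

print("selected time points:", np.round(t_full[sorted(chosen)], 2))
```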
Understanding Long-Term Variations in an Elephant Piosphere Effect to Manage Impacts
Landman, Marietjie; Schoeman, David S.; Hall-Martin, Anthony J.; Kerley, Graham I. H.
2012-01-01
Surface water availability is a key driver of elephant impacts on biological diversity. Thus, understanding the spatio-temporal variations of these impacts in relation to water is critical to their management. However, elephant piosphere effects (i.e. the radial pattern of attenuating impact) are poorly described, with few long-term quantitative studies. Our understanding is further confounded by the complexity of systems with elephant (i.e. fenced, multiple water points, seasonal water availability, varying population densities) that likely limit the use of conceptual models to predict these impacts. Using 31 years of data on shrub structure in the succulent thickets of the Addo Elephant National Park, South Africa, we tested elephant effects at a single water point. Shrub structure showed a clear sigmoid response with distance from water, declining at both the upper and lower limits of sampling. Adjacent to water, this decline caused a roughly 300-m radial expansion of the grass-dominated habitats that replace shrub communities. Despite the clear relationship between shrub structure and ecological functioning in thicket, the extent of elephant effects varied between these features with distance from water. Moreover, these patterns co-varied with other confounding variables (e.g. the location of neighboring water points), which limits our ability to predict such effects in the absence of long-term data. We predict that elephant have the ability to cause severe transformation in succulent thicket habitats with abundant water supply and elevated elephant numbers. However, these piosphere effects are complex, suggesting that a more integrated understanding of elephant impacts on ecological heterogeneity may be required before water availability is used as a tool to manage impacts. We caution against the establishment of water points in novel succulent thicket habitats, and advocate a significant reduction in water provisioning at our study site, albeit with greater impacts at each water point. PMID:23028942
NASA Astrophysics Data System (ADS)
Serafini, John; Hossain, A.; James, R. B.; Guziewicz, M.; Kruszka, R.; Słysz, W.; Kochanowska, D.; Domagala, J. Z.; Mycielski, A.; Sobolewski, Roman
2017-07-01
We present our studies on both photoconductive (PC) and electro-optic (EO) responses of (Cd,Mg)Te single crystals. In an In-doped Cd0.92Mg0.08Te single crystal, subpicosecond electrical pulses were optically generated via a PC effect, coupled into a transmission line, and, subsequently, detected using an internal EO sampling scheme, all in the same (Cd,Mg)Te material. For photo-excitation and EO sampling, we used femtosecond optical pulses generated by the same Ti:sapphire laser with the wavelengths of 410 and 820 nm, respectively. The shortest transmission line distance between the optical excitation and EO sampling points was 75 μm. By measuring the transient waveforms at different distances from the excitation point, we calculated the transmission-line complex propagation factor, as well as the THz frequency attenuation factor and the propagation velocity, all of which allowed us to reconstruct the electromagnetic transient generated directly at the excitation point, showing that the original PC transient was subpicosecond in duration with a fall time of ˜500 fs. Finally, the measured EO retardation, together with the amount of the electric-field penetration, allowed us to determine the magnitude of the internal EO effect in our (Cd,Mg)Te crystal. The obtained THz-frequency EO coefficient was equal to 0.4 pm/V, which is at the lower end among the values reported for CdTe-based ternaries, apparently, due to the disorientation of the tested crystal that resulted in the non-optimal EO measurement condition.
MacDonald, Morgan C; Juran, Luke; Jose, Jincy; Srinivasan, Sekar; Ali, Syed I; Aronson, Kristan J; Hall, Kevin
2016-01-01
Point-of-use water treatment has received widespread application in the developing world to help mitigate waterborne infectious disease. This study examines the efficacy of a combined filter and chemical disinfection technology in removing bacterial contaminants, and more specifically changes in its performance resulting from seasonal weather variability. During a 12-month field trial in Chennai, India, mean log-reductions were 1.51 for E. coli and 1.67 for total coliforms, and the highest concentration of indicator bacteria in treated water samples was found during the monsoon season. Analysis of variance revealed significant differences in the microbial load of indicator organisms (coliforms and E. coli) between seasons, storage time since treatment (TST), and samples with and without chlorine residuals. Findings indicate that the bacteriological quality of drinking water treated in the home is determined by a complex interaction of environmental and sociological conditions. Moreover, while the effect of disinfection was independent of season, the impact of TST on water quality was found to be seasonally dependent.
Comparison of efficacy of pulverization and sterile paper point techniques for sampling root canals.
Tran, Kenny T; Torabinejad, Mahmoud; Shabahang, Shahrokh; Retamozo, Bonnie; Aprecio, Raydolfo M; Chen, Jung-Wei
2013-08-01
The purpose of this study was to compare the efficacy of the pulverization and sterile paper point techniques for sampling root canals using 5.25% NaOCl/17% EDTA and 1.3% NaOCl/MTAD (Dentsply, Tulsa, OK) as irrigation regimens. Single-canal extracted human teeth were decoronated and infected with Enterococcus faecalis. Roots were randomly assigned to 2 irrigation regimens: group A with 5.25% NaOCl/17% EDTA (n = 30) and group B with 1.3% NaOCl/MTAD (n = 30). After chemomechanical debridement, bacterial samplings were taken using sterile paper points and pulverized powder of the apical 5 mm root ends. The sterile paper point technique did not show growth in any samples. The pulverization technique showed growth in 24 of the 60 samples. The Fisher exact test showed significant differences between sampling techniques (P < .001). The sterile paper point technique showed no difference between irrigation regimens. However, 17 of the 30 roots in group A and 7 of the 30 roots in group B resulted in growth as detected by pulverization technique. Data showed a significant difference between irrigation regimens (P = .03) in pulverization technique. The pulverization technique was more efficacious in detecting viable bacteria. Furthermore, this technique showed that 1.3% NaOCl/MTAD regimen was more effective in disinfecting root canals. Published by Elsevier Inc.
Matsuoka, Takeshi; Tanaka, Shigenori; Ebina, Kuniyoshi
2015-09-07
Photosystem II (PS II) is a protein complex which evolves oxygen and drives charge separation for photosynthesis employing electron and excitation-energy transfer processes over a wide timescale range from picoseconds to milliseconds. While the fluorescence emitted by the antenna pigments of this complex is known as an important indicator of the activity of photosynthesis, its interpretation was difficult because of the complexity of PS II. In this study, an extensive kinetic model which describes the complex and multi-timescale characteristics of PS II is analyzed through the use of the hierarchical coarse-graining method proposed in the authors' earlier work. In this coarse-grained analysis, the reaction center (RC) is described by two states, open and closed RCs, both of which consist of oxidized and neutral special pairs being in quasi-equilibrium states. Besides, the PS II model at millisecond scale with three-state RC, which was studied previously, could be derived by suitably adjusting the kinetic parameters of electron transfer between tyrosine and RC. Our novel coarse-grained model of PS II can appropriately explain the light-intensity dependent change of the characteristic patterns of fluorescence induction kinetics from O-J-I-P, which shows two inflection points, J and I, between initial point O and peak point P, to O-J-D-I-P, which shows a dip D between J and I inflection points. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Nemoto, Mitsutaka; Nomura, Yukihiro; Hanaoka, Shohei; Masutani, Yoshitaka; Yoshikawa, Takeharu; Hayashi, Naoto; Yoshioka, Naoki; Ohtomo, Kuni
Anatomical point landmarks, as the most primitive anatomical knowledge, are useful for medical image understanding. In this study, we propose a detection method for anatomical point landmarks based on appearance models, which include the gray-level statistical variations at point landmarks and in their surrounding areas. The models are built from the results of principal component analysis (PCA) of sample data sets. In addition, we employed a generative learning method, transforming the ROIs of the sample data. We evaluated our method on 24 data sets of body-trunk CT images and obtained an average sensitivity of 95.8 ± 7.3% over 28 landmarks.
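A toy sketch of the appearance-model idea, using 1-D stand-in patches rather than body-trunk CT ROIs: train a PCA subspace on patches around the landmark, then score candidates by reconstruction error, where low error means landmark-like appearance.

```python
# PCA appearance model: landmark-like patches reconstruct well in the subspace.
import numpy as np

rng = np.random.default_rng(5)
patch_len = 49                                         # flattened ROI around a landmark

# Hypothetical training patches: a shared appearance pattern plus noise.
pattern = np.sin(np.linspace(0, 3 * np.pi, patch_len))
train = pattern + 0.2 * rng.standard_normal((40, patch_len))

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:5]                                         # top 5 principal components

def landmark_score(patch):
    """Low reconstruction error -> patch resembles the landmark appearance."""
    c = basis @ (patch - mean)
    recon = mean + basis.T @ c
    return np.linalg.norm(patch - recon)

good = pattern + 0.2 * rng.standard_normal(patch_len)  # landmark-like candidate
bad = rng.standard_normal(patch_len)                   # background candidate
print("score(landmark-like):", round(landmark_score(good), 3))
print("score(background):  ", round(landmark_score(bad), 3))
```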
Gürkan, Ramazan; Korkmaz, Sema; Altunay, Nail
2016-08-01
A new ultrasonic-thermostatic-assisted cloud point extraction procedure (UTA-CPE) was developed for the preconcentration of trace levels of vanadium (V) and molybdenum (Mo) in milk, vegetables and foodstuffs prior to determination by flame atomic absorption spectrometry (FAAS). The method is based on the ion-association of stable anionic oxalate complexes of V(V) and Mo(VI) with [9-(diethylamino)benzo[a]phenoxazin-5-ylidene]azanium sulfate (Nile blue A) at pH 4.5, and the subsequent extraction of the formed ion-association complexes into the micellar phase of polyoxyethylene(7.5)nonylphenyl ether (PONPE 7.5). UTA-CPE is greatly simplified and accelerated compared to traditional cloud point extraction (CPE). The analytical parameters optimized were solution pH, the concentrations of the complexing reagents (oxalate and Nile blue A), the PONPE 7.5 concentration, electrolyte concentration, sample volume, temperature and ultrasonic power. Under the optimum conditions, the calibration curves for Mo(VI) and V(V) were linear in the concentration ranges of 3-340 µg/L and 5-250 µg/L, with high sensitivity enhancement factors (EFs) of 145 and 115, respectively. The limits of detection (LODs) for Mo(VI) and V(V) were 0.86 and 1.55 µg/L, respectively. The proposed method demonstrated good performance, with relative standard deviations (RSD) ≤3.5% and spiked recoveries of 95.7-102.3%. The accuracy of the method was assessed by analysis of two standard reference materials (SRMs) and recoveries of spiked solutions. The method was successfully applied to the determination of trace amounts of Mo(VI) and V(V) in milk, vegetables and foodstuffs with satisfactory results. Copyright © 2016 Elsevier B.V. All rights reserved.
Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-01-01
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected with the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used for structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978
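The central conversion step, from point cloud to voxel elements, can be sketched in a few lines; the synthetic cloud and the 0.25 m element size are assumptions, not values from the paper.

```python
# Voxelize a point cloud into an occupancy grid of solid (brick) elements.
import numpy as np

rng = np.random.default_rng(2)
cloud = rng.uniform([0, 0, 0], [4.0, 6.0, 3.0], size=(20000, 3))  # fake wall scan (m)
voxel = 0.25                                                      # element size (m)

idx = np.floor(cloud / voxel).astype(int)            # voxel index of every point
occupied = np.unique(idx, axis=0)                    # voxels containing any point
print("voxel elements:", len(occupied))

# Each occupied voxel becomes an 8-node hexahedral element; corner coordinates:
corners = occupied[0] * voxel + np.array([[x, y, z] for x in (0, voxel)
                                          for y in (0, voxel) for z in (0, voxel)])
print("first element corner nodes:\n", corners)
```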
Designing efficient surveys: spatial arrangement of sample points for detection of invasive species
Ludek Berec; John M. Kean; Rebecca Epanchin-Niell; Andrew M. Liebhold; Robert G. Haight
2015-01-01
Effective surveillance is critical to managing biological invasions via early detection and eradication. The efficiency of surveillance systems may be affected by the spatial arrangement of sample locations. We investigate how the spatial arrangement of sample points, ranging from random to fixed grid arrangements, affects the probability of detecting a target...
Vinther, Kristina H; Tveskov, Claus; Möller, Sören; Auscher, Soren; Osmanagic, Armin; Egstrup, Kenneth
2017-06-01
Our aim was to investigate the association between premature atrial complexes and the risk of recurrent stroke or death in patients with ischemic stroke in sinus rhythm. In a prospective cohort study, we used 24-hour Holter recordings to evaluate premature atrial complexes in patients consecutively admitted with ischemic strokes. Excessive premature atrial complexes were defined as >14 premature atrial complexes per hour and 3 or more runs of premature atrial complexes per 24 hours. During follow-up, 48-hour Holter recordings were performed after 6 and 12 months. Among patients in sinus rhythm, the association between excessive premature atrial complexes and the primary end point of recurrent stroke or death was estimated in both crude and adjusted Cox proportional hazards models. We further evaluated excessive premature atrial complexes versus atrial fibrillation in relation to the primary end point. Of the 256 patients included, 89 had atrial fibrillation. Of the patients in sinus rhythm (n = 167), 31 had excessive premature atrial complexes. During a median follow-up of 32 months, 50 patients (30% of patients in sinus rhythm) had recurrent strokes (n = 20) or died (n = 30). In both crude and adjusted models, excessive premature atrial complexes were associated with the primary end point, but not with newly diagnosed atrial fibrillation. Compared with patients in atrial fibrillation, those with excessive premature atrial complexes had similarly high risks of the primary end point. In patients with ischemic stroke and sinus rhythm, excessive premature atrial complexes were associated with a higher risk of recurrent stroke or death. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Carbone, Teresa; Gilio, Michele; Padula, Maria Carmela; Tramontano, Giuseppina; D'Angelo, Salvatore; Pafundi, Vito
2018-05-01
Indirect Immunofluorescence (IIF) is widely considered the Gold Standard for Antinuclear Antibody (ANA) screening. However, high inter-reader variability remains the major disadvantage associated with ANA testing and the main reason for the increasing demand for computer-aided immunofluorescence microscopes. Previous studies proposed quantification of the fluorescence intensity as an alternative to the classical end-point titer evaluation. However, the different distribution of bright/dark light linked to the nature of the self-antigen and its location in the cells results in different mean fluorescence intensities. The aim of the present study was to correlate the Fluorescence Index (F.I.) with end-point titers for each well-defined ANA pattern. Routine serum samples were screened for ANA on HEp-2000 cells using the Immuno Concepts Image Navigator System, and positive samples were serially diluted to assign the end-point titer. A comparison between F.I. and end-point titers related to 10 different staining patterns was made. According to our analysis, good technical performance of F.I. (97% sensitivity and 94% specificity) was found. A significant correlation between quantitative readings of F.I. and end-point titer groups was observed using Spearman's test and regression analysis. A conversion scale from F.I. to end-point titers for each recognized ANA pattern was obtained. The Image Navigator offers the opportunity to improve worldwide harmonization of ANA test results. In particular, digital F.I. allows quantification of ANA titers using just one sample dilution. It could represent valuable support for the routine laboratory and an effective tool to reduce inter- and intra-laboratory variability. Copyright © 2018. Published by Elsevier B.V.
Active control of acoustic field-of-view in a biosonar system.
Yovel, Yossi; Falk, Ben; Moss, Cynthia F; Ulanovsky, Nachum
2011-09-01
Active-sensing systems abound in nature, but little is known about the systematic strategies that these systems use to scan the environment. Here, we addressed this question by studying echolocating bats, animals that have the ability to point their biosonar beam at a confined region of space. We trained Egyptian fruit bats to land on a target, under conditions of varying levels of environmental complexity, and measured their echolocation and flight behavior. The bats modulated the intensity of their biosonar emissions, and the spatial region they sampled, in a task-dependent manner. We report here that Egyptian fruit bats selectively change the emission intensity and the angle between the beam axes of sequentially emitted clicks, according to the distance to the target, and depending on the level of environmental complexity. In so doing, they effectively adjusted the spatial sector sampled by a pair of clicks: the "field-of-view." We suggest that the exact point within the beam that is directed towards an object (e.g., the beam's peak, maximal slope, etc.) is influenced by three competing task demands: detection, localization, and angular scanning, where the third factor is modulated by the field-of-view. Our results suggest that lingual echolocation (based on tongue clicks) is in fact much more sophisticated than previously believed. They also reveal a new parameter under active control in animal sonar: the angle between consecutive beams. Our findings suggest that acoustic scanning of space by mammals is highly flexible and modulated much more selectively than previously recognized.
2013-07-01
drugs at potentially very low concentrations in a variety of complex media such as saliva, blood, urine, and other bodily fluids. These handheld...flow assays, such as pregnancy tests, disease and drug abuse screens, and blood protein markers, exhibit widespread use. Recent parallel advances...gadgets have the potential to lower the cost of diagnosis and save immense amounts of time by removing the need to collect, preserve, and ship samples.
High frequency lateral flow affinity assay using superparamagnetic nanoparticles
NASA Astrophysics Data System (ADS)
Lago-Cachón, D.; Rivas, M.; Martínez-García, J. C.; Oliveira-Rodríguez, M.; Blanco-López, M. C.; García, J. A.
2017-02-01
Lateral flow assay is one of the simplest and most extended techniques in medical diagnosis for point-of-care testing. Although it has been traditionally a positive/negative test, some work has been lately done to add quantitative abilities to lateral flow assay. One of the most successful strategies involves magnetic beads and magnetic sensors. Recently, a new technique of superparamagnetic nanoparticle detection has been reported, based on the increase of the impedance induced by the nanoparticles on a RF-current carrying copper conductor. This method requires no external magnetic field, which reduces the system complexity. In this work, nitrocellulose membranes have been installed on the sensor, and impedance measurements have been carried out during the sample diffusion by capillarity along the membrane. The impedance of the sensor changes because of the presence of magnetic nanoparticles. The results prove the potentiality of the method for point-of-care testing of biochemical substances and nanoparticle capillarity flow studies.
Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction.
Zhou, Bo; Altamirano, Carlos Andres Velez; Zurian, Heber Cruz; Atefi, Seyed Reza; Billing, Erik; Martinez, Fernando Seoane; Lukowicz, Paul
2017-11-09
In this paper, we developed a fully textile sensing fabric for tactile touch sensing as a robot skin to detect human-robot interactions. The sensor covers a 20-by-20 cm2 area with 400 sensitive points and samples at 50 Hz per point. We defined seven gestures, inspired by the social and emotional interactions of typical people-to-people or people-to-pet scenarios. We conducted two groups of mutually blinded experiments, involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors, and temporal features are calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated, and the feature calculation algorithms are analyzed in detail to determine the contribution of each stage and segment. The best performing feature-classifier combination can recognize the gestures with 93.3% accuracy from a known group of participants, and 89.1% from strangers.
A test of alternative estimators for volume at time 1 from remeasured point samples
Francis A. Roesch; Edwin J. Green; Charles T. Scott
1993-01-01
Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293), takes advantage of additional sample...
Schaufeli, W B; Van Dierendonck, D
1995-06-01
In the present study, burnout scores of three samples, as measured with the Maslach Burnout Inventory, were compared: (1) the normative American sample from the test-manual (N = 10,067), (2) the normative Dutch sample (N = 3,892), and (3) a Dutch outpatient sample (N = 142). Generally, the highest burnout scores were found for the outpatient sample, followed by the American and Dutch normative samples, respectively. Slightly different patterns were noted for each of the three components. Probably sampling bias, i.e., the healthy worker effect, or cultural value patterns, i.e., femininity versus masculinity, might be responsible for the results. It is concluded that extreme caution is required when cut-off points are used to classify individuals by burnout scores; only nation-specific and clinically derived cut-off points should be employed.
Selecting the most appropriate time points to profile in high-throughput studies
Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv
2017-01-01
Biological systems are increasingly being studied by high throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation of the expression values at the non-selected points. Further, even though the selection is based only on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
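A greedy sketch of the underlying criterion (TPS itself solves the combinatorial problem in a more principled way; the dense profile and budget below are assumptions): repeatedly add the time point that most reduces the error of reconstructing the dense profile by interpolation through the selected points.

```python
# Greedy time-point selection by spline-reconstruction error (illustrative).
import numpy as np
from scipy.interpolate import CubicSpline

t_dense = np.linspace(0, 10, 41)                       # densely sampled pilot genes
profile = np.sin(t_dense) * np.exp(-0.15 * t_dense)    # hypothetical expression curve

chosen = [0, len(t_dense) - 1]                         # endpoints always kept
while len(chosen) < 8:                                 # budget of 8 time points
    best, best_err = None, np.inf
    for i in range(len(t_dense)):
        if i in chosen:
            continue
        pts = sorted(chosen + [i])
        recon = CubicSpline(t_dense[pts], profile[pts])(t_dense)
        err = np.mean((recon - profile) ** 2)
        if err < best_err:
            best, best_err = i, err
    chosen.append(best)

print("selected times:", np.round(t_dense[sorted(chosen)], 2))
```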
Mort, Brendan C; Autschbach, Jochen
2006-08-09
Vibrational corrections (zero-point and temperature dependent) of the H-D spin-spin coupling constant J(HD) for six transition metal hydride and dihydrogen complexes have been computed from a vibrational average of J(HD) as a function of temperature. Effective (vibrationally averaged) H-D distances have also been determined. The very strong temperature dependence of J(HD) for one of the complexes, [Ir(dmpm)Cp*H2]2+ (dmpm = bis(dimethylphosphino)methane), can be modeled simply by the Boltzmann average of the zero-point vibrationally averaged J(HD) of two isomers. For this complex and four others, the vibrational corrections to J(HD) are shown to be highly significant and lead to improved agreement between theory and experiment in most cases. The zero-point vibrational correction is important for all complexes. Depending on the shape of the potential energy and J-coupling surfaces, for some of the complexes higher vibrationally excited states can also contribute to the vibrational corrections at temperatures above 0 K and lead to a temperature dependence. We identify different classes of complexes where a significant temperature dependence of J(HD) may or may not occur for different reasons. A method is outlined by which the temperature dependence of the H-D spin-spin coupling constant can be determined with standard quantum chemistry software. Comparisons are made with experimental data and previously calculated values where applicable. We also discuss an example where a low-order expansion around the minimum of a complicated potential energy surface appears not to be sufficient for reproducing the experimentally observed temperature dependence.
Phosphorylation of human INO80 is involved in DNA damage tolerance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kato, Dai; Waki, Mayumi; Umezawa, Masaki
Highlights: Depletion of hINO80 significantly reduced PCNA ubiquitination. Depletion of hINO80 significantly reduced the nuclear dot intensity of RAD18 after UV irradiation. Western blot analyses showed a phosphorylated hINO80 C-terminus. Overexpression of a phosphorylation-deficient hINO80 mutant reduced PCNA ubiquitination. -- Abstract: Double strand breaks (DSBs) are the most serious type of DNA damage. DSBs can be generated directly by exposure to ionizing radiation or indirectly by replication fork collapse. The DNA damage tolerance pathway, which is conserved from bacteria to humans, prevents this collapse by overcoming replication blockages. The INO80 chromatin remodeling complex plays an important role in the DNA damage response. The yeast INO80 complex participates in the DNA damage tolerance pathway. The mechanisms regulating the yINO80 complex are not fully understood, but the yeast INO80 complex is necessary for efficient proliferating cell nuclear antigen (PCNA) ubiquitination and for recruitment of Rad18 to replication forks. In contrast, the function of the mammalian INO80 complex in DNA damage tolerance is less clear. Here, we show that human INO80 was necessary for PCNA ubiquitination and recruitment of Rad18 to DNA damage sites. Moreover, the C-terminal region of human INO80 was phosphorylated, and overexpression of a phosphorylation-deficient mutant of human INO80 resulted in decreased ubiquitination of PCNA during DNA replication. These results suggest that the human INO80 complex, like the yeast complex, is involved in the DNA damage tolerance pathway, and that phosphorylation of human INO80 is involved in this pathway. These findings provide new insights into the DNA damage tolerance pathway in mammalian cells.
Turning Points in the Development of Classical Musicians
ERIC Educational Resources Information Center
Gabor, Elena
2011-01-01
This qualitative study investigated the vocational socialization turning points in families of classical musicians. I sampled and interviewed 20 parent-child dyads, for a total of 46 interviews. Data analysis revealed that classical musicians' experiences were marked by 11 turning points that affected their identification with the occupation:…
Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B.; Tasnimuzzaman, Md.; Nordland, Andreas; Begum, Anowara; Jensen, Peter K. M.
2018-01-01
Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profiles using molecular methods, in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from “point-of-drinking” and “source” in 477 study households in routine visits at 6-week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7 day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking compared to source water [OR = 17.24 (95% CI = 7.14–42.89)]. Based on the 7 day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85–29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19–18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should focus on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera. PMID:29616005
CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has been proposed previously. To support implementation of the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
Cloud-point detection using a portable thickness shear mode crystal resonator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mansure, A.J.; Spates, J.J.; Germer, J.W.
1997-08-01
The Thickness Shear Mode (TSM) crystal resonator monitors the crude oil by propagating a shear wave into the oil. The coupling of the shear wave and the crystal vibrations is a function of the viscosity of the oil. By driving the crystal with circuitry that incorporates feedback, it is possible to determine the change from Newtonian to non-Newtonian viscosity at the cloud point. A portable prototype TSM Cloud Point Detector (CPD) has performed flawlessly during field and lab tests, proving the technique is less subjective and operator dependent than the ASTM standard. The TSM CPD, in contrast to standard viscosity techniques, makes the measurement in a closed container capable of maintaining up to 100 psi. The closed container minimizes losses of low molecular weight volatiles, allowing samples (25 ml) to be retested with the addition of chemicals. By cycling/thermal soaking the sample, the effects of thermal history can be investigated and eliminated as a source of confusion. The CPD is portable, suitable for shipping to field offices for use by personnel without special training or experience in cloud point measurements. As such, it can make cloud point data available without the delays and inconvenience of sending samples to special labs. The crystal resonator technology can be adapted to in-line monitoring of cloud point and deposition detection.
Zhang, Xue-Lei; Feng, Wan-Wan; Zhong, Guo-Min
2011-01-01
A GIS-based 500 m x 500 m soil sampling point arrangement with 248 points was set at Wenshu Town of Yuzhou County in central Henan Province, where typical Ustic Cambosols are located. Using digital soil data, a spatial database was established, from which the latitude and longitude of each sampling point were produced for GPS guidance in the field. Soil samples (0-20 cm) were collected from 202 points; bulk density measurements were conducted for 34 randomly selected points, and for the other points, the ten soil properties used as factors for soil quality assessment were analyzed: organic matter, available K, available P, pH, total N, total P, soil texture, cation exchange capacity (CEC), slowly available K, and bulk density. The soil properties were checked with statistical tools and then classified against standard criteria at home and abroad. Factor weights were assigned by the analytic hierarchy process (AHP) method, and the spatial variation of the 10 major soil properties, as well as the soil quality classes and their areas, were mapped by Kriging interpolation. The results showed that the arable Ustic Cambosols in the study area were of good quality: over 95% ranked in the good and medium classes and less than 5% in the poor class.
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method doesn't use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
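To make the information-criterion comparison concrete, here is a small sketch that scores least-squares spline fits over a range of control point counts, assuming i.i.d. Gaussian residuals so that AIC = n ln(RSS/n) + 2p and BIC = n ln(RSS/n) + p ln(n). The knot placement and test signal are invented for illustration.

```python
# Sketch: AIC/BIC over the number of B-spline control points, assuming
# i.i.d. Gaussian residuals. Knot placement and test signal are invented.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def score_spline(x, y, n_ctrl, degree=3):
    # number of spline coefficients = n_interior_knots + degree + 1
    n_interior = n_ctrl - degree - 1
    knots = np.linspace(x[0], x[-1], n_interior + 2)[1:-1]  # interior knots
    spl = LSQUnivariateSpline(x, y, knots, k=degree)
    rss = float(np.sum((spl(x) - y) ** 2))
    n, p = len(x), n_ctrl
    return n * np.log(rss / n) + 2 * p, n * np.log(rss / n) + p * np.log(n)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(6 * x) + 0.05 * rng.standard_normal(200)
scores = {m: score_spline(x, y, m) for m in range(5, 25)}
best_aic = min(scores, key=lambda m: scores[m][0])
best_bic = min(scores, key=lambda m: scores[m][1])  # BIC penalizes p harder
print(best_aic, best_bic)
```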
Yang, Jae-Hyuk; Lim, Hong Chul; Bae, Ji Hoon; Fernandez, Harry; Bae, Tae Soo; Wang, Joon Ho
2011-10-01
Descriptive laboratory study. The femoral anatomic insertion site and the optimal isometric point of the popliteus tendon for posterolateral reconstruction are not well known. The purpose of this study was to determine the relative relationship of the femoral anatomic insertion and the isometric point of the popliteus muscle-tendon complex with the lateral epicondyle of the femur. Thirty unpaired cadaveric knees were dissected to determine the anatomic femoral insertion of the popliteus tendon. The distance and the angle from the lateral epicondyle of the femur to the center of the anatomic insertion of the popliteus tendon were measured using a digital caliper and goniometer. Eight unpaired fresh cadaveric knees were examined to determine the optimal isometric point of the femoral insertion of the popliteus tendon using a computer-controlled motion capture analysis system (Motion Analysis, CA, USA). Distances from the targeted tibial tunnel for popliteus tendon reconstruction to 35 points marked on the lateral surface of the femur were recorded at 0, 30, 60, 90, and 120° of knee flexion. The point with the least excursion (<2.0 mm) was determined as the isometric point. The center of the anatomic insertion and the optimal isometric point for the main fibers of the popliteus tendon were found to be posterior and distal to the lateral epicondyle of the femur. The distance from the lateral epicondyle of the femur to the center of the anatomic femoral insertion of the popliteus tendon was 11.3 ± 1.2 mm (mean ± SD). The angle between the long axis of the femur and the line from the lateral epicondyle to the anatomic femoral insertion of the popliteus tendon was 31.4 ± 5.3°. The isometric points for the femoral insertion of the popliteus muscle-tendon complex were situated posterior and distal to the lateral epicondyle in all 8 knees. The distance between the least-excursion point and the lateral epicondyle was 10.4 ± 1.7 mm. The angle between the long axis of the femur and the line from the lateral epicondyle to the optimal isometric point of the popliteus tendon was 41.3 ± 14.9°. The optimal isometric point for the femoral insertion of the popliteus muscle-tendon complex is situated posterior and distal to the lateral epicondyle of the femur. The femoral tunnel for the "posterolateral corner sling procedure" should be placed at this point to achieve the least amount of graft excursion during knee motion.
2002-01-01
…algorithms to model quantum mechanical systems [1-3], a task that is exponentially complex in the number of particles treated… A starting point… were presented by Succi and Benzi [10,11] and by… (cell size approaches zero). Therefore, from the point of view of the modeler, there ex[ists]… In both cases, the model behaves as expected. …point regarding this particular gate is that when measurements are periodically made… Third, in Section 4…
NASA Astrophysics Data System (ADS)
Mukasa, Samuel B.; Dalziel, Ian W. D.
1996-11-01
Zircon U-Pb and muscovite 40Ar/39Ar isotopic ages have been determined on rocks from the southernmost Andes and South Georgia Island, North Scotia Ridge, to provide absolute time constraints on the kinematic evolution of southwestern Gondwanaland, until now known mainly from stratigraphic relations. The U-Pb systematics of four zircon fractions from one sample show that proto-marginal basin magmatism in the northern Scotia arc, creating the peraluminous Darwin granite suite and submarine rhyolite sequences of the Tobifera Formation, had begun by the Middle Jurassic (164.1 ± 1.7 Ma). Seven zircon fractions from two other Darwin granites are discordant with non-linear patterns, suggesting a complex history of inheritances and Pb loss. Reference lines drawn through these points on concordia diagrams give upper intercept ages of ca. 1500 Ma, interpreted as a minimum age for the inherited zircon component. This component is believed to have been derived from sedimentary rocks in the Gondwanaland margin accretionary wedge that forms the basement of the region, or else directly from the cratonic "back stop" of that wedge. Ophiolitic remnants of the Rocas Verdes marginal basin preserved in the Larsen Harbour complex on South Georgia yield the first clear evidence that Gondwanaland fragmentation had resulted in the formation of oceanic crust in the Weddell Sea region by the Late Jurassic (150 ± 1 Ma). The geographic pattern in the observed age range of 8 to 13 million years in these ophiolitic materials, while not definitive, is in keeping with propagation of the marginal basin floor northwestward from South Georgia Island to the Sarmiento Complex in southern Chile. Rocks of the Beagle granite suite, emplaced post-tectonically within the uplifted marginal basin floor, have complex zircon U-Pb systematics with gross discordances dominated by inheritances in some samples and Pb loss in others. Of eleven samples processed, only two had sufficient amounts of zircon for multiple fractions, and only one yielded colinear points. These points lie close to the lower concordia intercept, for which the age is 68.9 ± 1.0 Ma, but their upper intercept is not well known. Inasmuch as this age is similar to the 40Ar/39Ar age of secondary muscovite growing in extensional fractures of pulled-apart feldspar phenocrysts in a Beagle suite granitic pluton (plateau age is 68.1 ± 0.4 Ma), we interpret the two dates as good time constraints for cooling following a period of extensional deformation probably related to the tectonic denudation of the high-grade metamorphic complex of Cordillera Darwin in Tierra del Fuego.
3D reconstruction from non-uniform point clouds via local hierarchical clustering
NASA Astrophysics Data System (ADS)
Yang, Jiaqi; Li, Ruibo; Xiao, Yang; Cao, Zhiguo
2017-07-01
Raw scanned 3D point clouds are usually irregularly distributed due to essential shortcomings of laser sensors, which poses a great challenge for high-quality 3D surface reconstruction. This paper tackles this problem by proposing a local hierarchical clustering (LHC) method to improve the consistency of point distribution. Specifically, LHC consists of two steps: 1) adaptive octree-based decomposition of 3D space, and 2) hierarchical clustering. The former aims at reducing the computational complexity and the latter transforms the non-uniform point set into a uniform one. Experimental results on real-world scanned point clouds validate the effectiveness of our method from both qualitative and quantitative aspects.
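As a rough stand-in for the LHC idea, the sketch below evens out point density by decomposing space into cells and collapsing each cell to its centroid; it uses a fixed voxel grid rather than the paper's adaptive octree plus hierarchical clustering.

```python
# Simplified stand-in for LHC: decompose space into cells and replace each
# cell's points by their centroid, evening out sampling density. A fixed
# voxel grid is used here instead of the paper's adaptive octree.
import numpy as np

def voxel_downsample(points, cell):
    """points: (N, 3) array; cell: voxel edge length."""
    keys = np.floor(points / cell).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inv.max() + 1, 3))
    counts = np.zeros(inv.max() + 1)
    np.add.at(sums, inv, points)    # accumulate points per occupied voxel
    np.add.at(counts, inv, 1)
    return sums / counts[:, None]   # one centroid per occupied voxel
```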
The effect of different control point sampling sequences on convergence of VMAT inverse planning
NASA Astrophysics Data System (ADS)
Pardo Montero, Juan; Fenwick, John D.
2011-04-01
A key component of some volumetric-modulated arc therapy (VMAT) optimization algorithms is the progressive addition of control points to the optimization. This idea was introduced in Otto's seminal VMAT paper, in which a coarse sampling of control points was used at the beginning of the optimization and new control points were progressively added one at a time. A different form of the methodology is also present in the RapidArc optimizer, which adds new control points in groups called 'multiresolution levels', each doubling the number of control points in the optimization. This progressive sampling accelerates convergence, improving the results obtained, and has similarities with the ordered subset algorithm used to accelerate iterative image reconstruction. In this work we have used a VMAT optimizer developed in-house to study the performance of optimization algorithms which use different control point sampling sequences, most of which fall into three different classes: doubling sequences, which add new control points in groups such that the number of control points in the optimization is (roughly) doubled; Otto-like progressive sampling which adds one control point at a time, and equi-length sequences which contain several multiresolution levels each with the same number of control points. Results are presented in this study for two clinical geometries, prostate and head-and-neck treatments. A dependence of the quality of the final solution on the number of starting control points has been observed, in agreement with previous works. We have found that some sequences, especially E20 and E30 (equi-length sequences with 20 and 30 multiresolution levels, respectively), generate better results than a 5 multiresolution level RapidArc-like sequence. The final value of the cost function is reduced up to 20%, such reductions leading to small improvements in dosimetric parameters characterizing the treatments—slightly more homogeneous target doses and better sparing of the organs at risk.
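The three sequence classes can be made concrete with a small generator sketch; each list entry is the number of control points present at that optimization stage. The start/end counts below are invented, and the paper's exact schedules may differ.

```python
# Sketch of the three control point sampling sequence classes discussed
# above; counts are illustrative assumptions.
def doubling_sequence(n_start, n_final):
    seq = [n_start]
    while seq[-1] * 2 < n_final:
        seq.append(seq[-1] * 2)    # roughly double per multiresolution level
    return seq + [n_final]

def otto_like_sequence(n_start, n_final):
    return list(range(n_start, n_final + 1))      # add one point at a time

def equi_length_sequence(n_start, n_final, levels):
    step = (n_final - n_start) / levels           # same increment per level
    return [round(n_start + step * i) for i in range(1, levels + 1)]

print(doubling_sequence(12, 192))            # RapidArc-like multiresolution
print(equi_length_sequence(12, 192, 20))     # an "E20"-style schedule
```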
The source provenance of an obsidian Eden point from Sierra County, New Mexico
Dolan, Sean Gregory; Berryman, Judy; Shackley, M. Steven
2016-01-02
Eden projectile points associated with the Cody complex are underrepresented in the late Paleoindian record of the American Southwest. EDXRF analysis of an obsidian Eden point from a site in Sierra County, New Mexico demonstrates this artifact is from the Cerro del Medio (Valles Rhyolite) source in the Jemez Mountains. Lastly, we contextualize our results by examining variability in obsidian procurement practices beyond the Cody heartland in southcentral New Mexico.
Optofluidic analysis system for amplification-free, direct detection of Ebola infection
NASA Astrophysics Data System (ADS)
Cai, H.; Parks, J. W.; Wall, T. A.; Stott, M. A.; Stambaugh, A.; Alfson, K.; Griffiths, A.; Mathies, R. A.; Carrion, R.; Patterson, J. L.; Hawkins, A. R.; Schmidt, H.
2015-09-01
The massive outbreak of highly lethal Ebola hemorrhagic fever in West Africa illustrates the urgent need for diagnostic instruments that can identify and quantify infections rapidly, accurately, and with low complexity. Here, we report on-chip sample preparation, amplification-free detection and quantification of Ebola virus on clinical samples using hybrid optofluidic integration. Sample preparation and target preconcentration are implemented on a PDMS-based microfluidic chip (automaton), followed by single nucleic acid fluorescence detection in liquid-core optical waveguides on a silicon chip in under ten minutes. We demonstrate excellent specificity, a limit of detection of 0.2 pfu/mL and a dynamic range of thirteen orders of magnitude, far outperforming other amplification-free methods. This chip-scale approach and reduced complexity compared to gold standard RT-PCR methods is ideal for portable instruments that can provide immediate diagnosis and continued monitoring of infectious diseases at the point-of-care.
Novel pH sensing semiconductor for point-of-care detection of HIV-1 viremia
Gurrala, R.; Lang, Z.; Shepherd, L.; Davidson, D.; Harrison, E.; McClure, M.; Kaye, S.; Toumazou, C.; Cooke, G. S.
2016-01-01
The timely detection of viremia in HIV-infected patients receiving antiviral treatment is key to ensuring effective therapy and preventing the emergence of drug resistance. In high HIV burden settings, the cost and complexity of diagnostics limit their availability. We have developed a novel complementary metal-oxide semiconductor (CMOS) chip based, pH-mediated, point-of-care HIV-1 viral load monitoring assay that simultaneously amplifies and detects HIV-1 RNA. A novel low-buffer HIV-1 pH-LAMP (loop-mediated isothermal amplification) assay was optimised and incorporated into a pH-sensitive CMOS chip. Screening of 991 clinical samples (164 on the chip) yielded a sensitivity of 95% (in vitro) and 88.8% (on-chip) at >1000 RNA copies/reaction across a broad spectrum of HIV-1 viral clades. Median time to detection was 20.8 minutes in samples with >1000 copies RNA. The sensitivity, specificity and reproducibility are close to those required to produce a point-of-care device which would be of benefit in resource-poor regions, and the assay could be performed on a USB stick or similar low-power device. PMID:27829667
Automatic initialization for 3D bone registration
NASA Astrophysics Data System (ADS)
Foroughi, Pezhman; Taylor, Russell H.; Fichtinger, Gabor
2008-03-01
In image-guided bone surgery, sample points collected from the surface of the bone are registered to the preoperative CT model using well-known registration methods such as Iterative Closest Point (ICP). These techniques are generally very sensitive to the initial alignment of the datasets: poor initialization significantly increases the chances of getting trapped in local minima. In order to reduce this risk, the registration is manually initialized by locating the sample points close to the corresponding points on the CT model. In this paper, we present an automatic initialization method that aligns the sample points collected from the surface of the pelvis with a CT model of the pelvis. The main idea is to exploit a mean shape of the pelvis, created from a large number of CT scans, as prior knowledge to guide the initial alignment. The mean shape is constant for all registrations and facilitates the inclusion of application-specific information into the registration process. The CT model is first aligned with the mean shape using the bilateral symmetry of the pelvis and the similarity of multiple projections. The surface points collected using ultrasound are then aligned with the pelvis mean shape. This, in turn, leads to initial alignment of the sample points with the CT model. Experiments using a dry pelvis and two cadavers show that the method can align randomly dislocated datasets closely enough for successful registration. The standard ICP has been used for final registration of the datasets.
Naumann, R; Alexander-Weber, Ch; Eberhardt, R; Giera, J; Spitzer, P
2002-11-01
Routine pH measurements are carried out with pH meter-glass electrode assemblies. In most cases the glass and reference electrodes are thereby fashioned into a single probe, the so-called 'combination electrode' or simply 'the pH electrode'. The use of these electrodes is subject to various effects, described below, producing uncertainties of unknown magnitude. Therefore, the measurement of the pH of a sample requires a suitable calibration by certified standard buffer solutions (CRMs) traceable to primary pH standards. The procedures in use are based on calibration at one point, at two points bracketing the sample pH, and at a series of points, the so-called multi-point calibration. The multi-point calibration (MPC) is recommended if minimum uncertainty and maximum consistency are required over a wide range of unknown pH values. Details of uncertainty computations for the two-point and MPC procedures are given. Furthermore, multi-point calibration is a useful tool to characterise the performance of pH electrodes. This is demonstrated with different commercial pH electrodes. Electronic supplementary material is available if you access this article at http://dx.doi.org/10.1007/s00216-002-1506-5. On that page (frame on the left side), a link takes you directly to the supplementary material.
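A minimal multi-point calibration sketch in the spirit of MPC: fit the electrode response of several certified buffers by least squares, then invert the fit for a sample reading. The buffer values and EMF readings below are invented for illustration.

```python
# Hedged sketch of a multi-point electrode calibration; numbers are invented.
import numpy as np

buffer_pH = np.array([1.68, 4.01, 6.86, 9.18, 12.45])      # certified buffers
emf_mV = np.array([305.2, 168.3, 1.5, -134.8, -326.0])     # measured EMF

coef = np.polyfit(buffer_pH, emf_mV, 1)                    # E = slope*pH + b
slope, intercept = coef

def sample_pH(emf):
    """Invert the calibration line for an unknown sample reading."""
    return (emf - intercept) / slope

resid = emf_mV - np.polyval(coef, buffer_pH)
s = np.sqrt(np.sum(resid ** 2) / (len(buffer_pH) - 2))     # fit std. error
print(sample_pH(-10.0), "+/-", abs(s / slope))             # crude uncertainty
```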
NASA Astrophysics Data System (ADS)
Espinosa-Garcia, J.
Ab initio molecular orbital theory was used to study parts of the reaction between the CH2Br radical and the HBr molecule, and two possibilities were analysed: attack on the hydrogen and attack on the bromine of the HBr molecule. Optimized geometries and harmonic vibrational frequencies were calculated at the second-order Møller-Plesset perturbation theory level, and comparison with available experimental data was favourable. Single-point calculations were then performed at several higher levels of calculation. In the attack on the hydrogen of HBr, two stationary points were located on the direct hydrogen abstraction reaction path: a very weak hydrogen-bonded complex of reactants, C···HBr, close to the reactants, followed by the saddle point (SP). The effects of the level of calculation (method + basis set), spin projection, zero-point energy, thermal corrections (298 K), spin-orbit coupling and basis set superposition error (BSSE) on the energy changes were analysed. Taking the reaction enthalpy (298 K) as reference, agreement with experiment was obtained only when high correlation energy and large basis sets were used. It was concluded that at room temperature (i.e., with zero-point energy and thermal corrections), when the BSSE was included, the complex disappears and the activation enthalpy (298 K) ranges from 0.8 kcal mol-1 to 1.4 kcal mol-1 above the reactants, depending on the level of calculation. It was also concluded that this result is the balance of a complicated interplay of many factors, which are affected by uncertainties in the theoretical calculations. Finally, another possible complex (X complex), which involves the alkyl radical being attracted to the halogen end of HBr (C···BrH), was also explored. It was concluded that this X complex does not exist at room temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang Qun; Liu Shuxia, E-mail: liusx@nenu.edu.cn; Liang Dadong
2012-06-15
A series of lanthanide-organic complexes based on polyoxometalates (POMs), [Ln2(DNBA)4(DMF)8][W6O19] (Ln = La (1), Ce (2), Sm (3), Eu (4), Gd (5); DNBA = 3,5-dinitrobenzoate; DMF = N,N-dimethylformamide), has been synthesized. These complexes consist of [W6O19]2- anions and dimeric [Ln2(DNBA)4(DMF)8]2+ cations. The luminescence properties of 4 are measured in the solid state and in different solutions, respectively. Notably, the emission intensity increases gradually with the increase of solvent permittivity, and this solvent effect can be directly observed by electrospray mass spectrometry (ESI-MS). The ESI-MS analyses show that the eight coordinated solvent DMF units of the dimeric cation are active: they can move away from the dimeric cations and exchange with solvent molecules. Although the POM anions escape from the 3D supramolecular network, the dimeric structure of [Ln2(DNBA)4]2+ remains unchanged in solution. The conservation of red luminescence is attributed to the maintenance of the aggregated-state structures of the dimeric cations. Graphical abstract: 3D POM-based lanthanide-organic complexes exhibit a solvent effect on the luminescence property; the origin of this effect can be understood and explained on the basis of the coordinated active sites identified by ESI-MS. Highlights: The solvent effect on the luminescence property of POM-based lanthanide-organic complexes. ESI-MS analyses illuminate the correlation between structure and luminescence property. The dimeric cations have eight active sites of solvent coordination. The aggregated-state structure of the dimeric cation remains unchanged in solution. Luminescence combined with ESI-MS is a new method for investigating the interaction of complex and solvent.
Locating CVBEM collocation points for steady state heat transfer problems
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.
On the complexity of a combined homotopy interior method for convex programming
NASA Astrophysics Data System (ADS)
Yu, Bo; Xu, Qing; Feng, Guochen
2007-03-01
In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211], a combined homotopy was constructed for solving non-convex programming and convex programming with weaker conditions, without assuming the logarithmic barrier function to be strictly convex and the solution set to be bounded. It was proven that a smooth interior path from an interior point of the feasible set to a K-K-T point of the problem exists. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot solve. However, so far, there is no result on its complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by taking a piecewise technique, under commonly used conditions, polynomiality of a combined homotopy interior point method is given for convex nonlinear programming.
Tunnelling with a negative cosmological constant
NASA Astrophysics Data System (ADS)
Gibbons, G. W.
1996-02-01
The point of this paper is to see what light new results in hyperbolic geometry may throw on gravitational entropy and whether gravitational entropy is relevant for the quantum origin of the universe. We introduce some new gravitational instantons which mediate the birth from nothing of closed universes containing wormholes and suggest that they may contribute to the density matrix of the universe. We also discuss the connection between their gravitational action and the topological and volumetric entropies introduced in hyperbolic geometry. These coincide for hyperbolic 4-manifolds, and increase with increasing topological complexity of the 4-manifold. We raise the question of whether the action also increases with the topological complexity of the initial 3-geometry, measured either by its 3-volume or its Matveev complexity. We point out, in distinction to the non-supergravity case, that universes with domains of negative cosmological constant separated by supergravity domain walls cannot be born from nothing. Finally we point out that our wormholes provide examples of the type of Perpetual Motion machines envisaged by Frolov and Novikov.
The Jeanie Point complex revisited
Dumoulin, Julie A.; Miller, Martha L.
1984-01-01
The so-called Jeanie Point complex is a distinctive package of rocks within the Orca Group, a Tertiary turbidite sequence. The rocks crop out on the southeast coast of Montague Island, Prince William Sound, approximately 3 km northeast of Jeanie Point (loc. 7, fig. 44). These rocks consist dominantly of fine-grained limestone and lesser amounts of siliceous limestone, chert, tuff, mudstone, argillite, and sandstone (fig. 47). The Jeanie Point rocks also differ from those typical of the Orca Group in their fold style. Thus, the Orca Group of the area is isoclinally folded on a large scale (tens to hundreds of meters), whereas the Jeanie Point rocks are tightly folded on a 1- to 3- m-wavelength scale (differences in rock competency may be responsible for this variation in fold style).
Holliday, Trenton W; Hilton, Charles E
2010-06-01
Given the well-documented fact that human body proportions covary with climate (presumably due to the action of selection), one would expect that the Ipiutak and Tigara Inuit samples from Point Hope, Alaska, would be characterized by an extremely cold-adapted body shape. Comparison of the Point Hope Inuit samples to a large (n > 900) sample of European and European-derived, African and African-derived, and Native American skeletons (including Koniag Inuit from Kodiak Island, Alaska) confirms that the Point Hope Inuit evince a cold-adapted body form, but analyses also reveal some unexpected results. For example, one might suspect that the Point Hope samples would show a more cold-adapted body form than the Koniag, given their more extreme environment, but this is not the case. Additionally, univariate analyses seldom show the Inuit samples to be more cold-adapted in body shape than Europeans, and multivariate cluster analyses that include a myriad of body shape variables such as femoral head diameter, bi-iliac breadth, and limb segment lengths fail to effectively separate the Inuit samples from Europeans. In fact, in terms of body shape, the European and the Inuit samples tend to be cold-adapted and tend to be separated in multivariate space from the more tropically adapted Africans, especially those groups from south of the Sahara. Copyright 2009 Wiley-Liss, Inc.
Critical Point Cancellation in 3D Vector Fields: Robustness and Discussion.
Skraba, Primoz; Rosen, Paul; Wang, Bei; Chen, Guoning; Bhatia, Harsh; Pascucci, Valerio
2016-02-29
Vector field topology has been successfully applied to represent the structure of steady vector fields. Critical points, one of the essential components of vector field topology, play an important role in describing the complexity of the extracted structure. Simplifying vector fields via critical point cancellation has practical merit for interpreting the behaviors of complex vector fields such as turbulence. However, there is no effective technique that allows direct cancellation of critical points in 3D. This work fills this gap and introduces the first framework to directly cancel pairs or groups of 3D critical points in a hierarchical manner with a guaranteed minimum amount of perturbation based on their robustness, a quantitative measure of their stability. In addition, our framework does not require the extraction of the entire 3D topology, which contains non-trivial separation structures, and thus is computationally effective. Furthermore, our algorithm can remove critical points in any subregion of the domain whose degree is zero and handle complex boundary configurations, making it capable of addressing challenging scenarios that may not be resolved otherwise. We apply our method to synthetic and simulation datasets to demonstrate its effectiveness.
The Unicellular State as a Point Source in a Quantum Biological System
Torday, John S.; Miller, William B.
2016-01-01
A point source is the central and most important point or place for any group of cohering phenomena. Evolutionary development presumes that biological processes are sequentially linked, but neither directed from, nor centralized within, any specific biologic structure or stage. However, such an epigenomic entity exists and its transforming effects can be understood through the obligatory recapitulation of all eukaryotic lifeforms through a zygotic unicellular phase. This requisite biological conjunction can now be properly assessed as the focal point of reconciliation between biology and quantum phenomena, illustrated by deconvoluting complex physiologic traits back to their unicellular origins. PMID:27240413
Change in the Embedding Dimension as an Indicator of an Approaching Transition
Neuman, Yair; Marwan, Norbert; Cohen, Yohai
2014-01-01
Predicting a transition point in behavioral data should take into account the complexity of the signal being influenced by contextual factors. In this paper, we propose to analyze changes in the embedding dimension as contextual information indicating an approaching transition point, a method called OPtimal Embedding tRANsition Detection (OPERAND). Three texts were processed and translated to time-series of emotional polarity. It was found that changes in the embedding dimension preceded transition points in the data. These preliminary results encourage further research into changes in the embedding dimension as generic markers of an approaching transition point. PMID:24979691
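One way to track the embedding dimension over time, in the spirit of this abstract, is a false-nearest-neighbour estimate computed over sliding windows; the tolerance, thresholds, and window size below are assumptions, not the OPERAND implementation.

```python
# Rough sketch: false-nearest-neighbour embedding dimension over windows.
# A rise in the estimate ahead of a known transition is the proposed marker.
import numpy as np

def delay_embed(x, dim, tau=1):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def fnn_fraction(x, dim, tau=1, rtol=10.0):
    emb, nxt = delay_embed(x, dim, tau), delay_embed(x, dim + 1, tau)
    emb = emb[:len(nxt)]
    false = 0
    for i in range(len(emb)):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))                 # nearest neighbour in dim-space
        if abs(nxt[i, -1] - nxt[j, -1]) / max(d[j], 1e-12) > rtol:
            false += 1                        # neighbour was "false"
    return false / len(emb)

def embedding_dimension(x, max_dim=10):
    for m in range(1, max_dim):
        if fnn_fraction(x, m) < 0.01:         # first dim with ~no false NNs
            return m
    return max_dim

def dims_over_windows(series, win=500, step=100):
    return [embedding_dimension(series[s:s + win])
            for s in range(0, len(series) - win + 1, step)]
```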
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy. Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
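A toy numeric version of the conditioning idea: if a bounding set B with analytically known probability contains the failure region, then P(fail) = P(B) * P(fail | B), and only the conditional factor needs sampling. The failure criterion and bounding box here are invented (the exact answer is 0.02).

```python
# Toy sketch of conditional sampling inside an analytic bounding set.
import numpy as np

rng = np.random.default_rng(0)

def fails(x):                                  # hypothetical failure criterion
    return x[..., 0] + x[..., 1] > 1.6

# Parameters uniform on [-1, 1]^2. Failure needs both coordinates > 0.6,
# so B = [0.6, 1]^2 bounds the failure set and P(B) = 0.2 * 0.2 = 0.04.
p_B = 0.04
xB = rng.uniform(0.6, 1.0, size=(10_000, 2))   # sample only inside B
p_fail = p_B * fails(xB).mean()                # P(B) * P(fail | B)

x = rng.uniform(-1.0, 1.0, size=(10_000, 2))   # naive full-domain Monte Carlo
print(p_fail, fails(x).mean())                 # conditional estimate is tighter
```

Every conditional sample lands near the failure boundary, so far fewer points are spent on regions that cannot fail, which is exactly the computational saving described above.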
Automated Parameter Studies Using a Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosimis, Michael J.; Nemec, Marian
2004-01-01
Computational Fluid Dynamics (CFD) is now routinely used to analyze isolated points in a design space by performing steady-state computations at fixed flight conditions (Mach number, angle of attack, sideslip) for a fixed geometric configuration of interest. This "point analysis" provides detailed information about the flowfield, which aids an engineer in understanding, or correcting, a design. A point analysis is typically performed using high-fidelity methods at a handful of critical design points, e.g. a cruise or landing configuration, or a sample of points along a flight trajectory.
Crossfit analysis: a novel method to characterize the dynamics of induced plant responses.
Jansen, Jeroen J; van Dam, Nicole M; Hoefsloot, Huub C J; Smilde, Age K
2009-12-16
Many plant species show induced responses that protect them against exogenous attacks. These responses involve the production of many different bioactive compounds. Plant species belonging to the Brassicaceae family produce defensive glucosinolates, which may greatly influence their favorable nutritional properties for humans. Each responding compound may have its own dynamic profile and metabolic relationships with other compounds. The chemical background of the induced response is highly complex and may therefore not reveal all the properties of the response in any single model. This study therefore aims to describe the dynamics of the glucosinolate response, measured at three time points after induction in a feral Brassica, by a three-faceted approach based on Principal Component Analysis. First the large-scale aspects of the response are described in a 'global model' and then each time point in the experiment is individually described in 'local models' that focus on phenomena that occur at specific moments in time. Although each local model describes the variation among the plants at one time point as well as possible, the response dynamics are lost. Therefore a novel method called the 'Crossfit' is described that links the local models of different time points to each other. Each element of the described analysis approach reveals different aspects of the response. The crossfit shows that smaller dynamic changes may occur in the response that are overlooked by global models, as illustrated by the analysis of a metabolic profiling dataset of the same samples.
The case for planetary sample return missions. 2. History of Mars.
Gooding, J L; Carr, M H; McKay, C P
1989-08-01
Principal science goals for exploration of Mars are to establish the chemical, isotopic, and physical state of Martian material, the nature of major surface-forming processes and their time scales, and the past and present biological potential of the planet. Many of those goals can only be met by detailed analyses of atmospheric gases and carefully selected samples of fresh rocks, weathered rocks, soils, sediments, and ices. The high-fidelity mineral separations, complex chemical treatments, and ultrasensitive instrument systems required for key measurements, as well as the need to adapt analytical strategies to unanticipated results, point to Earth-based laboratory analyses on returned Martian samples as the best means for meeting the stated objectives.
Stability analysis of an autocatalytic protein model
NASA Astrophysics Data System (ADS)
Lee, Julian
2016-05-01
A self-regulatory genetic circuit, where a protein acts as a positive regulator of its own production, is known to be the simplest biological network with a positive feedback loop. Although at least three components—DNA, RNA, and the protein—are required to form such a circuit, stability analysis of the fixed points of this self-regulatory circuit has been performed only after reducing the system to a two-component system, either by assuming a fast equilibration of the DNA component or by removing the RNA component. Here, stability of the fixed points of the three-component positive feedback loop is analyzed by obtaining eigenvalues of the full three-dimensional Hessian matrix. In addition to rigorously identifying the stable fixed points and saddle points, detailed information about the system can be obtained, such as the existence of complex eigenvalues near a fixed point.
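A hedged sketch of such an analysis: a simple DNA/RNA/protein positive-feedback ODE, with invented rate constants and cooperative (dimer) promoter binding assumed so that three fixed points exist, whose fixed points are located numerically and classified by the eigenvalues of the 3x3 linearization.

```python
# Hedged sketch of fixed-point stability analysis for a three-component
# positive feedback loop. Rate constants are invented; cooperative promoter
# binding is assumed so that the system is bistable.
import numpy as np
from scipy.optimize import fsolve

k_on, k_off = 0.05, 1.0        # promoter binding/unbinding by protein dimer
a0, a1, d_m = 0.02, 5.0, 1.0   # basal/activated transcription, mRNA decay
b, d_p = 1.0, 0.5              # translation rate, protein decay

def rhs(y):
    d, m, p = y                # promoter occupancy, mRNA, protein
    return np.array([k_on * p ** 2 * (1 - d) - k_off * d,
                     a0 + a1 * d - d_m * m,
                     b * m - d_p * p])

def jacobian(y, eps=1e-6):     # numerical 3x3 linearization at y
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = eps
        J[:, j] = (rhs(y + e) - rhs(y - e)) / (2 * eps)
    return J

# Low and high states are typically stable; the middle one is a saddle.
for guess in ([1e-4, 0.02, 0.04], [0.26, 1.32, 2.64], [0.73, 3.67, 7.34]):
    fp = fsolve(rhs, guess)
    eig = np.linalg.eigvals(jacobian(fp))
    print(fp, "stable" if np.all(eig.real < 0) else "unstable/saddle")
```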
Registration algorithm of point clouds based on multiscale normal features
NASA Astrophysics Data System (ADS)
Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua
2015-01-01
The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes in magnitude of multiscale curvatures obtained by principal components analysis. Then a feature descriptor for each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the descriptor similarity of key points in the source and target point clouds. Correspondences are optimized by using a random sample consensus (RANSAC) algorithm and clustering technology. Finally, singular value decomposition is applied to the optimized correspondences so that the rigid transformation matrix between the two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm achieves faster calculation speed, higher registration accuracy, and better noise robustness.
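The final SVD step is the standard Kabsch/Umeyama solution for a rigid transform from matched points; a compact sketch, independent of the paper's descriptor machinery:

```python
# Rigid transform from matched key points via SVD (Kabsch/Umeyama).
import numpy as np

def rigid_transform(src, dst):
    """src, dst: (N, 3) corresponding points; returns R (3x3) and t (3,)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs
```

In a RANSAC-style loop, the transform is fit on random correspondence subsets, the candidate with the most inliers is kept, and a final fit over all inliers refines it.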
NASA Technical Reports Server (NTRS)
Rimskiy-Korsakov, A. V.; Belousov, Y. I.
1973-01-01
A program was compiled for calculating the acoustic pressure levels that might be created by vibrations of complex structures (an assembly of shells and rods) under the influence of a given force, for cases when these fields cannot be measured directly. The acoustic field is determined from the transition frequency and pulse characteristics of the structure in the projection mode. For vibrating systems in which the reciprocity principle holds true, the projection characteristics are equal to the reception characteristics. Characteristics in the receiving mode are calculated on the basis of experimental data on a point pulse space-velocity source (input signal) and the vibration response of the structure (output signal). The space velocity of a pulse source, set at a point in space r where it is necessary to calculate the sound field of the structure p(r,t), is determined by measurements of the acoustic pressure created by a point source at a distance R. The vibration response is measured at the point where the forces F and f exciting the system should act.
Navigable points estimation for mobile robots using binary image skeletonization
NASA Astrophysics Data System (ADS)
Martinez S., Fernando; Jacinto G., Edwar; Montiel A., Holman
2017-02-01
This paper describes the use of image skeletonization for the estimation of all the navigable points inside a mobile robot navigation scene. Those points are used for computing a valid navigation path using standard methods. The main idea is to find the middle and extreme points of the obstacles in the scene, taking into account the robot size, and to create a map of navigable points, in order to reduce the amount of information passed to the planning algorithm. Those points are located by means of the skeletonization of a binary image of the obstacles and the scene background, along with some other digital image processing algorithms. The proposed algorithm automatically gives a variable number of navigable points per obstacle, depending on the complexity of its shape. We also show how the algorithm's parameters can be adjusted to change the final number of resultant key points. The results shown here were obtained by applying different kinds of digital image processing algorithms to static scenes.
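An illustrative version of such a pipeline using scikit-image: free space is eroded by the robot radius, skeletonized, and skeleton endpoints and junctions become candidate navigable points. The function choices and the neighbour-counting trick are assumptions, not the authors' code.

```python
# Illustrative skeleton-based navigable point extraction (assumed pipeline);
# border wrap-around in the neighbour count is ignored for brevity.
import numpy as np
from scipy.ndimage import binary_erosion
from skimage.morphology import skeletonize

def navigable_points(free_space, robot_radius_px):
    """free_space: 2D bool array (True = free); returns (K, 2) pixel coords."""
    safe = binary_erosion(free_space, iterations=robot_radius_px)  # inflate obstacles
    skel = skeletonize(safe)
    nb = sum(np.roll(np.roll(skel, dy, axis=0), dx, axis=1)
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if (dy, dx) != (0, 0))              # 8-neighbour count per pixel
    key = skel & ((nb == 1) | (nb >= 3))         # endpoints and junctions
    return np.argwhere(key)
```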
Language skills of children during the first 12 months after stuttering onset.
Watts, Amy; Eadie, Patricia; Block, Susan; Mensah, Fiona; Reilly, Sheena
2017-03-01
To describe language development in a sample of young children who stutter during the first 12 months after reported stuttering onset. Language production was analysed in a sample of 66 children who stuttered (aged 2-4 years). The sample was identified from a pre-existing prospective, community-based longitudinal cohort. Data were collected at three time points within the first year after stuttering onset. Stuttering severity was measured, and global indicators of expressive language proficiency (length of utterances and grammatical complexity) were derived from the samples and summarised. The language production abilities of the children who stutter were contrasted with normative data. The majority of children's stuttering was rated as mild in severity, with more than 83% of participants demonstrating very mild or mild stuttering at each of the time points studied. The participants demonstrated developmentally appropriate spoken language skills comparable with available normative data. In the first year following the report of stuttering onset, the language skills of the children who were stuttering progressed in a manner consistent with developmental expectations. Copyright © 2016 Elsevier Inc. All rights reserved.
Complexity and Chaos - State-of-the-Art; Glossary
2007-09-01
when we think about emergence we are, in our mind's eye, moving between different vantage points. We see the trees and the forest at DRDC Valcartier TN...permit simple yes/no categorisations (e.g. colour). Can also be used to make decisions where uncertainty occurs (fuzzy control). This is a form of...a specific complex formula across space by colour coding the result of each starting point as convergent or divergent, generating a fractal boundary
Defect states of complexes involving a vacancy on the boron site in boronitrene
NASA Astrophysics Data System (ADS)
Ngwenya, T. B.; Ukpong, A. M.; Chetty, N.
2011-12-01
First principles calculations have been performed to investigate the ground state properties of freestanding monolayer hexagonal boronitrene (h-BN). We have considered monolayers that contain native point defects and their complexes, which form when the point defects bind with the boron vacancy on the nearest-neighbor position. The changes in the electronic structure are analyzed to show the extent of localization of the defect-induced midgap states. The variations in formation energies suggest that defective h-BN monolayers that contain carbon substitutional impurities are the most stable structures, irrespective of the changes in growth conditions. The high energies of formation of the boron vacancy complexes suggest that they are less stable, and their creation by ion bombardment would require high-energy ions compared to point defects. Using the relative positions of the derived midgap levels for the double vacancy complex, it is shown that the quasi-donor-acceptor pair interpretation of optical transitions is consistent with stimulated transitions between electron and hole states in boronitrene.
Diffusion and binding analyzed with combined point FRAP and FCS.
Im, Kang-Bin; Schmidt, Ute; Kang, Moon-Sik; Lee, Ji-Young; Bestvater, Felix; Wachsmuth, Malte
2013-09-01
To quantify the diffusion and reaction properties of biomolecules in living cells more precisely and reliably, a novel closed 3D description of both the bleach and the post-bleach segments of fluorescence recovery after photobleaching (FRAP) data acquired at a point, i.e., a diffraction-limited observation area, termed point FRAP, is presented. It covers a complete coupled reaction-diffusion scheme for mobile molecules undergoing transient or long-term immobilization because of binding. We assess and confirm its feasibility with numerical solutions of the differential equations. By applying this model to free EYFP expressed in HeLa cells using a customized confocal laser scanning microscope that integrates point FRAP and fluorescence correlation spectroscopy (FCS), the applicability is validated by comparison with results from FCS. We show that the results can be improved significantly by taking diffusion during bleaching into consideration and/or by employing a global analysis of series of bleach times. As the point FRAP approach allows data to be obtained with diffraction-limited positioning accuracy, the diffusion and binding properties of the exon-exon junction complex (EJC) components REF2-II and Magoh are obtained at different localizations in the nucleus of MCF7 cells and refine our view of the position-dependent association of the EJC factors with a maturating mRNP complex. Our findings corroborate the concept of combining point FRAP and FCS for a better understanding of the underlying diffusion and binding processes. Copyright © 2013 International Society for Advancement of Cytometry.
Hęś, Marzanna; Gliszczyńska-Świgło, Anna; Gramza-Michałowska, Anna
2017-01-01
Plants are an important source of phenolic compounds. The antioxidant capacities of green tea, thyme and rosemary extracts containing these compounds have been reported earlier. However, there is little accessible information about their activity against lipid oxidation in emulsions and their ability to inhibit the interaction of lipid oxidation products with amino acids. Therefore, the influence of green tea, thyme and rosemary extracts and BHT (butylated hydroxytoluene) on quantitative changes in lysine and methionine in linoleic acid emulsions, at the isoelectric pH of the amino acids and at a pH below it, was investigated. Total phenolic contents in the plant extracts were determined spectrophotometrically using Folin-Ciocalteu's reagent, and individual phenols by HPLC. The level of oxidation of the emulsion was determined by measuring peroxides and TBARS (thiobarbituric acid reactive substances). Methionine and lysine in the system were reacted with sodium nitroprusside and trinitrobenzenesulphonic acid, respectively, and the absorbance of the complexes was measured. The green tea extract had the highest total polyphenol content. Systems containing both antioxidants and amino acids protected linoleic acid more efficiently than the addition of antioxidants alone. Lysine and methionine losses in samples without added antioxidants were lower at their isoelectric points than below those points. Antioxidants decreased the loss of amino acids. The protective effect of the antioxidants towards methionine was higher at the isoelectric pH, whereas towards lysine it was higher at a pH below this point. Green tea, thyme and rosemary extracts exhibit antioxidant activity in linoleic acid emulsions. Moreover, they can be used to inhibit quantitative changes in amino acids in lipid emulsions. However, the antioxidant efficiency of these extracts seems to depend on pH conditions. Further investigations should be carried out to clarify this issue.
Effect of black point on accuracy of LCD displays colorimetric characterization
NASA Astrophysics Data System (ADS)
Li, Tong; Xie, Kai; He, Nannan; Ye, Yushan
2018-03-01
The black point is the point at which the digital drive value of a single RGB channel is 0. Owing to light leakage in liquid-crystal displays (LCDs), the luminance at the black point is not 0; this phenomenon introduces errors into the colorimetric characterization of LCDs, and sampling at low-luminance drive values is affected most strongly. This paper describes the accuracy of the polynomial-model characterization method and the effect of the black point on that accuracy, reporting color-difference results. When the black point is considered in the characterization equations, the maximum color difference is 3.246, which is 2.36 lower than when the black point is not considered. The experimental results show that the accuracy of LCD colorimetric characterization can be improved if the effect of the black point is properly eliminated.
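A hedged sketch of the black-point handling, using a plain linear mapping as a stand-in for the paper's polynomial model; the names and data shapes are assumptions:

```python
# Key idea shown: subtract the measured black-point tristimulus before fitting
# the characterization model, then add it back when predicting, so the leakage
# light is not modeled twice.
import numpy as np

def fit_characterization(rgb, xyz, xyz_black):
    """rgb: (N, 3) drive values; xyz: (N, 3) measured tristimulus values."""
    net = xyz - xyz_black                       # remove the leakage contribution
    A, *_ = np.linalg.lstsq(rgb, net, rcond=None)
    return A                                    # 3x3 characterization matrix

def predict_xyz(A, rgb, xyz_black):
    return rgb @ A + xyz_black                  # re-add the black point
```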
An Analytical Study on an Orthodontic Index: Index of Complexity, Outcome and Need (ICON)
Torkan, Sepide; Pakshir, Hamid Reza; Fattahi, Hamid Reza; Oshagh, Morteza; Momeni Danaei, Shahla; Salehi, Parisa; Hedayati, Zohreh
2015-01-01
Statement of the Problem The validity of the Index of Complexity, Outcome and Need (ICON) which is an orthodontic index developed and introduced in 2000 should be studied in different ethnic groups. Purpose The aim of this study was to perform an analysis on the ICON and to verify whether this index is valid for assessing both the need and complexity of orthodontic treatment in Iran. Materials and Method Five orthodontists were asked to score pre-treatment diagnostic records of 100 patients with a uniform distribution of different types of malocclusions determined by Dental Health Component of the Index of Treatment Need. A calibrated examiner also assessed the need for orthodontic treatment and complexity of the cases based on the ICON index as well as the Index of Orthodontic Treatment Need (IOTN). 10 days later, 25% of the cases were re-scored by the panel of experts and the calibrated orthodontist. Results The weighted kappa revealed the inter-examiner reliability of the experts to be 0.63 and 0.51 for the need and complexity components, respectively. ROC curve was used to assess the validity of the index. A new cut-off point was adjusted at 35 in lieu of 43 as the suggested cut-off point. This cut-off point showed the highest level of sensitivity and specificity in our society for orthodontic treatment need (0.77 and 0.78, respectively), but it failed to define definite ranges for the complexity of treatment. Conclusion ICON is a valid index in assessing the need for treatment in Iran when the cut-off point is adjusted to 35. As for complexity of treatment, the index is not validated for our society. It seems that ICON is a well-suited substitute for the IOTN index. PMID:26331142
Gaikowski, M.P.; Larson, W.J.; Steuer, J.J.; Gingerich, W.H.
2004-01-01
Accurate estimates of drug concentrations in hatchery effluent are critical to assess the environmental risk of hatchery drug discharge resulting from disease treatment. This study validated two simple dilution models to estimate chloramine-T environmental introduction concentrations by comparing measured and predicted chloramine-T concentrations, using the US Geological Survey's Upper Midwest Environmental Sciences Center aquaculture facility effluent as an example. The hydraulic characteristics of our treated raceway and effluent and the accuracy of our water flow rate measurements were confirmed with the marker dye rhodamine WT. We also used the rhodamine WT data to develop dilution models that would (1) estimate the chloramine-T concentration at a given time and location in the effluent system and (2) estimate the average chloramine-T concentration at a given location over the entire discharge period. To test our models, we predicted the chloramine-T concentration at two sample points based on effluent flow and the maintenance of chloramine-T at 20 mg/l for 60 min in the same raceway used with rhodamine WT. The effluent sample points selected (sample points A and B) represented 47 and 100% of the total effluent flow, respectively. Sample point B is analogous to the discharge of a hatchery that does not have a detention lagoon, i.e., the sample site was downstream of the last dilution water addition following treatment. We then applied four chloramine-T flow-through treatments at 20 mg/l for 60 min and measured the chloramine-T concentration in water samples collected every 15 min for about 180 min from the treated raceway and sample points A and B during and after application. The predicted chloramine-T concentration at each sampling interval was similar to the measured concentration at sample points A and B and was generally bounded by the measured 90% confidence intervals. The predicted average chloramine-T concentrations at sample points A and B (2.8 and 1.3 mg/l, respectively) were not significantly different (P > 0.05) from the average measured concentrations (2.7 and 1.3 mg/l, respectively). The close agreement between our predicted and measured chloramine-T concentrations indicates that either of the dilution models could be used to adequately predict the chloramine-T environmental introduction concentration in Upper Midwest Environmental Sciences Center effluent. (C) 2003 Elsevier B.V. All rights reserved.
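The underlying prediction is a mass balance; a minimal sketch with illustrative numbers (the paper's models additionally resolve the time course over the discharge period):

```python
# Steady-state mass balance: the concentration at a downstream, fully mixed
# sample point is the treated-raceway concentration scaled by its share of the
# total effluent flow. Numbers are illustrative, not the study's data.
def diluted_concentration(c_treated, q_treated, q_total):
    """Concentration at a downstream sample point after full mixing."""
    return c_treated * q_treated / q_total

print(diluted_concentration(20.0, 0.13, 1.0))   # 20 mg/l raceway at 13% of flow -> 2.6 mg/l
```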
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin Jing; Han Xiao; Meng Qin
2013-01-15
Five Cd(II)/Zn(II) complexes [Cd(1,2-bdc)(pz)₂(H₂O)]ₙ (1), [Cd1Cd2(btec)(H₂O)₆]ₙ (2), [Cd(3,4-pdc)(H₂O)]ₙ (3), [Zn(2,5-pdc)(H₂O)₄]·2H₂O (4) and {[Zn(2,5-pdc)(H₂O)₂]·H₂O}ₙ (5) (H₂bdc = 1,2-benzenedicarboxylic acid, pz = pyrazole, H₄btec = 1,2,4,5-benzenetetracarboxylic acid, H₂pdc = pyridine-dicarboxylic acid) were hydrothermally synthesized and characterized by single-crystal X-ray diffraction, surface photovoltage spectroscopy (SPS), XRD, TG analysis, IR and UV-vis spectra, and elemental analysis. Structural analyses show that complexes 1-3 are 1D, 2D and 3D Cd(II) coordination polymers, respectively. Complex 4 is a mononuclear Zn(II) complex. Complex 5 is a 3D Zn(II) coordination polymer. The surface photoelectric properties of the complexes were investigated by SPS. The results indicate that all complexes exhibit photoelectric responses in the range of 300-600 nm, which reveals that they all possess certain photoelectric conversion properties. Comparative analyses show that the species and coordination micro-environment of the central metal ion, and the species and properties of the ligands, affect the intensity and range of the photoelectric response. Graphical abstract: Five Cd(II)/Zn(II) complexes have been hydrothermally synthesized and characterized; the photoelectric properties were studied with SPS. Highlights: Five Cd/Zn complexes have been synthesized and characterized. The SPS results indicate they possess obvious photoelectric conversion properties. The species and coordination environment of the central metal ion affect the SPS. The species and properties of the ligands affect the SPS. The SPS are analyzed and assigned using energy-band theory and crystal field theory.
NASA Astrophysics Data System (ADS)
Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.
2017-09-01
This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.
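For context, this is how such a correction factor enters the standard thin-film four-point probe formula; the factor value below is illustrative, the paper's contribution being the computation of these factors for non-ideal geometries:

```python
# Sheet resistance from an in-line four-point probe measurement on a thin film.
# For an infinite thin sheet the geometric correction factor is 1.
import math

def sheet_resistance(voltage, current, f_geom=1.0):
    """Return sheet resistance in ohms per square."""
    return (math.pi / math.log(2)) * (voltage / current) * f_geom

print(sheet_resistance(1.2e-3, 1.0e-3, f_geom=0.92))  # ~5.0 ohm/sq with a table factor
```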
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Ultrasonic emissions during ice nucleation and propagation in plant xylem.
Charrier, Guillaume; Pramsohler, Manuel; Charra-Vaskou, Katline; Saudreau, Marc; Améglio, Thierry; Neuner, Gilbert; Mayr, Stefan
2015-08-01
Ultrasonic acoustic emission analysis enables nondestructive monitoring of damage in dehydrating or freezing plant xylem. We studied acoustic emissions (AE) in freezing stems during ice nucleation and propagation, by combining acoustic and infrared thermography techniques and controlling the ice nucleation point. Ultrasonic activity in freezing samples of Picea abies showed two distinct phases: the first on ice nucleation and propagation (up to 50 AE s(-1), inversely proportional to the distance to the ice nucleation point), and the second (up to 2.5 AE s(-1)) after dissipation of the exothermal heat. Identical patterns were observed in other conifer and angiosperm species. The complex AE patterns are explained by the low water potential of ice at the ice-liquid interface, which induced numerous and strong signals. Ice propagation velocities were estimated via AE (during the first phase) and infrared thermography. Acoustic activity ceased before the second phase, probably because the exothermal heating and the volume expansion of ice caused decreasing tensions. Results indicate cavitation events at the ice front leading to AE. Ultrasonic emission analysis enabled new insights into the complex process of xylem freezing and might be used to monitor ice propagation in natura. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
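One of the factors studied above, point density, is typically varied by randomly thinning the original cloud to a target density before recomputing the estimates; a hedged sketch with illustrative names:

```python
# Randomly subsample a LiDAR point cloud to a target density in points/m^2.
import numpy as np

def thin_to_density(points, area_m2, target_density, seed=7):
    """points: (N, 3) array of x, y, z returns; area_m2: plot footprint area."""
    rng = np.random.default_rng(seed)
    n_keep = min(len(points), int(target_density * area_m2))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]
```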
Optimal Ventilation Control in Complex Urban Tunnels with Multi-Point Pollutant Discharge
DOT National Transportation Integrated Search
2017-10-01
We propose an optimal ventilation control model for complex urban vehicular tunnels with distributed pollutant discharge points. The control problem is formulated as...
Smink, Douglas S; Peyre, Sarah E; Soybel, David I; Tavakkolizadeh, Ali; Vernon, Ashley H; Anastakis, Dimitri J
2012-04-01
Experts become automated when performing surgery, making it difficult to teach complex procedures to trainees. Cognitive task analysis (CTA) enables experts to articulate operative steps and cognitive decisions in complex procedures such as laparoscopic appendectomy, which can then be used to identify central teaching points. Three local surgeon experts in laparoscopic appendectomy were interviewed using critical decision method-based CTA methodology. Interview transcripts were analyzed, and a cognitive demands table (CDT) was created for each expert. The individual CDTs were reviewed by each expert for completeness and then combined into a master CDT. Percentage agreement on operative steps and decision points was calculated for each expert. The experts then participated in a consensus meeting to review the master CDT. Each surgeon expert was asked to identify in the master CDT the most important teaching objectives for junior-level and senior-level residents. The experts' responses for junior-level and senior-level residents were compared using a χ(2) test. The surgeon experts identified 24 operative steps and 27 decision points. Eighteen of the 24 operative steps (75%) were identified by all 3 surgeon experts. The percentage of operative steps identified was high for each surgeon expert (96% for surgeon 1, 79% for surgeon 2, and 83% for surgeon 3). Of the 27 decision points, only 5 (19%) were identified by all 3 surgeon experts. The percentage of decision points identified varied by surgeon expert (78% for surgeon 1, 59% for surgeon 2, and 48% for surgeon 3). When asked to identify key teaching points, the surgeon experts were more likely to identify operative steps for junior residents (9 operative steps and 6 decision points) and decision points for senior residents (4 operative steps and 13 decision points) (P < .01). CTA can deconstruct the essential operative steps and decision points associated with performing a laparoscopic appendectomy. These results provide a framework to identify key teaching principles to guide intraoperative instruction. These learning objectives could be used to guide resident level-appropriate teaching of an essential general surgery procedure. Copyright © 2012 Elsevier Inc. All rights reserved.
Accurate determination of complex materials coefficients of piezoelectric resonators.
Du, Xiao-Hong; Wang, Qing-Ming; Uchino, Kenji
2003-03-01
This paper presents a method of accurately determining the complex piezoelectric and elastic coefficients of piezoelectric ceramic resonators from measurements of the normalized electric admittance Ȳ, i.e., the electric admittance Y of the piezoelectric resonator divided by the angular frequency ω. The coefficients are derived from measurements near three special frequency points that correspond to the maximum and minimum normalized susceptance (B) and the maximum normalized conductance (G). The complex elastic coefficient is determined from the frequencies at these points, and the real and imaginary parts of the piezoelectric coefficient are related, respectively, to the derivative of the susceptance with respect to frequency and to the asymmetry of the conductance near the maximum-conductance point. Measurements on some lead zirconate titanate (PZT) based ceramics are used as examples to demonstrate the calculation and experimental procedures and the comparisons with the standard methods.
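A sketch of the frequency-point extraction step described above, locating the three special frequencies from swept admittance data; the coefficient formulas that use them are in the paper and are not reproduced here:

```python
# Normalize the measured admittance by angular frequency, then find the
# frequencies of maximum/minimum susceptance and maximum conductance.
import numpy as np

def special_frequencies(freq, Y):
    """freq: (N,) Hz; Y: (N,) complex admittance. Returns (f_Bmax, f_Bmin, f_Gmax)."""
    Yn = Y / (2 * np.pi * freq)              # normalized admittance
    G, B = Yn.real, Yn.imag                  # normalized conductance / susceptance
    return freq[np.argmax(B)], freq[np.argmin(B)], freq[np.argmax(G)]
```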
Hierarchical extraction of urban objects from mobile laser scanning data
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia
2015-01-01
Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.
A proof of the Woodward-Lawson sampling method for a finite linear array
NASA Technical Reports Server (NTRS)
Somers, Gary A.
1993-01-01
An extension of the continuous aperture Woodward-Lawson sampling theorem has been developed for a finite linear array of equidistant identical elements with arbitrary excitations. It is shown that by sampling the array factor at a finite number of specified points in the far field, the exact array factor over all space can be efficiently reconstructed in closed form. The specified sample points lie in real space and hence are measurable provided that the interelement spacing is greater than approximately one half of a wavelength. This paper provides insight as to why the length parameter used in the sampling formulas for discrete arrays is larger than the physical span of the lattice points in contrast with the continuous aperture case where the length parameter is precisely the physical aperture length.
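A numerical sketch of the reconstruction the theorem guarantees, assuming the common array-factor convention AF(ψ) = Σₙ aₙ exp(jnψ) (the paper's exact normalization is not reproduced here): sampling at N equispaced points in ψ determines the trigonometric polynomial exactly, and a Dirichlet (DFT) kernel interpolates it everywhere.

```python
# Exact reconstruction of a discrete array factor from N far-field samples.
import numpy as np

N = 8
rng = np.random.default_rng(0)
a = rng.normal(size=N) + 1j * rng.normal(size=N)      # arbitrary element excitations

def AF(psi):                                          # AF(psi) = sum_n a_n e^{j n psi}
    psi = np.atleast_1d(psi)
    return np.sum(a * np.exp(1j * np.outer(psi, np.arange(N))), axis=1)

psi_m = 2 * np.pi * np.arange(N) / N                  # the N specified sample points
samples = AF(psi_m)

def kernel(x):                                        # (1/N) sum_n e^{j n x}, periodic
    x = np.where(np.isclose(np.mod(x, 2 * np.pi), 0.0), 1e-12, x)
    return np.exp(1j * (N - 1) * x / 2) * np.sin(N * x / 2) / (N * np.sin(x / 2))

psi = np.linspace(0.0, 2 * np.pi, 200)
AF_rec = np.array([np.sum(samples * kernel(p - psi_m)) for p in psi])
assert np.allclose(AF_rec, AF(psi))                   # closed-form reconstruction holds
```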
Point detection of bacterial and viral pathogens using oral samples
NASA Astrophysics Data System (ADS)
Malamud, Daniel
2008-04-01
Oral samples, including saliva, offer an attractive alternative to serum or urine for diagnostic testing. This is particularly true for point-of-use detection systems. The various types of oral samples that have been reported in the literature are presented here along with the wide variety of analytes that have been measured in saliva and other oral samples. The paper focuses on utilizing point-detection of infectious disease agents, and presents work from our group on a rapid test for multiple bacterial and viral pathogens by monitoring a series of targets. It is thus possible in a single oral sample to identify multiple pathogens based on specific antigens, nucleic acids, and host antibodies to those pathogens. The value of such a technology for detecting agents of bioterrorism at remote sites is discussed.
Wigner surmises and the two-dimensional homogeneous Poisson point process.
Sakhr, Jamal; Nieminen, John M
2006-04-01
We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 x 2 complex non-Hermitian random matrices.
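The lowest-order case of these identities, that the normalized nearest-neighbor spacing distribution of the 2D homogeneous Poisson process coincides with the GOE Wigner surmise P(s) = (π/2) s exp(−πs²/4), is easy to check by simulation; a minimal sketch assuming scipy, with edge effects ignored:

```python
# Monte Carlo check: the second moment of the normalized nearest-neighbor
# spacings should match the Wigner-surmise value 4/pi.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(50000, 2))   # ~ homogeneous Poisson sample
d, _ = cKDTree(pts).query(pts, k=2)            # column 0 is the point itself
s = d[:, 1] / d[:, 1].mean()                   # spacings normalized to unit mean

print(np.mean(s**2), 4 / np.pi)                # both ~= 1.273
```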
NASA Astrophysics Data System (ADS)
Giovanis, D. G.; Shields, M. D.
2018-07-01
This paper addresses uncertainty quantification (UQ) for problems where scalar (or low-dimensional vector) response quantities are insufficient and, instead, full-field (very high-dimensional) responses are of interest. To do so, an adaptive stochastic simulation-based methodology is introduced that refines the probability space based on Grassmann manifold variations. The proposed method has a multi-element character discretizing the probability space into simplex elements using a Delaunay triangulation. For every simplex, the high-dimensional solutions corresponding to its vertices (sample points) are projected onto the Grassmann manifold. The pairwise distances between these points are calculated using appropriately defined metrics and the elements with large total distance are sub-sampled and refined. As a result, regions of the probability space that produce significant changes in the full-field solution are accurately resolved. An added benefit is that an approximation of the solution within each element can be obtained by interpolation on the Grassmann manifold. The method is applied to study the probability of shear band formation in a bulk metallic glass using the shear transformation zone theory.
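One building block of the method, the pairwise distance between projected full-field solutions, can be sketched with principal angles between dominant singular subspaces; the rank choice and names below are illustrative assumptions, not the authors' code:

```python
# Grassmann distance between two solution snapshots via principal angles.
import numpy as np
from scipy.linalg import subspace_angles

def grassmann_distance(X1, X2, rank=5):
    """X1, X2: solution fields reshaped to 2D arrays with equal row counts."""
    U1 = np.linalg.svd(X1, full_matrices=False)[0][:, :rank]
    U2 = np.linalg.svd(X2, full_matrices=False)[0][:, :rank]
    theta = subspace_angles(U1, U2)          # principal angles (radians)
    return np.linalg.norm(theta)             # geodesic distance on the Grassmannian
```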
Baig, Jameel A; Kazi, Tasneem G; Shah, Abdul Q; Arain, Mohammad B; Afridi, Hassan I; Kandhro, Ghulam A; Khan, Sumaira
2009-09-28
The simple and rapid pre-concentration techniques cloud point extraction (CPE) and solid phase extraction (SPE) were applied for the determination of As(3+) and total inorganic arsenic (iAs) in surface and ground water samples. As(3+) formed a complex with ammonium pyrrolidinedithiocarbamate (APDC) and was extracted into the surfactant-rich phase of the non-ionic surfactant Triton X-114; after centrifugation, the surfactant-rich phase was diluted with 0.1 mol L(-1) HNO(3) in methanol. Total iAs in water samples was adsorbed on titanium dioxide (TiO(2)); after centrifugation, the solid phase was prepared as a slurry for determination. The extracted As species were determined by electrothermal atomic absorption spectrometry. A multivariate strategy was applied to estimate the optimum values of experimental factors for the recovery of As(3+) and total iAs by CPE and SPE. The standard addition method was used to validate the optimized methods. The results showed sufficient recoveries for As(3+) and iAs (>98.0%). The concentration factor in both cases was found to be 40.
NASA Astrophysics Data System (ADS)
Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael
2017-09-01
Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used to approximate background conditions by filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of the approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or a decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data, and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
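A minimal sketch of the conventional (non-repeating) filtering step described above, assuming scipy; the synthetic series and knot count are illustrative:

```python
# Fit a cubic spline with equidistant interior knots ("spline sampling points")
# and subtract it to isolate the superimposed fluctuations.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

t = np.linspace(0.0, 10.0, 500)
series = np.sin(2 * np.pi * t / 10) + 0.3 * np.sin(2 * np.pi * t)  # background + wave

knots = np.linspace(t[1], t[-2], 8)               # equidistant spline sampling points
background = LSQUnivariateSpline(t, series, knots, k=3)(t)
residuals = series - background                   # fluctuations left for analysis
```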
NASA Astrophysics Data System (ADS)
Hamalainen, Sampsa; Geng, Xiaoyuan; He, Juanxia
2017-04-01
Latin Hypercube Sampling (LHS) at variable resolutions for enhanced watershed scale Soil Sampling and Digital Soil Mapping. The LHS approach to assist with digital soil mapping has been developed for some time now; however, the purpose of this work was to complement LHS with the use of covariate datasets at multiple spatial resolutions and with variability in the number of sampling points produced. This allowed specific sets of LHS points to be produced to fulfil the needs of various partners from multiple projects working in the Ontario and Prince Edward Island provinces of Canada. Secondary soil and environmental attributes are critical inputs required for the development of sampling points by LHS. These include a required Digital Elevation Model (DEM) and subsequent covariate datasets produced by a digital terrain analysis performed on the DEM. These additional covariates often include, but are not limited to, Topographic Wetness Index (TWI), Length-Slope (LS) Factor, and Slope, which are continuous data. The number of points created by LHS ranged from 50 to 200, depending on the size of the watershed and, more importantly, the number of soil types found within it. The spatial resolution of the covariates ranged from 5 to 30 m. The iterations within the LHS sampling were run at an optimal level so that the LHS model provided a good spatial representation of the environmental attributes within the watershed. Additional covariates that are categorical in nature, such as external surficial geology data, were also included in the LHS approach. Initial results include the use of 1000 iterations within the LHS model, which was consistently a reasonable value for producing sampling points that represented the environmental attributes well. When working at the same covariate spatial resolution but modifying the desired number of sampling points, the change in point locations showed a strong geospatial relationship when using continuous data. Access to agricultural fields and adjacent land uses is often "pinned" as the greatest deterrent to performing soil sampling for both soil survey and soil attribute validation work. The lack of access can be a result of poor road access and/or geographical conditions that are difficult for field staff to navigate. This remains a simple yet persistent issue for the scientific community and, in particular, soils professionals. Assisting with ease of access to sampling points will be a future contribution to the LHS approach: by removing inaccessible locations from the DEM at the outset, the LHS model can be restricted to locations with access from an adjacent road or trail. To further the approach, a road network geospatial dataset can be included within Geographic Information Systems (GIS) applications to reach already-produced points using a shortest-distance network method. A sketch of the sampling core follows.
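This is a hedged sketch of the basic LHS draw using scipy's QMC module, not the authors' workflow; in conditioned LHS for soil sampling, each drawn row would then be matched to a real raster cell with similar covariate values:

```python
# Stratified Latin Hypercube draw over three continuous covariates.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=42)       # e.g. TWI, LS factor, slope
unit = sampler.random(n=100)                     # 100 points stratified in [0, 1)^3
lo, hi = [0.0, 0.0, 0.0], [25.0, 40.0, 45.0]     # illustrative covariate ranges
targets = qmc.scale(unit, lo, hi)                # rows = target covariate combinations
```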
Smith, W.P.; Wiedenfeld, D.A.; Hanel, P.B.; Twedt, D.J.; Ford, R.P.; Cooper, R.J.; Smith, Winston Paul
1993-01-01
To quantify the efficacy of point count sampling in bottomland hardwood forests, we examined the influence of point count duration on corresponding estimates of the number of individuals and species recorded. To accomplish this we conducted a total of 82 point counts from 7 May to 16 May 1992, distributed among three habitats (Wet, Mesic, Dry) in each of three regions within the lower Mississippi Alluvial Valley (MAV). Each point count consisted of recording the number of individual birds (all species) seen or heard during the initial three minutes and per each minute thereafter, for a period totaling ten minutes. In addition, we included 384 point counts recorded during an 8-week period in each of 3 years (1985-1987) among 56 randomly selected forest patches within the bottomlands of western Tennessee. Each of these point counts consisted of recording the number of individuals (excluding migrating species) during each of four 5-minute intervals, for a period totaling 20 minutes. To estimate minimum sample size, we determined sampling variation at each level (region, habitat, and locality) with the 82 point counts from the lower MAV and applied the procedures of Neter and Wasserman (1974:493; Applied linear statistical models). Neither the cumulative number of individuals nor the number of species per sampling interval attained an asymptote after 10 or 20 minutes of sampling. For western Tennessee bottomlands, total individual and species counts relative to point count duration were similar among years and comparable to the pattern observed throughout the lower MAV. Across the MAV, we recorded a total of 1,621 birds distributed among 52 species, with the majority (872/1,621) representing 8 species. More birds were recorded within 25-50 m than in either of the other distance categories. There was significant variation in numbers of individuals and species among point counts. For both, significant differences between region and patch (nested within region) occurred; neither habitat nor the interaction between habitat and region was significant. For α = 0.05 and β = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total birds (MSE = 9.28) and species (MSE = 3.79), respectively; a precision of 25 percent of the mean could be achieved with 5 counts per factor level. Corresponding sample sizes required to detect differences in rarer species (e.g., Wood Thrush) were 500; for common species (e.g., Northern Cardinal) this same level of precision could be achieved with 100 counts.
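The kind of minimum-sample-size calculation referenced above can be illustrated with a standard two-sample formula; the MSE value below reuses the reported figure for total birds, while the detectable difference is an assumption:

```python
# n per factor level ~ 2 * (z_{alpha/2} + z_beta)^2 * MSE / delta^2,
# for a two-sided comparison with Type I error alpha and power 1 - beta.
from scipy.stats import norm

def n_per_level(mse, delta, alpha=0.05, power=0.90):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * z**2 * mse / delta**2

print(round(n_per_level(mse=9.28, delta=2.0)))   # ~49 counts to detect 2 birds
```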
Teasing Apart Complex Motions using VideoPoint
NASA Astrophysics Data System (ADS)
Fischer, Mark
2002-10-01
Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane will be discussed. Methods for extracting the desired object motion will be given as well as suggestions for shooting more easily analyzable video clips.
Continuum Limit of Total Variation on Point Clouds
NASA Astrophysics Data System (ADS)
García Trillos, Nicolás; Slepčev, Dejan
2016-04-01
We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop the mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when the cut capacity, and more generally total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as the number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. Taking the limit is enabled by a transportation-based metric which allows us to suitably compare functionals defined on different point clouds.
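A small numerical sketch of the discrete object under study: the graph total variation of an indicator function on an ε-neighborhood graph over a random point cloud. The normalization shown is illustrative; the paper derives the scaling under which this quantity converges to the continuum perimeter.

```python
# Graph total variation of a half-space cut on an epsilon-graph.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 2))        # random sample of a measure
eps = 0.08
W = (squareform(pdist(X)) < eps).astype(float)  # edges between nearby points
np.fill_diagonal(W, 0.0)

u = (X[:, 0] < 0.5).astype(float)               # indicator of a half-space cut
gtv = np.sum(W * np.abs(u[:, None] - u[None, :])) / (2 * len(X) ** 2 * eps)
print(gtv)                                      # compare with the continuum perimeter
```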
NASA Astrophysics Data System (ADS)
Sgambitterra, Emanuele; Piccininni, Antonio; Guglielmi, Pasquale; Ambrogio, Giuseppina; Fragomeni, Gionata; Villa, Tomaso; Palumbo, Gianfranco
2018-05-01
Cranial implants are custom prostheses characterized by quite high geometrical complexity and small thickness; at the same time, aesthetic and mechanical requirements have to be met. Titanium alloys are largely adopted for such prostheses, as they can be processed via different manufacturing technologies. In the present work, cranial prostheses were manufactured by Super Plastic Forming (SPF) and Single Point Incremental Forming (SPIF). To assess the mechanical performance of the cranial prostheses, drop tests under different load conditions were conducted on flat samples to investigate the effect of the blank thickness. Numerical simulations were also run for comparison purposes. The mechanical performance of the cranial implants manufactured by SPF and SPIF could be predicted using the drop test data and information about the thickness evolution of the formed parts: the SPIFed prosthesis showed a lower maximum deflection and a higher maximum force, while the SPFed prostheses showed a lower absorbed energy.
Hartmann, Georg; Schuster, Michael
2013-01-25
The determination of metallic nanoparticles in environmental samples requires sample pretreatment that ideally combines pre-concentration and species selectivity. With cloud point extraction (CPE) using the surfactant Triton X-114 we present a simple and cost effective separation technique that meets both criteria. Effective separation of ionic gold species and Au nanoparticles (Au-NPs) is achieved by using sodium thiosulphate as a complexing agent. The extraction efficiency for Au-NP ranged from 1.01 ± 0.06 (particle size 2 nm) to 0.52 ± 0.16 (particle size 150 nm). An enrichment factor of 80 and a low limit of detection of 5 ng L(-1) is achieved using electrothermal atomic absorption spectrometry (ET-AAS) for quantification. TEM measurements showed that the particle size is not affected by the CPE process. Natural organic matter (NOM) is tolerated up to a concentration of 10 mg L(-1). The precision of the method expressed as the standard deviation of 12 replicates at an Au-NP concentration of 100 ng L(-1) is 9.5%. A relation between particle concentration and the extraction efficiency was not observed. Spiking experiments showed a recovery higher than 91% for environmental water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Soil moisture optimal sampling strategy for Sentinel 1 validation super-sites in Poland
NASA Astrophysics Data System (ADS)
Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Lipiec, Jerzy; Usowicz, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan
2014-05-01
Soil moisture (SM) exhibits high temporal and spatial variability that depends not only on the rainfall distribution, but also on the topography of the area, the physical properties of the soil, and vegetation characteristics. This large variability prevents reliable estimation of SM in the surface layer from ground point measurements, especially at large spatial scales. Remote sensing can estimate the spatial distribution of SM in the surface layer better than point measurements, but it requires validation. This study attempts to characterize the SM distribution by determining its spatial variability in relation to the number and location of ground point measurements. The strategy takes into account gravimetric and TDR measurements with different sampling steps, and different numbers and distributions of measuring points, at the scales of an arable field, a wetland and a commune (areas of 0.01, 1 and 140 km2, respectively), under different SM conditions. Mean SM values were only weakly sensitive to changes in the number and arrangement of sampling points, but the parameters describing dispersion responded more significantly. Spatial analysis showed autocorrelations of SM whose lengths depended on the number and distribution of points within the adopted grids. Directional analysis revealed differentiated anisotropy of SM for different grids and numbers of measuring points. It can therefore be concluded that both the number of samples and their layout across the experimental area are reflected in the parameters characterizing the SM distribution. This suggests the need for at least two sampling variants, differing in the number and positioning of the measurement points, with at least 20 points in each; above this figure, the standard error and the range of spatial variability show little change as the number of samples increases. The gravimetric method gives a more varied distribution of SM than TDR measurements. It should be noted that reducing the number of samples in the measuring grid flattens the SM distribution from both methods and increases the estimation error at the same time. A grid of sensors for permanent measurement should include points with similar SM distributions in their vicinity. The number of points, the maximum correlation ranges and the acceptable estimation error should be taken into account when choosing the measurement points. The adopted distribution of measurement points should be verified, and if necessary adjusted, by performing additional measuring campaigns during dry and wet periods. The presented approach seems appropriate for the creation of regional-scale test (super) sites to validate products of satellites equipped with C-band SAR (Synthetic Aperture Radar) with spatial resolution suited to the single-field scale, for example ERS-1, ERS-2, Radarsat and Sentinel-1, which is to be launched in the next few months. The work was partially funded by the Government of Poland through an ESA Contract under the PECS ELBARA_PD project No. 4000107897/13/NL/KML.
Correction for slope in point and transect relascope sampling of downed coarse woody debris
Goran Stahl; Anna Ringvall; Jeffrey H. Gove; Mark J. Ducey
2002-01-01
In this article, the effect of sloping terrain on estimates in point and transect relascope sampling (PRS and TRS, respectively) is studied. With these inventory methods, a wide angle relascope is used either from sample points (PRS) or along survey lines (TRS). Characteristics associated with line-shaped objects on the ground are assessed, e.g., the length or volume...
Automated Approach to Very High-Order Aeroacoustic Computations. Revision
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2001-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to develop and implement, automatically, very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
NASA Astrophysics Data System (ADS)
Saarinen, N.; Vastaranta, M.; Näsi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M. A.; Luoma, V.; Tommaselli, A. M. G.; Imai, N. N.; Ribeiro, E. A. W.; Guimarães, R. B.; Holopainen, M.; Hyyppä, J.
2017-10-01
Biodiversity is commonly equated with species diversity, but in forest ecosystems variability in structural and functional characteristics can also be treated as a measure of biodiversity. Small unmanned aerial vehicles (UAVs) provide a means of characterizing a forest ecosystem at high spatial resolution, permitting the physical characteristics of the ecosystem to be measured from a biodiversity viewpoint. The objective of this study is to examine the applicability of photogrammetric point clouds and hyperspectral imaging acquired with a small UAV helicopter for mapping biodiversity indicators, such as structural complexity and the amount of deciduous and dead trees, at plot level in southern boreal forests. The standard deviation of tree heights within a sample plot, used as a proxy for structural complexity, was the most accurately derived biodiversity indicator, with a mean error of 0.5 m and a standard deviation of 0.9 m. The volume predictions for deciduous and dead trees were underestimated by 32.4 m3/ha and 1.7 m3/ha, respectively, with standard deviations of 50.2 m3/ha for deciduous and 3.2 m3/ha for dead trees. Spectral features describing brightness (i.e. higher reflectance values) were prevalent in feature selection, but several wavelengths were represented. It can thus be concluded that structural complexity can be predicted reliably, although it should be expected to be underestimated, with photogrammetric point clouds obtained with a small UAV. Additionally, the plot-level volume of dead trees can be predicted with a small mean error, whereas identifying deciduous species was more challenging at plot level.
Detectability of Forest Birds from Stationary Points in Northern Wisconsin
Amy T. Wolf; Robert W. Howe; Gregory J. Davis
1995-01-01
Estimation of avian densities from point counts requires information about the distance at which birds can be detected by the observer. Detection distances also are important for designing the spacing of point counts in a regional sampling scheme. We examined the relationship between distance and detectability for forest songbirds in northern Wisconsin. Like previous...
Kristine L. Pilgrim; William J. Zielinski; Mary J. Mazurek; Frederick V. Schlexer; Michael K. Schwartz
2006-01-01
The Point Arena mountain beaver (Aplodontia rufa nigra) is an endangered subspecies. Efforts to recover this sub-species will be aided by advances in molecular genetics, specifically the ability to estimate population size using noninvasive genetic sampling. Here we report on the development of nine polymorphic loci for the Point Arena mountain...
Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently
2013-01-01
Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.
A complex fermionic tensor model in d dimensions
NASA Astrophysics Data System (ADS)
Prakash, Shiroman; Sinha, Ritam
2018-02-01
In this note, we study a melonic tensor model in d dimensions based on three-index Dirac fermions with a four-fermion interaction. Summing the melonic diagrams at strong coupling allows one to define a formal large-N saddle point in arbitrary d and calculate the spectrum of scalar bilinear singlet operators. For d = 2 - ɛ the theory is an infrared fixed point, which we find has a purely real spectrum that we determine numerically for arbitrary d < 2, and analytically as a power series in ɛ. The theory appears to be weakly interacting when ɛ is small, suggesting that fermionic tensor models in 1 dimension can be studied in an ɛ expansion. For d > 2, the spectrum can still be calculated using the saddle point equations, which may define a formal large-N ultraviolet fixed point analogous to the Gross-Neveu model in d > 2. For 2 < d < 6, we find that the spectrum contains at least one complex scalar eigenvalue (similar to the complex eigenvalue present in the bosonic tensor model recently studied by Giombi, Klebanov and Tarnopolsky) which indicates that the theory is unstable. We also find that the fixed point is weakly interacting when d = 6 (or more generally d = 4n + 2) and has a real spectrum for 6 < d < 6.14, which we present as a power series in ɛ in 6 + ɛ dimensions.
Methyl-CpG island-associated genome signature tags
Dunn, John J
2014-05-20
Disclosed is a method for analyzing the organismic complexity of a sample through analysis of the nucleic acid in the sample. In the disclosed method, through a series of steps, including digestion with a type II restriction enzyme, ligation of capture adapters and linkers and digestion with a type IIS restriction enzyme, genome signature tags are produced. The sequences of a statistically significant number of the signature tags are determined and the sequences are used to identify and quantify the organisms in the sample. Various embodiments of the invention described herein include methods for using single point genome signature tags to analyze the related families present in a sample, methods for analyzing sequences associated with hyper- and hypo-methylated CpG islands, methods for visualizing organismic complexity change in a sampling location over time and methods for generating the genome signature tag profile of a sample of fragmented DNA.
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
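The contrast between random and systematic sample points can be illustrated with a short numerical sketch. The Python snippet below is an illustration only: it uses a Fibonacci-lattice rule as a stand-in for Conroy's closed symmetric patterns, whose exact construction differs, and compares the two sampling schemes on a smooth two-dimensional integrand with a known integral of 1.

```python
import numpy as np

def f(x, y):
    # Smooth 2-D test integrand; exact integral over [0,1]^2 is 1.0
    return (np.pi**2 / 4.0) * np.sin(np.pi * x) * np.sin(np.pi * y)

rng = np.random.default_rng(0)
N = 987  # Fibonacci number, convenient for the lattice below

# Monte Carlo: sample points distributed randomly over the region
xy = rng.random((N, 2))
mc_estimate = f(xy[:, 0], xy[:, 1]).mean()

# Systematic rule (Korobov/Fibonacci lattice): points form a regular
# pattern that fills the unit square, loosely analogous to Conroy's
# symmetric point sets (the exact Conroy pattern is different).
k = np.arange(N)
lattice_x = (k / N) % 1.0
lattice_y = (k * 610.0 / N) % 1.0  # 610 = Fib(15), 987 = Fib(16)
lattice_estimate = f(lattice_x, lattice_y).mean()

print(f"Monte Carlo: {mc_estimate:.5f}, lattice: {lattice_estimate:.5f}, exact: 1.0")
```

For smooth integrands the systematic point set typically converges faster than random sampling at the same number of points, which is the practical appeal noted in the abstract.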
Cook, David A; Sorensen, Kristi J; Wilkinson, John M; Berger, Richard A
2013-11-25
Answering clinical questions affects patient-care decisions and is important to continuous professional development. The process of point-of-care learning is incompletely understood. To understand what barriers and enabling factors influence physician point-of-care learning and what decisions physicians face during this process. Focus groups with grounded theory analysis. Focus group discussions were transcribed and then analyzed using a constant comparative approach to identify barriers, enabling factors, and key decisions related to physician information-seeking activities. Academic medical center and outlying community sites. Purposive sample of 50 primary care and subspecialist internal medicine and family medicine physicians, interviewed in 11 focus groups. Insufficient time was the main barrier to point-of-care learning. Other barriers included patient comorbidities and contexts, the volume of available information, not knowing which resource to search, doubt that the search would yield an answer, difficulty remembering questions for later study, and inconvenient access to computers. Key decisions were whether to search (reasons to search included infrequently seen conditions, practice updates, complex questions, and patient education), when to search (before, during, or after the clinical encounter), where to search (with the patient present or in a separate room), what type of resource to use (colleague or computer), what specific resource to use (influenced first by efficiency and second by credibility), and when to stop. Participants noted that key features of efficiency (completeness, brevity, and searchability) are often in conflict. Physicians perceive that insufficient time is the greatest barrier to point-of-care learning, and efficiency is the most important determinant in selecting an information source. Designing knowledge resources and systems to target key decisions may improve learning and patient care.
Buelow, Janice M; Johnson, Cynthia S; Perkins, Susan M; Austin, Joan K; Dunn, David W
2013-04-01
Caregivers of children with both epilepsy and learning problems need assistance to manage their child's complex medical and mental health problems. We tested the cognitive behavioral intervention "Creating Avenues for Parent Partnership" (CAPP) which was designed to help caregivers develop knowledge as well as the confidence and skills to manage their child's condition. The CAPP intervention consisted of a one-day cognitive behavioral program and three follow-up group sessions. The sample comprised 31 primary caregivers. Caregivers reported that the program was useful (mean = 3.66 on a 4-point scale), acceptable (mean = 4.28 on a 5-point scale), and "pretty easy" (mean = 1.97 on a 4-point scale). Effect sizes were small to medium in paired t tests (comparison of intervention to control) and paired analysis of key variables in the pre- and post-tests. The CAPP program shows promise in helping caregivers build skills to manage their child's condition. Copyright © 2013 Elsevier Inc. All rights reserved.
Beekman, Christopher R.; Matta, Murali K.; Thomas, Christopher D.; Mohammad, Adil; Stewart, Sharron; Xu, Lin; Chockalingam, Ashok; Shea, Katherine; Sun, Dajun; Jiang, Wenlei; Patel, Vikram; Rouse, Rodney
2017-01-01
Relative biodistribution of FDA-approved innovator and generic sodium ferric gluconate (SFG) drug products was investigated to identify differences in tissue distribution of iron after intravenous dosing to rats. Three equal cohorts of 42 male Sprague-Dawley rats were created with each cohort receiving one of three treatments: (1) the innovator SFG product dosed intravenously at a concentration of 40 mg/kg; (2) the generic SFG product dosed intravenously at a concentration of 40 mg/kg; (3) saline dosed intravenously at equivalent volume to SFG products. Sampling time points were 15 min, 1 h, 8 h, 1 week, two weeks, four weeks, and six weeks post-treatment. Six rats from each group were sacrificed at each time point. Serum, femoral bone marrow, lungs, brain, heart, kidneys, liver, and spleen were harvested and evaluated for total iron concentration by ICP-MS. The ICP-MS analytical method was validated with linearity, range, accuracy, and precision. Results were determined for mean iron concentrations (µg/g) and mean total iron (whole tissue) content (µg/tissue) for each tissue of all groups at each time point. A percent of total distribution to each tissue was calculated for both products. At any given time point, the overall percent iron concentration distribution did not vary between the two SFG drugs by more than 7% in any tissue. Overall, this study demonstrated similar tissue biodistribution for the two SFG products in the examined tissues. PMID:29283393
Performance Evaluation of sUAS Equipped with Velodyne HDL-32E LiDAR Sensor
NASA Astrophysics Data System (ADS)
Jozkow, G.; Wieczorek, P.; Karpina, M.; Walicka, A.; Borkowski, A.
2017-08-01
The Velodyne HDL-32E laser scanner is used increasingly often as the main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. Accuracy assessment was conducted in four aspects: impact of sensors on theoretical point cloud accuracy, trajectory reconstruction quality, and internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating the 3D position error given the errors of the sensors used. The quality of trajectory reconstruction was assessed by comparing position and attitude differences from the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was also determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. Test data consisted of point clouds collected in two separate flights performed over the same area. The experiments showed that, in the tested UAS, trajectory reconstruction, especially attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of point clouds collected during both test flights was better than 10 cm; thus, the investigated UAS fits the mapping-grade category.
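The internal-accuracy step described above, fitting planes to point cloud samples from planar surfaces, reduces to a total-least-squares plane fit. A minimal Python sketch, with a hypothetical noisy planar patch standing in for a real LiDAR sample:

```python
import numpy as np

def plane_fit_rms(points):
    """Fit a plane to an (N, 3) point array by total least squares (SVD)
    and return the RMS of the orthogonal residuals, a common proxy for
    internal point cloud accuracy."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The right singular vector with the smallest singular value is the normal
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = centered @ normal  # signed point-to-plane distances
    return np.sqrt(np.mean(residuals**2))

# Hypothetical planar patch with 3 cm Gaussian noise
rng = np.random.default_rng(1)
xy = rng.uniform(0, 5, size=(1000, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0, 0.03, size=1000)
sample = np.column_stack([xy, z])
print(f"plane-fit RMS: {plane_fit_rms(sample):.3f} m")
```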
Research study on stabilization and control: Modern sampled data control theory
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.; Yackel, R. A.
1973-01-01
A numerical analysis of spacecraft stability parameters was conducted. The analysis is based on a digital approximation obtained by point-by-point state comparison. The technique used is that of approximating a continuous data system by a sampled data model through comparison of the states of the two systems. Application of the method to the digital redesign of the simplified one-axis dynamics of the Skylab is presented.
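The core idea of matching the states of a continuous system and its sampled-data model at the sample instants can be sketched as follows. This Python example uses a generic double integrator as a stand-in for the simplified one-axis dynamics (the actual Skylab model is not given in the abstract) and checks that a zero-order-hold discretization reproduces the continuous states point by point:

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete

# Continuous double integrator, a generic stand-in for simplified
# one-axis rigid-body dynamics
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

dt = 0.1  # sampling period
Ad, Bd, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), dt, method='zoh')

# Exact propagation of the continuous system over one sample interval,
# for piecewise-constant input, via the augmented matrix exponential
M = expm(np.block([[A, B], [np.zeros((1, 3))]]) * dt)

# Point-by-point state comparison at the sample instants
x_cont = np.zeros((2, 1))
x_disc = np.zeros((2, 1))
for k in range(50):
    u = np.array([[1.0 if k < 25 else -1.0]])
    x_cont = M[:2, :2] @ x_cont + M[:2, 2:] @ u
    x_disc = Ad @ x_disc + Bd @ u
print("max state mismatch at sample points:", np.abs(x_cont - x_disc).max())
```

For a zero-order-hold input the mismatch is at machine precision; for other input assumptions the point-by-point comparison quantifies the approximation error.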
Comparison of VFA titration procedures used for monitoring the biogas process.
Lützhøft, Hans-Christian Holten; Boe, Kanokwan; Fang, Cheng; Angelidaki, Irini
2014-05-01
Titrimetric determination of volatile fatty acid (VFA) contents is a common way to monitor a biogas process. However, digested manure from co-digestion biogas plants has a complex matrix with high concentrations of interfering components, leading to varying results when different titration procedures are used. Currently, no standardized procedure exists, and it is therefore difficult to compare performance among plants. The aim of this study was to evaluate four titration procedures (for determination of VFA levels in digested manure samples) and compare the results with gas chromatographic (GC) analysis. Two of the procedures are commonly used in biogas plants and two are discussed in the literature. The results showed that optimal titration results were obtained when 40 mL of four-times-diluted digested manure was gently stirred (200 rpm). Results from samples with different VFA concentrations (1-11 g/L) showed linear correlation between titration results and GC measurements. However, determination of VFA by titration generally overestimated the VFA contents compared with GC measurements when samples had low VFA concentrations, i.e. around 1 g/L. The accuracy of titration increased when samples had high VFA concentrations, i.e. around 5 g/L. It was further found that the studied ionisable interfering components had the least effect on titration when the sample had a high VFA concentration. In contrast, bicarbonate, phosphate and lactate had a significant effect on titration accuracy at low VFA concentrations. An extended 5-point titration procedure with pH correction was best at handling interferences from bicarbonate, phosphate and lactate at low VFA concentrations. Conversely, the simplest titration procedure, with only two pH end-points, showed the highest accuracy among all titration procedures at high VFA concentrations. All in all, if the composition of the digested manure sample is not known, the procedure with only two pH end-points should be the procedure of choice, due to its simplicity and accuracy. Copyright © 2014 Elsevier Ltd. All rights reserved.
The Topology of Three-Dimensional Symmetric Tensor Fields
NASA Technical Reports Server (NTRS)
Lavin, Yingmei; Levy, Yuval; Hesselink, Lambertus
1994-01-01
We study the topology of 3-D symmetric tensor fields. The goal is to represent their complex structure by a simple set of carefully chosen points and lines analogous to vector field topology. The basic constituents of tensor topology are the degenerate points, or points where eigenvalues are equal to each other. First, we introduce a new method for locating 3-D degenerate points. We then extract the topological skeletons of the eigenvector fields and use them for a compact, comprehensive description of the tensor field. Finally, we demonstrate the use of tensor field topology for the interpretation of the two-force Boussinesq problem.
Martínez-Sánchez, Jose M; Fu, Marcela; Ariza, Carles; López, María J; Saltó, Esteve; Pascual, José A; Schiaffino, Anna; Borràs, Josep M; Peris, Mercè; Agudo, Antonio; Nebot, Manel; Fernández, Esteve
2009-01-01
To assess the optimal cut-point for salivary cotinine concentration to identify smoking status in the adult population of Barcelona. We performed a cross-sectional study of a representative sample (n=1,117) of the adult population (>16 years) in Barcelona (2004-2005). This study gathered information on active and passive smoking by means of a questionnaire and a saliva sample for cotinine determination. We analyzed sensitivity and specificity according to sex, age, smoking status (daily and occasional), and exposure to second-hand smoke at home. ROC curves and the area under the curve were calculated. The prevalence of smokers (daily and occasional) was 27.8% (95% CI: 25.2-30.4%). The optimal cut-point to discriminate smoking status was 9.2 ng/ml (sensitivity=88.7% and specificity=89.0%). The area under the ROC curve was 0.952. The optimal cut-point was 12.2 ng/ml in men and 7.6 ng/ml in women. The optimal cut-point was higher at ages with a greater prevalence of smoking. Daily smokers had a higher cut-point than occasional smokers. The optimal cut-point to discriminate smoking status in the adult population is 9.2 ng/ml, with sensitivities and specificities around 90%. The cut-point was higher in men and in younger people. The cut-point increases with higher prevalence of daily smokers.
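A cut-point of this kind is typically read off an ROC curve; one common criterion (not necessarily the exact one used in this study) is Youden's J = sensitivity + specificity - 1. A minimal Python sketch with synthetic, purely illustrative cotinine values:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical salivary cotinine values (ng/ml); the distributions and
# sample sizes are illustrative, not the study's data
rng = np.random.default_rng(7)
cotinine = np.concatenate([rng.lognormal(0.5, 1.0, 800),   # non-smokers
                           rng.lognormal(5.0, 1.0, 300)])  # smokers
smoker = np.concatenate([np.zeros(800), np.ones(300)])

fpr, tpr, thresholds = roc_curve(smoker, cotinine)
j = tpr - fpr                     # Youden's J statistic
best = np.argmax(j)
print(f"optimal cut-point: {thresholds[best]:.1f} ng/ml")
print(f"sensitivity: {tpr[best]:.3f}, specificity: {1 - fpr[best]:.3f}")
print(f"AUC: {roc_auc_score(smoker, cotinine):.3f}")
```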
Convex Hull Aided Registration Method (CHARM).
Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian
2017-09-01
Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve non-rigid deformation in the form of thin-plate spline of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.
Active Control of Acoustic Field-of-View in a Biosonar System
Yovel, Yossi; Falk, Ben; Moss, Cynthia F.; Ulanovsky, Nachum
2011-01-01
Active-sensing systems abound in nature, but little is known about systematic strategies that are used by these systems to scan the environment. Here, we addressed this question by studying echolocating bats, animals that have the ability to point their biosonar beam to a confined region of space. We trained Egyptian fruit bats to land on a target, under conditions of varying levels of environmental complexity, and measured their echolocation and flight behavior. The bats modulated the intensity of their biosonar emissions, and the spatial region they sampled, in a task-dependent manner. We report here that Egyptian fruit bats selectively change the emission intensity and the angle between the beam axes of sequentially emitted clicks, according to the distance to the target, and depending on the level of environmental complexity. In so doing, they effectively adjusted the spatial sector sampled by a pair of clicks—the “field-of-view.” We suggest that the exact point within the beam that is directed towards an object (e.g., the beam's peak, maximal slope, etc.) is influenced by three competing task demands: detection, localization, and angular scanning—where the third factor is modulated by field-of-view. Our results suggest that lingual echolocation (based on tongue clicks) is in fact much more sophisticated than previously believed. They also reveal a new parameter under active control in animal sonar—the angle between consecutive beams. Our findings suggest that acoustic scanning of space by mammals is highly flexible and modulated much more selectively than previously recognized. PMID:21931535
Page, Norman J; Talkington, Raymond W.
1984-01-01
Samples of spinel lherzolite, harzburgite, dunite, and chromitite from the Bay of Islands, Lewis Hills, Table Mountain, Advocate, North Arm Mountain, White Hills Peridotite, Point Rousse, Great Bend and Betts Cove ophiolite complexes in Newfoundland were analyzed for the platinum-group elements (PGE) Pd, Pt, Rh, Ru and Ir. The ranges of concentration (in ppb) observed for all rocks are: less than 0.5 to 77 (Pd), less than 1 to 120 (Pt), less than 0.5 to 20 (Rh), less than 100 to 250 (Ru) and less than 20 to 83 (Ir). Chondrite-normalized PGE ratios suggest differences between rock types and between complexes. Samples of chromitite and dunite show relative enrichment in Ru and Ir and relative depletion in Pt and Pd.
Fessenden, S W; Hackmann, T J; Ross, D A; Foskolos, A; Van Amburgh, M E
2017-09-01
Microbial samples from 4 independent experiments in lactating dairy cattle were obtained and analyzed for nutrient composition, AA digestibility, and AA profile after multiple hydrolysis times ranging from 2 to 168 h. Similar bacterial and protozoal isolation techniques were used for all isolations. Omasal bacteria and protozoa samples were analyzed for AA digestibility using a new in vitro technique. Multiple time point hydrolysis and least squares nonlinear regression were used to determine the AA content of omasal bacteria and protozoa, and equivalency comparisons were made against single time point hydrolysis. Formalin was used in 1 experiment, which negatively affected AA digestibility and likely limited the complete release of AA during acid hydrolysis. The mean AA digestibility was 87.8 and 81.6% for non-formalin-treated bacteria and protozoa, respectively. Preservation of microbe samples in formalin likely decreased recovery of several individual AA. Results from the multiple time point hydrolysis indicated that Ile, Val, and Met hydrolyzed at a slower rate compared with other essential AA. Single time point hydrolysis was found to be nonequivalent to multiple time point hydrolysis when considering biologically important changes in estimated microbial AA profiles. Several AA, including Met, Ile, and Val, were underpredicted using AA determination after a single 24-h hydrolysis. Models for predicting postruminal supply of AA might need to consider potential bias present in the postruminal AA flow literature when AA determinations are performed after single time point hydrolysis and when formalin is used as a preservative for microbial samples. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
A Secret 3D Model Sharing Scheme with Reversible Data Hiding Based on Space Subdivision
NASA Astrophysics Data System (ADS)
Tsai, Yuan-Yu
2016-03-01
Secret sharing is a highly relevant research field, and its application to 2D images has been thoroughly studied. However, secret sharing schemes have not kept pace with the advances of 3D models. With the rapid development of 3D multimedia techniques, extending the application of secret sharing schemes to 3D models has become necessary. In this study, an innovative secret 3D model sharing scheme for point geometries based on space subdivision is proposed. Each point in the secret point geometry is first encoded into a series of integer values that fall within [0, p - 1], where p is a predefined prime number. The share values are derived by substituting the specified integer values for all coefficients of the sharing polynomial. The surface reconstruction and the sampling concepts are then integrated to derive a cover model with sufficient model complexity for each participant. Finally, each participant has a separate 3D stego model with embedded share values. Experimental results show that the proposed technique supports reversible data hiding and the share values have higher levels of privacy and improved robustness. This technique is simple and has proven to be a feasible secret 3D model sharing scheme.
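The share-generation step is a polynomial secret-sharing construction over GF(p). The sketch below shows the classic Shamir variant (one secret per polynomial, reconstruction by Lagrange interpolation at x = 0); the paper's scheme differs in packing the encoded coordinate values into all coefficients of the sharing polynomial. The prime 257 is illustrative only; a real scheme would use a larger p.

```python
import random

P = 257  # small prime for illustration

def make_shares(secret, k, n):
    """Split one integer in [0, P-1] into n shares, any k of which
    reconstruct it (Shamir's scheme over GF(P))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# Each quantized point coordinate becomes the secret of one polynomial
coord = 123
shares = make_shares(coord, k=3, n=5)
print(reconstruct(random.sample(shares, 3)) == coord)  # True
```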
An increase of intelligence measured by the WPPSI in China, 1984–2006
Liu, Jianghong; Yang, Hua; Li, Linda; Chen, Tunong; Lynn, Richard
2017-01-01
Normative data for 5–6 year olds on the Chinese Preschool and Primary Scale of Intelligence (WPPSI) are reported for samples tested in 1984 and 2006. There was a significant increase in Full Scale IQ of 4.53 points over the 22 year period, representing a gain of 2.06 IQ points per decade. There were also significant increases in Verbal IQ of 4.27 points and in Performance IQ of 4.08 points. PMID:29416189
Research on optimal DEM cell size for 3D visualization of loess terraces
NASA Astrophysics Data System (ADS)
Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei
2009-10-01
In order to represent complex artificial terrains like the loess terraces of Shanxi Province in northwest China, a new 3D visual method, the Terraces Elevation Incremental Visual Method (TEIVM), is put forth by the authors. 406 elevation points and 14 enclosed constrained lines are sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. In order to visualize the loess terraces well using an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs are converted to a grid-based DEM (G-DEM) using different combinations of cell size and EIV with a linear interpolating method, the Bilinear Interpolation Method (BIM). Our case study shows that the new visual method can visualize the loess terrace steps very well when the combination of cell size and EIV is reasonable. The optimal combination is a cell size of 1 m and an EIV of 6 m. Results of the case study also show that the cell size should be smaller than half of both the average terrace width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the average terrace height. The TEIVM and the results above are of great significance to the highly refined visualization of artificial terrains like loess terraces.
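The BIM step used to resample the terrain onto a grid is standard bilinear interpolation; a minimal sketch (with a hypothetical 2 × 2 elevation patch):

```python
import numpy as np

def bilinear(dem, x, y):
    """Bilinear interpolation of a grid DEM at fractional cell
    coordinates (x, y); dem[row, col] with row ~ y, col ~ x."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    z00 = dem[y0, x0]
    z10 = dem[y0, x0 + 1]
    z01 = dem[y0 + 1, x0]
    z11 = dem[y0 + 1, x0 + 1]
    return (z00 * (1 - dx) * (1 - dy) + z10 * dx * (1 - dy)
            + z01 * (1 - dx) * dy + z11 * dx * dy)

dem = np.array([[10.0, 12.0],
                [14.0, 18.0]])
print(bilinear(dem, 0.5, 0.5))  # 13.5, the mean of the four corners
```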
METHOD AND MEANS FOR RECOGNIZING COMPLEX PATTERNS
Hough, P.V.C.
1962-12-18
This patent relates to a method and means for recognizing a complex pattern in a picture. The picture is divided into framelets, each framelet being sized so that any segment of the complex pattern therewithin is essentially a straight line. Each framelet is scanned to produce an electrical pulse for each point scanned on the segment therewithin. Each of the electrical pulses of each segment is then transformed into a separate straight line to form a plane transform in a pictorial display. Each line in the plane transform of a segment is positioned laterally so that a point on the line midway between the top and the bottom of the pictorial display occurs at a distance from the left edge of the pictorial display equal to the distance of the generating point in the segment from the left edge of the framelet. Each line in the plane transform of a segment is inclined in the pictorial display at an angle to the vertical whose tangent is proportional to the vertical displacement of the generating point in the segment from the center of the framelet. The coordinate position of the point of intersection of the lines in the pictorial display for each segment is determined and recorded. The sum total of said recorded coordinate positions is representative of the complex pattern. (AEC)
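The transform described in this patent is what is now called the Hough transform. Below is a compact sketch of the modern form; note that it uses the rho-theta (normal) parameterization due to Duda and Hart rather than the patent's slope-intercept plane transform, but the voting principle is the same: collinear points map to loci that intersect at one cell in parameter space.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200):
    """Accumulate the (rho, theta) Hough transform of a set of edge
    points; peaks correspond to groups of collinear points."""
    points = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(*points.max(axis=0))
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        # Each point votes along its sinusoid rho = x cos(t) + y sin(t)
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, thetas, rho_max

# Points on the line y = x: expect a peak at theta = 135 degrees
pts = [(i, i) for i in range(50)]
acc, thetas, _ = hough_lines(pts)
rho_i, theta_i = np.unravel_index(acc.argmax(), acc.shape)
print(f"peak at theta = {np.degrees(thetas[theta_i]):.0f} deg, votes = {acc.max()}")
```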
Study on the stability of adrenaline and on the determination of its acidity constants
NASA Astrophysics Data System (ADS)
Corona-Avendaño, S.; Alarcón-Angeles, G.; Rojas-Hernández, A.; Romero-Romo, M. A.; Ramírez-Silva, M. T.
2005-01-01
In this work, results are presented concerning the influence of time on the spectral behaviour of adrenaline (C9H13NO3) (AD) and the determination of its acidity constants by means of spectrophotometric titrations and point-by-point analysis, using for the latter freshly prepared samples for each analysis at every single pH. As catecholamines are sensitive to light, all samples were protected against it during the course of the experiments. Each method yielded four acidity constants, each corresponding to one of the four acidic protons of the functional groups present in the molecule; for the point-by-point analysis the values found were: log β1 = 38.25 ± 0.21, log β2 = 29.65 ± 0.17, log β3 = 21.01 ± 0.14, and log β4 = 11.34 ± 0.071.
Zhang, Xiao-Tai; Wang, Shu; Xing, Guo-Wen
2017-02-01
Ginsenoside is a large family of triterpenoid saponins from Panax ginseng, which possesses various important biological functions. Due to the very similar structures of these complex glycoconjugates, it is crucial to develop a powerful analytic method to identify ginsenosides qualitatively or quantitatively. We herein report an eight-channel fluorescent sensor array as artificial tongue to achieve the discriminative sensing of ginsenosides. The fluorescent cross-responsive array was constructed by four boronlectins bearing flexible boronic acid moieties (FBAs) with multiple reactive sites and two linear poly(phenylene-ethynylene) (PPEs). An "on-off-on" response pattern was afforded on the basis of superquenching of fluorescent indicator PPEs and an analyte-induced allosteric indicator displacement (AID) process. Most importantly, it was found that the canonical distribution of ginsenoside data points analyzed by linear discriminant analysis (LDA) was highly correlated with the inherent molecular structures of the analytes, and the absence of overlaps among the five point groups reflected the effectiveness of the sensor array in the discrimination process. Almost all of the unknown ginsenoside samples at different concentrations were correctly identified on the basis of the established mathematical model. Our current work provided a general and constructive method to improve the quality assessment and control of ginseng and its extracts, which are useful and helpful for further discriminating other complex glycoconjugate families.
Line segment extraction for large scale unorganized point clouds
NASA Astrophysics Data System (ADS)
Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan
2015-04-01
Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
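The geometric core of the method, the line where two fitted planes intersect, comes directly from the planes' normals: the line direction is their cross product, and a point on the line satisfies both plane equations. A minimal sketch (generic geometry only, not the paper's LSHP fitting):

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Line of intersection of planes n1.x = d1 and n2.x = d2.
    Returns a point on the line and its unit direction."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-12:
        raise ValueError("planes are parallel")
    # Solve for the point on the line closest to the origin
    # (adding direction . x = 0 makes the 3x3 system nonsingular)
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)

# Floor (z = 0) meets wall (x = 2): expect the line x = 2, z = 0 along y
p, d = plane_intersection_line([0, 0, 1], 0.0, [1, 0, 0], 2.0)
print(p, d)  # point [2. 0. 0.], direction [0. 1. 0.]
```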
Statistical Aspects of Point Count Sampling
Richard J. Barker; John R. Sauer
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the...
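The bias from incomplete counts is easy to reproduce in simulation: if each of N birds present is detected independently with probability p, the raw count has expectation Np, not N. A minimal sketch with hypothetical values:

```python
import numpy as np

# Simple point-count model: N birds present, each detected with
# probability p, so the raw count C ~ Binomial(N, p) underestimates N
rng = np.random.default_rng(42)
N_true, p_detect, n_surveys = 20, 0.6, 10_000

counts = rng.binomial(N_true, p_detect, size=n_surveys)
print(f"mean raw count: {counts.mean():.2f} (true N = {N_true})")
print(f"detectability-corrected: {counts.mean() / p_detect:.2f}")
# In practice p is unknown and must itself be estimated (e.g., via
# distance sampling), which adds variance to the corrected estimate.
```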
a Voxel-Based Filtering Algorithm for Mobile LIDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points in the xy-plane are first partitioned into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward growing processing is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in the analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
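The voxel organization underlying such filtering is simple to sketch: integer keys from floor division group points into cells. This is a generic illustration, not the paper's block-plus-octree implementation:

```python
import numpy as np

def voxelize(points, voxel_size):
    """Group an (N, 3) point array into voxels of a given edge length;
    returns a dict mapping integer voxel keys to lists of point indices."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    voxels = {}
    for i, key in enumerate(map(tuple, keys)):
        voxels.setdefault(key, []).append(i)
    return voxels

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(10_000, 3))
voxels = voxelize(pts, voxel_size=0.5)
print(len(voxels), "occupied voxels")
```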
Stable isotopes of water in estimation of groundwater dependence in peatlands
NASA Astrophysics Data System (ADS)
Isokangas, Elina; Rossi, Pekka; Ronkanen, Anna-Kaisa; Marttila, Hannu; Rozanski, Kazimierz; Kløve, Bjørn
2016-04-01
Peatland hydrology and ecology can be irreversibly affected by anthropogenic actions or climate change. Especially sensitive are groundwater dependent areas which are difficult to determine. Environmental tracers such as stable isotopes of water are efficient tools to identify these dependent areas and study water flow patterns in peatlands. In this study the groundwater dependence of a Finnish peatland complex situated next to an esker aquifer was studied. Groundwater seepage areas in the peatland were localized by thermal imaging and the subsoil structure was determined using ground penetrating radar. Water samples were collected for stable isotopes of water (δ18O and δ2H), temperature, pH and electrical conductivity at 133 locations of the studied peatland (depth of 10 cm) at approximately 100 m intervals during 4 August - 11 August 2014. In addition, 10 vertical profiles were sampled (10, 30, 60 and 90 cm depth) for the same parameters and for hydraulic conductivity. The cavity ring-down spectroscopy (CRDS) was applied to measure δ18O and δ2H values. The local meteoric water line was determined using precipitation samples from Nuoritta station located 17 km west of the study area and the local evaporation line was defined using water samples from lake Sarvilampi situated on the studied peatland complex. Both near-surface spatial survey and depth profiles of peatland water revealed very wide range in stable isotope composition, from approximately -13.0 to -6.0 ‰ for δ18O and from -94 to -49 ‰ for δ2H, pointing to spatially varying influence of groundwater input from near-by esker aquifer. In addition, position of the data points with respect to the local meteoric water line showed spatially varying degree of evaporation of peatland water. Stable isotope signatures of peatland water in combination with thermal images delineated the specific groundwater dependent areas. By combining the information gained from different types of observations, the conceptual hydrological model of the studied peatland complex, including groundwater - surface water interaction, was built in a new, innovative way.
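Quantifying groundwater dependence from such tracer data usually comes down to two-end-member mixing: the groundwater fraction follows from the sample's isotope value and the two end-member values. A minimal sketch with illustrative numbers (not the study's end-members):

```python
def groundwater_fraction(delta_sample, delta_gw, delta_sw):
    """Two-end-member mixing: fraction of groundwater in a sample from
    its tracer value (e.g., delta-18O) and the end-member values."""
    return (delta_sample - delta_sw) / (delta_gw - delta_sw)

# Illustrative per-mil values: groundwater -13.0, evaporated surface
# water -6.0, mixed peatland sample -10.2
f = groundwater_fraction(-10.2, -13.0, -6.0)
print(f"groundwater fraction: {f:.2f}")  # 0.60
```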
A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer
NASA Astrophysics Data System (ADS)
Zheng, G.; Cheng, Y.; He, K.; Duan, F.; Ma, Y.
2014-01-01
The Sunset Semi-Continuous Carbon Analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, here we identified a new type of SCCA calculation discrepancy caused by the default multi-point baseline correction method. When a certain threshold carbon load is exceeded, multi-point correction can cause significant Total Carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples with three temperature protocols. For ambient samples, 22%, 36% and 12% of TC was underestimated by the three protocols, respectively, with the corresponding thresholds being ~0, 20 and 25 μg C. For sucrose, however, such discrepancy was observed with only one of these protocols, indicating the need for a more refractory SCCA calibration substance. The discrepancy was less significant for the NIOSH (National Institute for Occupational Safety and Health)-like protocol compared with the other two protocols, which are based on IMPROVE (Interagency Monitoring of PROtected Visual Environments). Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction method is therefore to use multi-point corrected data below the determined threshold and single-point results beyond that threshold. The effectiveness of this correction method was supported by correlation with optical data.
NASA Astrophysics Data System (ADS)
Bullen, T. D.; Bailey, S. W.; McGuire, K. J.; Brousseau, P.; Ross, D. S.; Bourgault, R.; Zimmer, M. A.
2010-12-01
Understanding the origin of metals in watersheds, as well as the transport and cycling processes that affect them, is of critical importance to watershed science. Metals can be derived both from weathering of minerals in the watershed soils and bedrock and from atmospheric deposition, and can have highly variable residence times in the watershed due to cycling through plant communities and retention in secondary mineral phases prior to release to drainage waters. Although much has been learned about metal cycling and transport through watersheds using simple “box model” approaches that define unique input, output and processing terms, the fact remains that watersheds are inherently complex and variable in terms of substrate structure, hydrologic flowpaths and the influence of plants, all of which affect the chemical composition of water that ultimately passes through the watershed outlet. In an effort to unravel some of this complexity at a watershed scale, we have initiated an interdisciplinary, hydropedology-focused study of the hydrologic reference watershed (Watershed 3) at the Hubbard Brook Experimental Forest in New Hampshire, USA. This 41 hectare headwater catchment consists of a beech-birch-maple-spruce forest growing on soils developed on granitoid glacial till that mantles Paleozoic metamorphic bedrock. Soils vary from lateral spodosols downslope from bedrock exposures near the watershed crest, to vertical and bi-modal spodosols along hillslopes, to umbrepts at toe-slope positions and inferred hydrologic pinch points created by bedrock and till structure. Using a variety of chemical and isotope tracers (e.g., K/Na, Ca/Sr, Sr/Ba, Fe/Mn, 87Sr/86Sr, Ca-Sr-Fe stable isotopes) on water, soil and plant samples in an end-member mixing analysis approach, we are attempting to discretize the watershed according to soil types encountered along determined hydrologic flowpaths in order to better constrain the various biogeochemical processes that control the delivery of metals to the watershed outlet. Our initial results reveal that along the numerous first-order streams that drain the watershed, chemical and Sr isotope compositions are highly variable from sample point to sample point on a given day and from season to season, reflecting the complex nature of hydrologic flowpaths that deliver water to the streams and hinting at the importance of groundwater seeps that appear to concentrate along the central axis of the watershed.
Development of spatial scaling technique of forest health sample point information
NASA Astrophysics Data System (ADS)
Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.
2017-12-01
Most forest health assessments are limited to monitoring at sampling sites. Forest health monitoring in Britain was carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with the survey data managed in an Oracle database. The Forest Health Assessment in Great Bay in the United States was conducted to identify the characteristics of the ecosystem populations of each area, based on the evaluation of forest health by tree species, diameter at breast height, crown condition and density in summer and fall. In the case of Korea, in the first evaluation report on forest health vitality, 1,000 sample points were placed in forests using a systematic arrangement at regular 4 km × 4 km intervals, with 29 items surveyed in four categories: tree health, vegetation, soil, and atmosphere. As mentioned above, existing research has relied on monitoring at survey sample points, which makes it difficult to collect the information needed to support customized policies for regional sites. Special forests such as urban forests and major forests need policy and management appropriate to their characteristics, so the survey points for diagnosis and evaluation of customized forest health must be expanded. For this reason, we constructed a spatial scaling method based on spatial interpolation according to the characteristics of each of the 29 indices in the four categories of the first forest health vitality report. PCA and correlation analysis are conducted to identify significant indicators, weights are then selected for each index, and forest health is evaluated through statistical grading.
Slow updating of the achromatic point after a change in illumination
Lee, R. J.; Dawson, K. A.; Smithson, H. E.
2015-01-01
For a colour constant observer, the colour appearance of a surface is independent of the spectral composition of the light illuminating it. We ask how rapidly colour appearance judgements are updated following a change in illumination. We obtained repeated binary colour classifications for a set of stimuli defined by their reflectance functions and rendered under either sunlight or skylight. We used these classifications to derive boundaries in colour space that identify the observer’s achromatic point. In steady-state conditions of illumination, the achromatic point lay close to the illuminant chromaticity. In our experiment the illuminant changed abruptly every 21 seconds (at the onset of every 10th trial), allowing us to track changes in the achromatic point that were caused by the cycle of illuminant changes. In one condition, the test reflectance was embedded in a spatial pattern of reflectance samples under consistent illumination. The achromatic point migrated across colour space between the chromaticities of the steady-state achromatic points. This update took several trials rather than being immediate. To identify the factors that governed perceptual updating of appearance judgements we used two further conditions, one in which the test reflectance was presented in isolation and one in which the surrounding reflectances were rendered under an inconsistent and unchanging illumination. Achromatic settings were not well predicted by the information available from scenes at a single time-point. Instead the achromatic points showed a strong dependence on the history of chromatic samples. The strength of this dependence differed between observers and was modulated by the spatial context. PMID:22275468
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A multi-class classification SVM (Support Vector Machine) fast learning method based on a binary tree is presented to address the low learning efficiency of SVM when processing large-scale multi-class samples. This paper adopts a bottom-up method to set up the binary-tree hierarchy; according to the achieved hierarchy, each node's sub-classifier learns from the corresponding samples. During learning, several class clusters are generated after the first clustering of the training samples. First, central points are extracted from those class clusters that contain only one type of sample. For those containing two types of samples, the cluster numbers of their positive and negative samples are set according to their degree of mixture, and a secondary clustering is undertaken, after which central points are extracted from the resulting sub-class clusters. Sub-classifiers are then obtained by learning from the reduced samples formed by integrating the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, can guarantee higher classification accuracy, greatly reduce sample numbers and effectively improve learning efficiency.
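The sample-reduction idea, replacing each class's training points by cluster centers before training the node classifier, can be sketched in a few lines. The Python example below shows a single-level version on a synthetic two-class problem; the paper's binary-tree hierarchy over many classes and its mixture-dependent secondary clustering are omitted:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic two-class problem standing in for one node of the tree
X, y = make_classification(n_samples=20_000, n_features=10, random_state=0)

# Reduce each class to its cluster centers
centers, labels = [], []
for cls in (0, 1):
    km = KMeans(n_clusters=50, n_init=4, random_state=0).fit(X[y == cls])
    centers.append(km.cluster_centers_)
    labels.append(np.full(50, cls))
X_red = np.vstack(centers)
y_red = np.concatenate(labels)

# Train the node classifier on the reduced set, evaluate on the full set
clf = SVC(kernel='rbf', gamma='scale').fit(X_red, y_red)
print(f"accuracy on full set: {clf.score(X, y):.3f} "
      f"(trained on {len(X_red)} of {len(X)} points)")
```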
Contrarian behavior in a complex adaptive system
NASA Astrophysics Data System (ADS)
Liang, Y.; An, K. N.; Yang, G.; Huang, J. P.
2013-01-01
Contrarian behavior is a kind of self-organization in complex adaptive systems (CASs). Here we report the existence of a transition point in a model resource-allocation CAS with contrarian behavior by using human experiments, computer simulations, and theoretical analysis. The resource ratio and system predictability serve as the tuning parameter and order parameter, respectively. The transition point helps to reveal the positive or negative role of contrarian behavior. This finding is in contrast to the common belief that contrarian behavior always has a positive role in resource allocation, say, stabilizing resource allocation by shrinking the redundancy or the lack of resources. It is further shown that resource allocation can be optimized at the transition point by adding an appropriate size of contrarians. This work is also expected to be of value to some other fields ranging from management and social science to ecology and evolution.
Hierarchical Probabilistic Inference of Cosmic Shear
NASA Astrophysics Data System (ADS)
Schneider, Michael D.; Hogg, David W.; Marshall, Philip J.; Dawson, William A.; Meyers, Joshua; Bard, Deborah J.; Lang, Dustin
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
Thermal effect on structural and magnetic properties of Fe78B13Si9 annealed amorphous ribbons
NASA Astrophysics Data System (ADS)
Soltani, Mohamed Larbi; Touares, Abdelhay; Aboki, Tiburce A. M.; Gasser, Jean-Georges
2017-08-01
In the present work, we study the influence of thermal treatments on the magnetic properties of as-quenched and pre-crystallized Fe78Si9B13 after stress relaxation. The crystallization behavior of amorphous and treated Fe78Si9B13 ribbons was revisited. The measurements were carried out by means of differential scanning calorimetry, X-ray diffraction, and vibrating sample magnetometer, susceptometer and fluxmeter instruments. Relaxed samples were heated in the resistivity device up to 700°C and annealed near the onset temperature of about 420°C for 1, 3, 5, and 8 hours, respectively. In the as-quenched samples, two transition points occur at about 505°C and 564°C, but in the relaxed sample the transition points were found at about 552°C and 568°C. The kinetics of crystallization was deduced for all studied samples. Annealing of the as-purchased ribbon shows the occurrence of α-Fe and tetragonal Fe3B resulting from the crystallization of the remaining amorphous phase. The effects on the magnetic properties were pointed out by relating them to the structural evolution of the samples. The magnetic measurements show that annealing changes the saturation magnetization and the coercive magnetic field values, hence destroying the good magnetic properties of the material. The heat treatment shows that crystallization greatly altered the shape of the hysteresis cycles and moved the magnetic saturation point of the samples. The effect of treatment on the magneto-crystalline anisotropy is also demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M. G.; Abraham, K.; Ackermann, M.
Observation of a point source of astrophysical neutrinos would be a “smoking gun” signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν_μ interactions inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (∼100 TeV) starting event in the sample found that this event alone represents a 2.8 σ deviation from the hypothesis that the data consists only of atmospheric background.
Generation and characterization of point defects in SrTiO3 and Y3Al5O12
NASA Astrophysics Data System (ADS)
Selim, F. A.; Winarski, D.; Varney, C. R.; Tarun, M. C.; Ji, Jianfeng; McCluskey, M. D.
Positron annihilation lifetime spectroscopy (PALS) was applied to characterize point defects in single crystals of Y3Al5O12 and SrTiO3 after populating different types of defects by relevant thermal treatments. In SrTiO3, PALS measurements identified Sr vacancy, Ti vacancy, vacancy complexes of Ti-O (vacancy) and hydrogen complex defects. In Y3Al5O12 single crystals the measurements showed the presence of Al-vacancy, (Al-O) vacancy and Al-vacancy passivated by hydrogen. These defects are shown to play the major role in defining the electronic and optical properties of these complex oxides.
A time-efficient algorithm for implementing the Catmull-Clark subdivision method
NASA Astrophysics Data System (ADS)
Ioannou, G.; Savva, A.; Stylianou, V.
2015-10-01
Splines are the most popular methods in figure modeling and CAGD (Computer Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and splines calculate the required number of points which, when displayed on a computer screen, result in a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures, such as human and animal bodies, whose complex structure does not allow them to be defined by a regular rectangular grid. On the other hand, surface subdivision methods, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that during the last fifteen years subdivision methods have taken the lead over regular spline methods in all areas of modeling in both industry and research. The main cost of computer software developed to read control points and calculate the surface is run time, due to the fact that the surface structure required for handling arbitrary topological grids is very complicated. Many software programs related to the implementation of subdivision surfaces have been developed; however, not many algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. The Catmull-Clark method, the most popular of the subdivision methods, is employed to illustrate the algorithm.
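For reference, the three Catmull-Clark point rules themselves are short; the engineering effort the paper addresses lies in the surrounding connectivity structure. A Python sketch of the rules (the mesh-rebuilding step, which dominates run time, is omitted):

```python
import numpy as np

# The three Catmull-Clark point rules in isolation; a full implementation
# also needs the connectivity rebuilding step.

def face_point(V, face):
    # New face point: average of the face's vertices
    return V[list(face)].mean(axis=0)

def edge_point(V, v0, v1, fp_a, fp_b):
    # New edge point: average of the edge endpoints and adjacent face points
    return (V[v0] + V[v1] + fp_a + fp_b) / 4.0

def vertex_point(V, v, face_pts, edge_mids, n):
    # New vertex point: (Q + 2R + (n - 3)S) / n, where Q averages the
    # adjacent face points, R the adjacent edge midpoints, S is the old vertex
    Q = np.mean(face_pts, axis=0)
    R = np.mean(edge_mids, axis=0)
    return (Q + 2.0 * R + (n - 3.0) * V[v]) / n

# Check on one corner of a unit cube (vertex 0, valence n = 3)
V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
              [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
fp = [face_point(V, f) for f in [(0, 1, 2, 3), (0, 1, 5, 4), (0, 3, 7, 4)]]
em = [(V[0] + V[1]) / 2, (V[0] + V[3]) / 2, (V[0] + V[4]) / 2]
print(edge_point(V, 0, 1, fp[0], fp[1]))  # [0.5   0.125 0.125]
print(vertex_point(V, 0, fp, em, n=3))    # [0.222 0.222 0.222] (approx.)
```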
40 CFR 432.21 - Special definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... extensive processing of the by-products of meat slaughtering. A complex slaughterhouse would usually include... STANDARDS MEAT AND POULTRY PRODUCTS POINT SOURCE CATEGORY Complex Slaughterhouses § 432.21 Special...
Method and apparatus for millimeter-wave detection of thermal waves for materials evaluation
Gopalsami, Nachappa; Raptis, Apostolos C.
1991-01-01
A method and apparatus for generating thermal waves in a sample and for measuring thermal inhomogeneities at subsurface levels using millimeter-wave radiometry. An intensity modulated heating source is oriented toward a narrow spot on the surface of a material sample and thermal radiation in a narrow volume of material around the spot is monitored using a millimeter-wave radiometer; the radiometer scans the sample point-by-point and a computer stores and displays in-phase and quadrature phase components of thermal radiations for each point on the scan. Alternatively, an intensity modulated heating source is oriented toward a relatively large surface area in a material sample and variations in thermal radiation within the full field of an antenna array are obtained using an aperture synthesis radiometer technique.
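The in-phase and quadrature components mentioned above are obtained by lock-in style demodulation at the heating modulation frequency. A generic Python sketch with a synthetic signal (the patent performs the equivalent per scanned point in hardware):

```python
import numpy as np

# Lock-in extraction of in-phase (I) and quadrature (Q) components of a
# modulated thermal-wave signal; values below are illustrative
fs, f_mod, T = 10_000.0, 10.0, 2.0          # sample rate, modulation freq, duration
t = np.arange(0, T, 1 / fs)
phase_lag = 0.7                              # rad, set by subsurface structure
signal = 0.5 * np.sin(2 * np.pi * f_mod * t - phase_lag) + \
         0.05 * np.random.default_rng(3).normal(size=t.size)

ref_i = np.sin(2 * np.pi * f_mod * t)        # in-phase reference
ref_q = np.cos(2 * np.pi * f_mod * t)        # quadrature reference
I = 2 * np.mean(signal * ref_i)              # averaging acts as the low-pass filter
Q = 2 * np.mean(signal * ref_q)
print(f"amplitude: {np.hypot(I, Q):.3f}, phase: {np.arctan2(-Q, I):.3f} rad")
```

The recovered amplitude and phase per scan point are what a computer would store and display as the in-phase and quadrature thermal images.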
Wall shear stress fixed points in cardiovascular fluid mechanics.
Arzani, Amirhossein; Shadden, Shawn C
2018-05-17
Complex blood flow in large arteries creates rich wall shear stress (WSS) vectorial features. WSS acts as a link between blood flow dynamics and the biology of various cardiovascular diseases. WSS has been of great interest in a wide range of studies and has been the most popular measure to correlate blood flow to cardiovascular disease. Recent studies have emphasized different vectorial features of WSS. However, fixed points in the WSS vector field have not received much attention. A WSS fixed point is a point on the vessel wall where the WSS vector vanishes. In this article, WSS fixed points are classified and the aspects by which they could influence cardiovascular disease are reviewed. First, the connection between WSS fixed points and the flow topology away from the vessel wall is discussed. Second, the potential role of time-averaged WSS fixed points in biochemical mass transport is demonstrated using the recent concept of Lagrangian WSS structures. Finally, simple measures are proposed to quantify the exposure of the endothelial cells to WSS fixed points. Examples from various arterial flow applications are demonstrated. Copyright © 2018 Elsevier Ltd. All rights reserved.
Architecture of chaotic attractors for flows in the absence of any singular point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letellier, Christophe; Malasoma, Jean-Marc
2016-06-15
Some chaotic attractors produced by three-dimensional dynamical systems without any singular point have now been identified, but explaining how they are structured in the state space remains an open question. We here want to explain—in the particular case of the Wei system—such a structure, using one-dimensional sets obtained by vanishing two of the three derivatives of the flow. The neighborhoods of these sets are made of points which are characterized by the eigenvalues of a 2 × 2 matrix describing the stability of flow in a subspace transverse to it. We will show that the attractor is spiralling and twisted in the neighborhood of one-dimensional sets where points are characterized by a pair of complex conjugated eigenvalues. We then show that such one-dimensional sets are also useful in explaining the structure of attractors produced by systems with singular points, by considering the case of the Lorenz system.
Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model
NASA Astrophysics Data System (ADS)
Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man
2017-03-01
Computer generated holograms (CGHs) are becoming increasingly important for 3-D displays in various applications including virtual reality. In a CGH, holographic fringe patterns are generated by calculating them numerically on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of 3D objects. This paper proposes a new fast CGH generation method based on the sparsity of the CGH for 3D point cloud models. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane by using a sparse FFT (sFFT). We observe that the CGH of one layer of a 3D object is sparse, so that the dominant CGH components can be rapidly generated from a small set of signals by sFFT. Experimental results show that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.
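A dense-FFT version of the layer propagation step can be sketched with the angular spectrum method; the paper's speedup comes from replacing the full FFT with a sparse FFT over the dominant components, which is not shown here. Parameter values below are illustrative:

```python
import numpy as np

def propagate_layer(u0, wavelength, z, dx):
    """Angular-spectrum propagation of one object layer's complex field
    over distance z to the CGH plane (dense-FFT sketch)."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# One depth layer: a few point sources at the same depth, summed per layer
layer = np.zeros((512, 512), dtype=complex)
layer[256, 256] = layer[200, 300] = 1.0
field = propagate_layer(layer, wavelength=532e-9, z=0.1, dx=8e-6)
fringes = np.real(field)  # interference with an on-axis plane reference wave
print(fringes.shape, fringes.max())
```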
Influence of Landscape Morphology and Vegetation Cover on the Sampling of Mixed Igneous Bodies
NASA Astrophysics Data System (ADS)
Perugini, Diego; Petrelli, Maurizio; Poli, Giampiero
2010-05-01
A plethora of evidence indicates that magma mixing processes can take place at any evolutionary stage of magmatic systems and that they are extremely common in both plutonic and volcanic environments (e.g. Bateman, 1995). Furthermore, recent studies have shown that the magma mixing process is governed by chaotic dynamics whose evolution in space and time generates complex compositional patterns that can span several length scales, producing fractal domains (e.g. Perugini et al., 2003). The fact that magma mixing processes can produce igneous bodies exhibiting large compositional complexity raises the key question of the potential pitfalls associated with sampling these systems for petrological studies. In particular, since commonly only exiguous portions of the whole magmatic system are available as outcrops for sampling, it is important to address whether the sampling can be considered representative of the complexity of the magmatic system. We attempt to address this crucial point by performing numerical simulations of chaotic magma mixing processes in 3D. The numerical system used in the simulations is the so-called ABC (Arnold-Beltrami-Childress) flow (e.g. Galluccio and Vulpiani, 1994), which generates the contemporaneous occurrence of chaotic and regular streamlines in which the mixing efficiency is differently modulated. This numerical system has already been successfully utilized as a kinematic template to reproduce magma mixing structures observed on natural outcrops (Perugini et al., 2007). The best conditions for sampling are evaluated considering different landscape morphologies and percentages of vegetation cover. In particular, synthetic landscapes with different degrees of roughness are numerically reproduced using the Random Mid-point Displacement Method (RMDM; e.g. Fournier et al., 1982) in two dimensions and superimposed on the compositional fields generated by the magma mixing simulation. Vegetation cover is generated using a random Brownian motion process in 2D. Such an approach allows us to produce vegetation patches that closely match the general topology of natural vegetation (e.g., Mandelbrot, 1982). Results show that the goodness of sampling is strongly dependent on the roughness of the landscape, with highly irregular morphologies being the best candidates to give the most complete information on the whole magma body. Conversely, sampling on flat or nearly flat surfaces should be avoided because they may contain misleading information about the magmatic system. Contrary to common sense, vegetation cover does not appear to significantly influence the representativeness of sampling if sample collection occurs on topographically irregular outcrops. Application of the proposed method for sampling area selection is straightforward. The irregularity of natural landscapes and the percentage of vegetation can be estimated from digital elevation models (DEM) of the Earth's surface and satellite images by employing a variety of methods (e.g., Develi and Babadagli, 1998), thus giving one the opportunity to select a priori the best outcrops for sampling.
References
Bateman R (1995) The interplay between crystallization, replenishment and hybridization in large felsic magma chambers. Earth Sci Rev 39: 91-106
Develi K, Babadagli T (1998) Quantification of natural fracture surfaces using fractal geometry. Math Geol 30: 971-998
Fournier A, Fussel D, Carpenter L (1982) Computer rendering of stochastic models. Comm ACM 25: 371-384
Galluccio S, Vulpiani A (1994) Stretching of material lines and surfaces in systems with Lagrangian chaos. Physica A 212: 75-98
Mandelbrot BB (1982) The fractal geometry of nature. W. H. Freeman, San Francisco
Perugini D, Petrelli M, Poli G (2007) A virtual voyage through 3D structures generated by chaotic mixing of magmas and numerical simulations: a new approach for understanding spatial and temporal complexity of magma dynamics. Visual Geosciences, 10.1007/s10069-006-0004-x
Perugini D, Poli G, Mazzuoli R (2003) Chaotic advection, fractals and diffusion during mixing of magmas: evidences from lava flows. J Volcanol Geotherm Res 124: 255-279
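The ABC flow named above has a simple closed form, so the chaotic-advection step of such simulations is easy to reproduce. Below is a minimal Python sketch of passive tracer advection in the ABC flow; the parameter values A = B = C = 1 and the integration settings are illustrative assumptions, not the settings used in the study.

```python
# Minimal sketch: advect passive tracers in the ABC (Arnold-Beltrami-Childress)
# flow, the kinematic template used in the simulations. Parameters and step
# sizes here are illustrative assumptions, not the study's settings.
import numpy as np

A, B, C = 1.0, 1.0, 1.0  # assumed flow parameters

def abc_velocity(p):
    """Velocity field of the steady ABC flow at point p = (x, y, z)."""
    x, y, z = p
    return np.array([
        A * np.sin(z) + C * np.cos(y),
        B * np.sin(x) + A * np.cos(z),
        C * np.sin(y) + B * np.cos(x),
    ])

def advect(p0, dt=0.01, n_steps=5000):
    """Integrate a tracer trajectory with 4th-order Runge-Kutta."""
    traj = np.empty((n_steps + 1, 3))
    traj[0] = p0
    for i in range(n_steps):
        p = traj[i]
        k1 = abc_velocity(p)
        k2 = abc_velocity(p + 0.5 * dt * k1)
        k3 = abc_velocity(p + 0.5 * dt * k2)
        k4 = abc_velocity(p + dt * k3)
        traj[i + 1] = p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

# Two nearby tracers diverge rapidly in the chaotic regions of the flow,
# the mechanism that generates the fractal compositional domains.
t1 = advect(np.array([0.1, 0.1, 0.1]))
t2 = advect(np.array([0.1, 0.1, 0.1001]))
print("final separation:", np.linalg.norm(t1[-1] - t2[-1]))
```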
Identifying the starting point of a spreading process in complex networks.
Comin, Cesar Henrique; Costa, Luciano da Fontoura
2011-11-01
When dealing with the dissemination of epidemics, one important question that can be asked is the location where the contamination began. In this paper, we analyze three spreading schemes and propose and validate an effective methodology for the identification of the source nodes. The method is based on the calculation of the centrality of the nodes on the sampled network, expressed here by degree, betweenness, closeness, and eigenvector centrality. We show that the source node tends to have the highest measurement values. The potential of the methodology is illustrated with respect to three theoretical complex network models as well as a real-world network, the email network of the University Rovira i Virgili.
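The ranking step of this methodology is straightforward to sketch. The following Python fragment (using networkx) simulates a simple SI-type spread on a synthetic network and ranks the nodes of the sampled (infected) subgraph by the four centralities named in the abstract; the spreading model, network model and parameters are illustrative assumptions, not the authors' exact schemes.

```python
# Hedged sketch of source identification: grow an infection from a source,
# then rank the sampled subgraph's nodes by centrality; the source tends to
# score highest. Model and parameters are illustrative assumptions.
import random
import networkx as nx

def simulate_spread(G, source, n_infected=60, seed=0):
    """Simple SI-type growth: repeatedly infect a random susceptible neighbour."""
    rng = random.Random(seed)
    infected = {source}
    while len(infected) < n_infected:
        frontier = [v for u in infected for v in G[u] if v not in infected]
        if not frontier:
            break
        infected.add(rng.choice(frontier))
    return infected

G = nx.barabasi_albert_graph(500, 3, seed=1)
true_source = 42
sampled = G.subgraph(simulate_spread(G, true_source))

for name, centrality in [("degree", nx.degree_centrality),
                         ("closeness", nx.closeness_centrality),
                         ("betweenness", nx.betweenness_centrality),
                         ("eigenvector", nx.eigenvector_centrality_numpy)]:
    scores = centrality(sampled)
    best = max(scores, key=scores.get)
    print(f"{name:12s} top-ranked node: {best} (true source: {true_source})")
```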
Simulation of design-unbiased point-to-particle sampling compared to alternatives on plantation rows
Thomas B. Lynch; David Hamlin; Mark J. Ducey
2016-01-01
Total quantities of tree attributes can be estimated in plantations by sampling on plantation rows using several methods. At random sample points on a row, either fixed row lengths or variable row lengths with a fixed number of sample trees can be assessed. Ratio of means or mean of ratios estimators can be developed for the fixed number of trees option but are not...
2014-01-01
and proportional correctors. The weighting function evaluates nearby data samples to determine the utility of each correction style, eliminating the… sparse methods may be of use. As for other multi-fidelity techniques, true cokriging in the style described by geo-statisticians [93] is beyond the… sampling style between sampling points predicted to fall near the contour and sampling points predicted to be farther from the contour but with
Valenza, Gaetano; Citi, Luca; Barbieri, Riccardo
2013-01-01
We report an exemplary study of instantaneous assessment of cardiovascular dynamics performed using point-process nonlinear models based on Laguerre expansion of the linear and nonlinear Wiener-Volterra kernels. As quantifiers, instantaneous measures such as high-order spectral features and Lyapunov exponents can be estimated from a quadratic and cubic autoregressive formulation of the model first-order moment, respectively. Here, these measures are evaluated on heartbeat series from 16 healthy subjects and 14 patients with Congestive Heart Failure (CHF). Data were gathered from the online repository PhysioBank, which has been taken as a landmark for testing nonlinear indices. Results show that the proposed nonlinear Laguerre-Volterra point-process methods are able to track the nonlinear and complex cardiovascular dynamics, distinguishing significantly between CHF and healthy heartbeat series.
A one-way shooting algorithm for transition path sampling of asymmetric barriers
NASA Astrophysics Data System (ADS)
Brotzakis, Z. Faidon; Bolhuis, Peter G.
2016-10-01
We present a novel transition path sampling shooting algorithm for the efficient sampling of complex (biomolecular) activated processes with asymmetric free energy barriers. The method employs a fictitious potential that biases the shooting point toward the transition state. The method is similar in spirit to the aimless shooting technique by Peters and Trout [J. Chem. Phys. 125, 054108 (2006)], but is targeted for use with the one-way shooting approach, which has been shown to be more effective than two-way shooting algorithms in systems dominated by diffusive dynamics. We illustrate the method on a 2D Langevin toy model, the association of two peptides and the initial step in dissociation of a β-lactoglobulin dimer. In all cases we show a significant increase in efficiency.
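The biasing idea can be sketched compactly: shooting points are drawn from the current path with weights peaked at the presumed transition state, and the selection bias is corrected in the acceptance rule. The Python fragment below is a hedged illustration of such a weighted draw, not the authors' algorithm; the Gaussian weight form, its parameters, and the reaction coordinate are assumptions.

```python
# Hedged sketch of biased shooting-point selection: frames of the current
# path are weighted by a fictitious potential centred on an assumed
# transition-state value of a reaction coordinate, so shots start near the
# barrier top. Weight form and parameters are illustrative assumptions.
import numpy as np

def pick_shooting_index(lam_along_path, lam_ts=0.0, kappa=50.0, rng=None):
    """Select a frame index with probability ~ exp(-kappa*(lam - lam_ts)^2)."""
    rng = rng or np.random.default_rng()
    w = np.exp(-kappa * (np.asarray(lam_along_path) - lam_ts) ** 2)
    w /= w.sum()
    return rng.choice(len(lam_along_path), p=w)

# The selection bias must be corrected in the acceptance rule, e.g. by the
# ratio of old/new selection weights, to preserve detailed balance.
path_lambda = np.linspace(-1.0, 1.0, 201)   # stand-in reaction coordinate
print(pick_shooting_index(path_lambda))
```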
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhaskar,; Kumari, Neeti; Goyal, Neena, E-mail: neenacdri@yahoo.com
Highlights: • The study presents cloning and characterization of the TCP1γ gene from L. donovani. • TCP1γ is a subunit of T-complex protein-1 (TCP1), a chaperonin class of protein. • LdTCP1γ exhibited differential expression in different stages of promastigotes. • LdTCP1γ co-localized with actin, a cytoskeleton protein. • The data suggest that this gene may have a role in differentiation/biogenesis. • First report on this chaperonin in Leishmania. -- Abstract: The T-complex protein-1 (TCP1) complex, a chaperonin class of protein, ubiquitous in all genera of life, is involved in intracellular assembly and folding of various proteins. The gamma subunit of the TCP1 complex (TCP1γ) plays a pivotal role in the folding and assembly of cytoskeleton protein(s), individually or complexed with other subunits. Here, we report for the first time the cloning, characterization and expression of the TCP1γ of Leishmania donovani (LdTCP1γ), the causative agent of Indian kala-azar. Primary sequence analysis of LdTCP1γ revealed the presence of all the characteristic features of TCP1γ. However, leishmanial TCP1γ represents a distinct kinetoplastid group, clustered in a separate branch of the phylogenetic tree. LdTCP1γ exhibited differential expression in different stages of promastigotes: the non-dividing stationary-phase promastigotes exhibited 2.5-fold lower expression of LdTCP1γ than rapidly dividing log-phase parasites. The sub-cellular distribution of LdTCP1γ was studied in log-phase promastigotes by indirect immunofluorescence microscopy. The protein was present not only in the cytoplasm but also in the nucleus, peri-nuclear region, flagella, flagellar pocket and apical region. Co-localization of LdTCP1γ with actin suggests that this gene may have a role in maintaining the structural dynamics of the parasite cytoskeleton.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kudoyarova, V. Kh., E-mail: kudoyarova@mail.ioffe.ru; Tolmachev, V. A.; Gushchina, E. V.
2013-03-15
Rutherford backscattering, IR spectroscopy, ellipsometry, and atomic-force microscopy are used to perform an integrated study of the composition, structure and optical properties of a-Si1-xCx:H⟨Er⟩ amorphous films. The technique employed to obtain the a-Si1-xCx:H⟨Er⟩ amorphous films includes the high-frequency decomposition of a mixture of gases, (SiH4)a + (CH4)b, and the simultaneous thermal evaporation of a complex compound, Er(pd)3. It is demonstrated that raising the amount of CH4 in the gas mixture results in an increase in the carbon content of the films under study and an increase in the optical gap E_g^opt from 1.75 to 2.2 eV. Changes in the composition of a-Si1-xCx:H⟨Er⟩ amorphous films, accompanied, in turn, by changes in the optical constants, are observed in the IR spectra. The ellipsometric spectra obtained are analyzed in terms of multiple-parameter models. The conclusion is made on the basis of this analysis that the experimental and calculated spectra coincide well when variation in the composition of the amorphous films with that of the gas mixture is taken into account. The existence of a thin (6-8 nm) silicon-oxide layer on the surface of the films under study and the validity of using the double-layer model in ellipsometric calculations are confirmed by the results of structural analyses by atomic-force microscopy.
Localization of Pathology on Complex Architecture Building Surfaces
NASA Astrophysics Data System (ADS)
Sidiropoulos, A. A.; Lakakis, K. N.; Mouza, V. K.
2017-02-01
The technology of 3D laser scanning is considered one of the most common methods for heritage documentation. The point clouds that are produced provide information of high detail, both geometric and thematic, and various studies examine techniques for the best exploitation of this information. In this study, an algorithm for localizing pathology, such as cracks and fissures, on complex building surfaces is tested. The algorithm makes use of the points' positions in the point cloud and tries to separate them into two groups (patterns): pathology and non-pathology. The extraction of the geometric information used for recognizing the pattern of the points is accomplished via Principal Component Analysis (PCA) in user-specified neighborhoods of the whole point cloud. The implementation of PCA leads to the definition of the normal vector at each point of the cloud. Two tests that operate separately examine both local and global geometric criteria among the points and conclude which of them should be categorized as pathology. The proposed algorithm was tested on parts of the Gazi Evrenos Baths masonry, located in the city of Giannitsa in Northern Greece.
Minimized state complexity of quantum-encoded cryptic processes
NASA Astrophysics Data System (ADS)
Riechers, Paul M.; Mahoney, John R.; Aghamohammadi, Cina; Crutchfield, James P.
2016-05-01
The predictive information required for proper trajectory sampling of a stochastic process can be more efficiently transmitted via a quantum channel than a classical one. This recent discovery allows quantum information processing to drastically reduce the memory necessary to simulate complex classical stochastic processes. It also points to a new perspective on the intrinsic complexity that nature must employ in generating the processes we observe. The quantum advantage increases with codeword length: the length of process sequences used in constructing the quantum communication scheme. In analogy with the classical complexity measure, statistical complexity, we use this reduced communication cost as an entropic measure of state complexity in the quantum representation. Previously difficult to compute, the quantum advantage is expressed here in closed form using spectral decomposition. This allows for efficient numerical computation of the quantum-reduced state complexity at all encoding lengths, including infinite. Additionally, it makes clear how finite-codeword reduction in state complexity is controlled by the classical process's cryptic order, and it allows asymptotic analysis of infinite-cryptic-order processes.
ASIC For Complex Fixed-Point Arithmetic
NASA Technical Reports Server (NTRS)
Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.
1995-01-01
Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.
Systems Thinking Tools as Applied to Community-Based Participatory Research: A Case Study
ERIC Educational Resources Information Center
BeLue, Rhonda; Carmack, Chakema; Myers, Kyle R.; Weinreb-Welch, Laurie; Lengerich, Eugene J.
2012-01-01
Community-based participatory research (CBPR) is being used increasingly to address health disparities and complex health issues. The authors propose that CBPR can benefit from a systems science framework to represent the complex and dynamic characteristics of a community and identify intervention points and potential "tipping points."…
González-José, Rolando; Charlin, Judith
2012-01-01
The specific use of different prehistoric weapons is largely determined by their physical properties, which provide a relative advantage or disadvantage in performing a given, particular function. Since these physical properties are integrated to accomplish that function, examining design variables and their pattern of integration or modularity is of interest for estimating the past function of a point. Here we analyze a composite sample of lithic points from southern Patagonia, likely comprising arrows, thrown spears and hand-held points, to test whether they can be viewed as a two-module system formed by the blade and the stem, and to evaluate the degree to which shape, size, asymmetry, blade:stem length ratio, and tip angle explain the observed variance and differentiation among points supposedly aimed at accomplishing different functions. To do so we performed a geometric morphometric analysis on 118 lithic points, based on 24 two-dimensional landmarks and semilandmarks placed on each point's contour. Klingenberg's covariational modularity tests were used to evaluate different modularity hypotheses, and a composite PCA including shape, size, asymmetry, blade:stem length ratio, and tip angle was used to estimate the importance of each attribute in explaining variation patterns. Results show that the blade and the stem can be seen as "near decomposable units" in the points comprising the studied sample. However, this modular pattern changes after removing the effects of reduction: a resharpened point tends to show a tip/rest-of-the-point modular pattern. The composite PCA analyses evidenced three different patterns of morphometric attributes compatible with arrows, thrown spears, and hand-held tools. Interestingly, when analyzed independently, these groups show differences in their modular organization. Our results indicate that stone tools can be approached as flexible designs, characterized by a composite set of interacting morphometric attributes, and evolving in a modular way.
Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm
NASA Technical Reports Server (NTRS)
Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.
1991-01-01
The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Δω = π/(mT) for trapezoidal-rule approximation of the Bromwich integral, in which the new parameter m controls the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
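The quadrature underlying the method is the trapezoidal rule on the Bromwich line with step Δω = π/(mT). The Python sketch below evaluates that sum directly for a single t to show the role of m; the paper's actual contribution, evaluating the sums with (m/2) + 1 complex FFTs or m real FHTs, is omitted here, and the choices of σ, T, m and the truncation K are illustrative assumptions.

```python
# Hedged sketch of the trapezoidal-rule Bromwich approximation with step
# Δω = π/(mT). Direct summation only; the FFT/FHT evaluation that makes the
# published method efficient is deliberately omitted.
import numpy as np

def ilt_trapezoidal(F, t, sigma=1.0, T=10.0, m=4, K=100_000):
    """Approximate f(t) = L^{-1}{F}(t) for modest t > 0 via trapezoidal rule."""
    dw = np.pi / (m * T)                 # integration step length Δω = π/(mT)
    k = np.arange(1, K + 1)
    s = sigma + 1j * k * dw              # samples along the Bromwich line
    series = F(sigma).real / 2.0 + np.sum((F(s) * np.exp(1j * k * dw * t)).real)
    return np.exp(sigma * t) * dw / np.pi * series

# Check against a known transform pair: L{sin t} = 1/(s^2 + 1).
F = lambda s: 1.0 / (s**2 + 1.0)
for t in (0.5, 1.0, 3.0):
    print(t, ilt_trapezoidal(F, t), np.sin(t))
```

Increasing m shrinks Δω, pushing the aliasing period 2π/Δω = 2mT further out, which is the accuracy-control role the abstract assigns to the new parameter.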
Iron Compounds and the Color of Soils in the Sakhalin Island
NASA Astrophysics Data System (ADS)
Vodyanitskii, Yu. N.; Kirillova, N. P.; Manakhov, D. V.; Karpukhin, M. M.
2018-02-01
Numerical parameters of soil color were studied according to the CIE-L*a*b* color system before and after the Tamm's and Mehra-Jackson's treatments; we also determined the total Fe content in the samples from the main genetic horizons of the alluvial gray-humus soil, two profiles of burozems, and two profiles of podzols in the Sakhalin Island. In the analyzed samples, the numerical color parameters L* (lightness), a* (redness) and b* (yellowness) vary within 46-73, 3-11, and 8-28, respectively. A linear relationship is revealed between the numerical values of the a* parameter and the Fe content in the Mehra-Jackson extracts; the regression equations are derived with determination coefficients (R²) of 0.49 (typical burozem), 0.79 (podzolized burozem), 0.96 (shallow-podzolic mucky podzol), and 0.98 (gray-humus gley alluvial soil). For the surface-podzolic mucky podzol contaminated with petroleum hydrocarbons, R² was equal to only 0.03. In the gray-humus (AY) and structural-metamorphic (BM) horizons of the studied soils, the a* and b* parameters decrease after treatment with the Tamm's reagent by 2 points on average. After the Mehra-Jackson treatment, the a* parameter decreased by 6 (AY) and 8 (BM) points, whereas the b* parameter decreased by 10 and 15 points, respectively. In the E horizons of podzols, the Tamm's treatment increased the a* and b* parameters by 1 point, whereas the Mehra-Jackson's treatment decreased these parameters by only 1 and 3 points, respectively. The redness (a*) decreased maximally in the lower gley horizon of the alluvial gray-humus soil, i.e., by 6 points (in the Tamm's extract) and 10 points (in the Mehra-Jackson's extract). Yellowness (b*) decreased by 12 and 17 points, respectively. The revealed color specifics of the untreated samples and the color transformation under the impact of reagents in the studied soils and horizons may serve as an additional parameter that quantitatively characterizes the object of investigation in reference databases.
Complex eigenvalue extraction in NASTRAN by the tridiagonal reduction (FEER) method
NASA Technical Reports Server (NTRS)
Newman, M.; Mann, F. I.
1977-01-01
An extension of the Tridiagonal Reduction (FEER) method to complex eigenvalue analysis in NASTRAN is described. As in the case of real eigenvalue analysis, the eigensolutions closest to a selected point in the eigenspectrum are extracted from a reduced, symmetric, tridiagonal eigenmatrix whose order is much lower than that of the full size problem. The reduction process is effected automatically, and thus avoids the arbitrary lumping of masses and other physical quantities at selected grid points. The statement of the algebraic eigenvalue problem admits mass, damping and stiffness matrices which are unrestricted in character, i.e., they may be real, complex, symmetric or unsymmetric, singular or non-singular.
Thorium isotopes in colloidal fraction of water from San Marcos Dam, Chihuahua, Mexico
NASA Astrophysics Data System (ADS)
Cabral-Lares, M.; Melgoza, A.; Montero-Cabrera, M. E.; Renteria-Villalobos, M.
2013-07-01
The main interest of this study is to assess the contents and distribution of Th-series isotopes in the colloidal fraction of surface water from the San Marcos dam, because the suspended particulate matter serves as a transport medium for several pollutants. The aim of this work was to assess the distribution of thorium isotopes (232Th and 230Th) contained in suspended matter. Samples were taken from three surface points along the San Marcos dam: water input, midpoint, and near the dam wall. At this last point, depth sampling was also carried out, with three depth points taken at 0.4, 8 and 15 meters. To evaluate thorium behavior in surface water, the colloidal fraction, between 1 and 0.1 μm, was separated from every water sample. Thorium isotope concentrations in the samples were obtained by alpha spectrometry. Activity concentrations of 232Th and 230Th at surface points ranged from 0.3 to 0.5 Bq·L-1, whereas at depth points they ranged from 0.4 to 3.2 Bq·L-1. The results show that 230Th is present at higher concentration than 232Th in the colloidal fraction, which can be attributed to a preference of these colloids to adsorb uranium. Thus, the activity ratio 230Th/232Th in the colloidal fraction showed values from 2.3 to 10.2. At surface points along the dam, the 230Th activity concentration decreases while the 232Th concentration remains constant. On the other hand, activity concentrations of both isotopes showed a pronounced enhancement with depth. The results point to a possible leaching of uranium from the geological substrate into the surface water and an important fractionation of thorium isotopes, which suggests that thorium is non-homogeneously distributed along the San Marcos dam.
Estimation of the auto frequency response function at unexcited points using dummy masses
NASA Astrophysics Data System (ADS)
Hosoya, Naoki; Yaginuma, Shinji; Onodera, Hiroshi; Yoshimura, Takuya
2015-02-01
If structures with complex shapes have space limitations, vibration tests using an exciter or impact hammer for the excitation are difficult. Although measuring the auto frequency response function at an unexcited point may not be practical via a vibration test, it can be obtained by assuming that the inertia acting on a dummy mass is an external force on the target structure upon exciting a different excitation point. We propose a method to estimate the auto frequency response functions at unexcited points by attaching a small mass (dummy mass), which is comparable to the accelerometer mass. The validity of the proposed method is demonstrated by comparing the auto frequency response functions estimated at unexcited points in a beam structure to those obtained from numerical simulations. We also consider random measurement errors by finite element analysis and vibration tests, but not bias errors. Additionally, the applicability of the proposed method is demonstrated by applying it to estimate the auto frequency response function of the lower arm in a car suspension.
Recurrence Density Enhanced Complex Networks for Nonlinear Time Series Analysis
NASA Astrophysics Data System (ADS)
Costa, Diego G. De B.; Reis, Barbara M. Da F.; Zou, Yong; Quiles, Marcos G.; Macau, Elbert E. N.
We introduce a new method, entitled Recurrence Density Enhanced Complex Network (RDE-CN), to properly analyze nonlinear time series. Our method first transforms a recurrence plot into a figure with a reduced number of points that preserves the main and fundamental recurrence properties of the original plot. This resulting figure is then reinterpreted as a complex network, which is further characterized by network statistical measures. We illustrate the computational power of the RDE-CN approach on time series from both the logistic map and experimental fluid flows, which show that our method distinguishes different dynamics as well as traditional recurrence analysis. Therefore, the proposed methodology characterizes the recurrence matrix adequately, while using a reduced set of points from the original recurrence plots.
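The starting point of RDE-CN, reinterpreting a recurrence matrix as a network adjacency matrix, can be sketched in a few lines. The Python fragment below builds an ε-recurrence network for the logistic map; the density-based point reduction that distinguishes RDE-CN is omitted, and the threshold, embedding (none) and series length are illustrative assumptions.

```python
# Sketch of the recurrence-plot-to-network step that RDE-CN builds on: the
# ε-recurrence matrix is read as the adjacency matrix of a complex network.
import numpy as np
import networkx as nx

def logistic_series(r=4.0, x0=0.4, n=500):
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * x[i] * (1.0 - x[i])
    return x

def recurrence_network(x, eps=0.05):
    """Points i, j are linked if |x_i - x_j| < eps (self-loops removed)."""
    dist = np.abs(x[:, None] - x[None, :])
    A = (dist < eps).astype(int)
    np.fill_diagonal(A, 0)
    return nx.from_numpy_array(A)

G = recurrence_network(logistic_series())
print("mean degree:", 2 * G.number_of_edges() / G.number_of_nodes())
print("transitivity:", nx.transitivity(G))
```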
NASA Astrophysics Data System (ADS)
Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.
2016-06-01
Current 3D data capture, as implemented on, for example, airborne or mobile laser scanning systems, can efficiently sample the surface of a city with billions of unselective points in one working day. What remains difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user, consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered in the tree class and are then separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
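The PCA-driven local dimensionality analysis mentioned in the workflow can be illustrated compactly: for each point, the eigenvalues of the covariance of its neighbourhood indicate whether points locally scatter along a line, a plane, or in all three directions, the last being characteristic of vegetation. The Python sketch below uses standard linearity/planarity/scattering features; the neighbourhood size, the argmax labelling rule and the toy data are illustrative assumptions, not the IQmulus implementation.

```python
# Hedged sketch of PCA-based local dimensionality labelling of a point cloud.
import numpy as np
from scipy.spatial import cKDTree

def local_dimensionality(points, k=20):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    labels = np.empty(len(points), dtype=int)
    for i, neigh in enumerate(idx):
        cov = np.cov(points[neigh].T)
        w = np.clip(np.sort(np.linalg.eigvalsh(cov))[::-1], 0.0, None)
        s = np.sqrt(w)                       # s1 >= s2 >= s3
        a1d = (s[0] - s[1]) / s[0]           # linear
        a2d = (s[1] - s[2]) / s[0]           # planar
        a3d = s[2] / s[0]                    # scattering (3D)
        labels[i] = np.argmax([a1d, a2d, a3d]) + 1
    return labels

# Toy cloud: a planar patch (road-like) plus a 3D blob (canopy-like).
rng = np.random.default_rng(0)
plane = np.c_[rng.uniform(0, 10, (2000, 2)), rng.normal(0, 0.02, 2000)]
blob = rng.normal([5, 5, 3], 0.8, (500, 3))
labels = local_dimensionality(np.vstack([plane, blob]))
print("fraction of blob labelled 3D:", (labels[2000:] == 3).mean())
```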
NASA Astrophysics Data System (ADS)
Li, Jiekang; Li, Guirong; Han, Qian
2016-12-01
In this paper, two kinds of salophens (Sal) with different solubilities, Sal1 and Sal2, were synthesized; both can combine with uranyl to form stable complexes, [UO2^2+-Sal1] and [UO2^2+-Sal2]. Among them, [UO2^2+-Sal1] was used as a ligand to extract uranium from complex samples by dual cloud point extraction (dCPE), and [UO2^2+-Sal2] was used as a catalyst for the determination of uranium by a photocatalytic resonance fluorescence (RF) method. The photocatalytic effect of [UO2^2+-Sal2] on the oxidation of pyronine Y (PRY) by potassium bromate, which leads to a decrease in the RF intensity of PRY, was studied. The reduction in RF intensity of the reaction system (ΔF) is proportional to the concentration of uranium (c), and a novel photocatalytic RF method was developed for the determination of trace uranium(VI) after dCPE. The combination of photocatalytic RF techniques and the dCPE procedure endows the presented method with enhanced sensitivity and selectivity. Under optimal conditions, the calibration curve is linear over the range 0.067 to 6.57 ng·mL-1; the linear regression equation was ΔF = 438.0 c (ng·mL-1) + 175.6 with correlation coefficient r = 0.9981. The limit of detection was 0.066 ng·mL-1. The proposed method was successfully applied to the separation and determination of uranium in real samples, with recoveries of 95.0-103.5%. The mechanisms of the indicator reaction and dCPE are discussed.
Finding Out Critical Points For Real-Time Path Planning
NASA Astrophysics Data System (ADS)
Chen, Wei
1989-03-01
Path planning for a mobile robot is a classic topic, but path planning in a real-time environment is a different issue. The system resources, including sampling time, processing time, inter-process communication time, and memory space, are very limited for this type of application. This paper presents a method which abstracts the world representation from the sensory data and decides which point will be a potentially critical point to span the world map, using incomplete knowledge about the physical world and heuristic rules. Without any previous knowledge or map of the workspace, the robot determines the world map by roving through the workspace. The computational complexity of building and searching such a map is not more than O(n²). The find-path problem is well known in robotics: given an object with an initial location and orientation, a goal location and orientation, and a set of obstacles located in space, the problem is to find a continuous path for the object from the initial position to the goal position which avoids collisions with obstacles along the way. There are many methods to find a collision-free path in a given environment. Techniques for solving this problem can be classified into three approaches: 1) the configuration space approach [1],[2],[3], which represents the polygonal obstacles by vertices in a graph. The idea is to determine those parts of the free space which a reference point of the moving object can occupy without colliding with any obstacles; a path is then found for the reference point through this truly free space. Dealing with rotations turns out to be a major difficulty with this approach, requiring complex geometric algorithms which are computationally expensive. 2) The direct representation of the free space using basic shape primitives such as convex polygons [4] and overlapping generalized cones [5]. 3) The combination of techniques 1 and 2 [6], by which the space is divided into primary convex regions, overlap regions and obstacle regions; obstacle boundaries with attribute values are represented by the vertices of a hypergraph, the primary convex and overlap regions are represented by hyperedges, and the centroids of the overlaps form the critical points. The difficulty lies in generating the segment graph and estimating the minimum path width. All the techniques mentioned above need previous knowledge about the world for path planning, and their computational cost is not low; they are not usable in an unknown and uncertain environment. Due to limited system resources, such as CPU time, memory size and knowledge about the specific application, an intelligent system (such as a mobile robot) must use algorithms that provide a good decision feasible with the available resources in real time, rather than the best answer that could be achieved in unlimited time with unlimited resources. A real-time path planner should meet the following requirements: - Quickly abstract the representation of the world from the sensory data, without any previous knowledge about the robot's environment. - Easily update the world model to spell out the global path map and to reflect changes in the robot's environment. - Decide in real time, with limited resources, where the robot must go and which direction the range sensor should point. The method presented here assumes that the data from range sensors has been processed by a signal processing unit.
The path planner guides the scan of the range sensor, finds critical points, decides where the robot should go and which point is a potential critical point, generates the path map, and monitors the robot's movement to the given point. The program runs recursively until the goal is reached or the whole workspace has been roved through.
Anatomy of point-contact Andreev reflection spectroscopy from the experimental point of view
NASA Astrophysics Data System (ADS)
Naidyuk, Yu. G.; Gloos, K.
2018-04-01
We review applications of point-contact Andreev-reflection spectroscopy to study elemental superconductors, where theoretical conditions for the smallness of the point-contact size with respect to the characteristic lengths in the superconductor can be satisfied. We discuss existing theoretical models and identify new issues that have to be solved, especially when applying this method to investigate more complex superconductors. We will also demonstrate that some aspects of point-contact Andreev-reflection spectroscopy still need to be addressed even when investigating ordinary metals.
Using shape contexts method for registration of contra lateral breasts in thermal images.
Etehadtavakol, Mahnaz; Ng, Eddie Yin-Kwee; Gheissari, Niloofar
2014-12-10
To achieve symmetric boundaries for the left and right breasts in thermal images by registration. The proposed registration method consists of two steps. In the first step, the shape context approach presented by Belongie and Malik was applied for registration of the two breast boundaries. The shape context is an approach to measuring shape similarity: two sets of finite sample points from the shape contours of the two breasts are taken, and the correspondences between the two shapes are found by matching each sample point to the point with the most similar shape context. In this study, an aligning transformation which maps one shape onto the other was then estimated to complete the registration. The use of a thin plate spline permitted good estimation of a plane transformation capable of mapping arbitrary points from one shape onto the other. The obtained aligning transformation of boundary points was applied successfully to map the interior points of the two breasts. Advantages of using the shape context method in this work are as follows: (1) no special landmarks or key points are needed; (2) it is tolerant of all common shape deformations; and (3) although it is uncomplicated and straightforward to use, it gives a remarkably powerful descriptor for point sets, significantly upgrading point set registration. Results are very promising. The proposed algorithm was implemented for 32 cases, and boundary registration was done perfectly for 28 cases. We used the shape contexts method, which is simple and easy to implement, to achieve symmetric boundaries for the left and right breasts in thermal images.
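The shape context descriptor itself is simple to compute: for every sample point, a log-polar histogram of where all the other contour points lie relative to it. The Python sketch below is a hedged, generic implementation; the bin counts, radial range and the random test contour are illustrative assumptions rather than the authors' settings.

```python
# Hedged sketch of shape-context descriptors: one log-polar histogram of
# relative point positions per sample point. Bin layout is an assumption.
import numpy as np

def shape_contexts(points, n_r=5, n_theta=12):
    diff = points[None, :, :] - points[:, None, :]   # pairwise offsets
    dist = np.linalg.norm(diff, axis=2)
    mean_d = dist[dist > 0].mean()                   # scale normalisation
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d
    angle = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)
    descriptors = np.zeros((len(points), n_r, n_theta))
    for i in range(len(points)):
        mask = dist[i] > 0
        r_bin = np.digitize(dist[i, mask], r_edges) - 1
        t_bin = (angle[i, mask] / (2 * np.pi) * n_theta).astype(int) % n_theta
        keep = (r_bin >= 0) & (r_bin < n_r)
        np.add.at(descriptors[i], (r_bin[keep], t_bin[keep]), 1)
    return descriptors.reshape(len(points), -1)

# Matching then pairs points with the most similar histograms (e.g. via a
# chi-squared cost), after which a thin plate spline aligns the two shapes.
contour = np.random.default_rng(0).normal(size=(80, 2))
print(shape_contexts(contour).shape)   # (80, 60)
```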
Spear-anvil point-contact spectroscopy in pulsed magnetic fields
NASA Astrophysics Data System (ADS)
Arnold, F.; Yager, B.; Kampert, E.; Putzke, C.; Nyéki, J.; Saunders, J.
2013-11-01
We describe a new design and experimental technique for point-contact spectroscopy in non-destructive pulsed magnetic fields up to 70 T. Point-contact spectroscopy uses a quasi-dc four-point measurement of the current and voltage across a spear-anvil point contact. The contact resistance could be adjusted over three orders of magnitude by a built-in fine-pitch threaded screw. The first measurements using this set-up were performed on both single-crystalline and exfoliated graphite samples in a 70 T coil with a 150 ms pulse length at 4.2 K; they reproduced the well-known point-contact spectrum of graphite and showed evidence for a developing high-field excitation above 35 T, the onset field of the charge-density-wave instability in graphite.
CINAHL and MEDLINE: a comparison of indexing practices.
Brenner, S H; McKinin, E J
1989-10-01
A random sample of fifty nursing articles indexed in both MEDLINE and CINAHL (NURSING & ALLIED HEALTH) during 1986 was used for comparing indexing practices. Indexing was analyzed by counting the number of major descriptors, the number of major and minor descriptors, the number of indexing access points, the number of common indexing access points, and the number and type of unique indexing access points. The study results indicate: there are few differences in the number of major descriptors used, MEDLINE uses almost twice as many descriptors, MEDLINE has almost twice as many indexing access points, and MEDLINE and CINAHL provide few common access points.
An Increase of Intelligence in Saudi Arabia, 1977-2010
ERIC Educational Resources Information Center
Batterjee, Adel A.; Khaleefa, Omar; Ali, Khalil; Lynn, Richard
2013-01-01
Normative data for 8-15 year olds for the Standard Progressive Matrices in Saudi Arabia were obtained in 1977 and 2010. The 2010 sample obtained higher average scores than the 1977 sample by 0.78d, equivalent to 11.7 IQ points. This represents a gain of 3.55 IQ points a decade over the 33 year period. (Contains 1 table.)
DuBois, Debra C; Piel, William H; Jusko, William J
2008-01-01
High-throughput data collection using gene microarrays has great potential as a method for addressing the pharmacogenomics of complex biological systems. Similarly, mechanism-based pharmacokinetic/pharmacodynamic modeling provides a tool for formulating quantitative testable hypotheses concerning the responses of complex biological systems. As the response of such systems to drugs generally entails cascades of molecular events in time, a time series design provides the best approach to capturing the full scope of drug effects. A major problem in using microarrays for high-throughput data collection is sorting through the massive amount of data in order to identify probe sets and genes of interest. Due to its inherent redundancy, a rich time series containing many time points and multiple samples per time point allows for the use of less stringent criteria of expression, expression change and data quality for initial filtering of unwanted probe sets. The remaining probe sets can then become the focus of more intense scrutiny by other methods, including temporal clustering, functional clustering and pharmacokinetic/pharmacodynamic modeling, which provide additional ways of identifying the probes and genes of pharmacological interest. PMID:15212590
Generation of digitized microfluidic filling flow by vent control.
Yoon, Junghyo; Lee, Eundoo; Kim, Jaehoon; Han, Sewoon; Chung, Seok
2017-06-15
Quantitative microfluidic point-of-care testing has been translated into clinical applications to support a prompt decision on patient treatment. A nanointerstice-driven filling technique has been developed to realize the fast and robust filling of microfluidic channels with liquid samples, but it has failed to provide a consistent filling time owing to the wide variation in liquid viscosity, resulting in an increase in quantification errors. There is a strong demand for simple and quick flow control to ensure accurate quantification, without a serious increase in system complexity. A new control mechanism employing two-beam refraction and one solenoid valve was developed and found to successfully generate digitized filling flow, completely free from errors due to changes in viscosity. The validity of digitized filling flow was evaluated by the immunoassay, using liquids with a wide range of viscosity. This digitized microfluidic filling flow is a novel approach that could be applied in conventional microfluidic point-of-care testing. Copyright © 2016 Elsevier B.V. All rights reserved.
Facilitating surgeon understanding of complex anatomy using a three-dimensional printed model.
Cromeens, Barrett P; Ray, William C; Hoehne, Brad; Abayneh, Fikir; Adler, Brent; Besner, Gail E
2017-08-01
3-dimensional prints (3DP) anecdotally facilitate surgeon understanding of anatomy and decision-making. However, the actual benefit to surgeons or patients has not been quantified. This study investigates how surgeon understanding of complex anatomy is altered by a 3DP compared to computed tomography (CT) scan or CT + digital reconstruction (CT + DR). Key anatomic features were segmented from a CT-abdomen/pelvis of pygopagus twins to build a DR and printed in color on a 3D printer. Pediatric surgery trainees and attendings (n = 21) were tested regarding anatomy identification and their understanding of point-to-point distances, scale, and shape. There was no difference between media regarding point-to-point distances. The 3DP led to an increased number of correct answers for questions of scale and shape compared to CT (P < 0.05). CT + DR performance was intermediate but not statistically different from 3DP or CT. Identification of anatomy was inconsistent between media; however, answers were significantly closer to correct when using the 3DP. Participants completed the test faster with the 3DP (6.6 ± 0.5 min) (P < 0.05) than with CT (18.9 ± 2.5 min) or CT + 3DR (14.9 ± 1.5 min). Although point-to-point measurements were not different, 3DP increased the understanding of shape, scale, and anatomy. It enabled understanding significantly faster than other media. In difficult surgical cases with complex anatomy and a need for efficient multidisciplinary coordination, 3D printed models should be considered for surgical planning. Copyright © 2017 Elsevier Inc. All rights reserved.
Resilience and tipping points of an exploited fish population over six decades.
Vasilakopoulos, Paraskevas; Marshall, C Tara
2015-05-01
Complex natural systems with eroded resilience, such as populations, ecosystems and socio-ecological systems, respond to small perturbations with abrupt, discontinuous state shifts, or critical transitions. Theory of critical transitions suggests that such systems exhibit fold bifurcations featuring folded response curves, tipping points and alternate attractors. However, there is little empirical evidence of fold bifurcations occurring in actual complex natural systems impacted by multiple stressors. Moreover, resilience of complex systems to change currently lacks clear operational measures with generic application. Here, we provide empirical evidence for the occurrence of a fold bifurcation in an exploited fish population and introduce a generic measure of ecological resilience based on the observed fold bifurcation attributes. We analyse the multivariate development of Barents Sea cod (Gadus morhua), which is currently the world's largest cod stock, over six decades (1949-2009), and identify a population state shift in 1981. By plotting a multivariate population index against a multivariate stressor index, the shift mechanism was revealed suggesting that the observed population shift was a nonlinear response to the combined effects of overfishing and climate change. Annual resilience values were estimated based on the position of each year in relation to the fitted attractors and assumed tipping points of the fold bifurcation. By interpolating the annual resilience values, a folded stability landscape was fit, which was shaped as predicted by theory. The resilience assessment suggested that the population may be close to another tipping point. This study illustrates how a multivariate analysis, supported by theory of critical transitions and accompanied by a quantitative resilience assessment, can clarify shift mechanisms in data-rich complex natural systems. © 2014 John Wiley & Sons Ltd.
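The fold-bifurcation geometry invoked here is captured by the cubic normal form dx/dt = μ + x − x³, whose equilibria trace a folded response curve with tipping points at μ = ±2/(3√3). The Python sketch below enumerates the equilibrium branches as the control parameter μ (the analogue of the multivariate stressor index) is varied; the normal form is a textbook stand-in, not the fitted model of the study.

```python
# Textbook fold-bifurcation sketch: for |mu| < 2/(3*sqrt(3)) three equilibria
# coexist (two attractors, one repeller); beyond the tipping points only one
# branch survives, producing an abrupt state shift.
import numpy as np

def equilibria(mu):
    """Real roots of mu + x - x^3 = 0 (1 or 3 branches depending on mu)."""
    roots = np.roots([-1.0, 0.0, 1.0, mu])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

for mu in (-0.6, -0.2, 0.0, 0.2, 0.6):
    print(mu, equilibria(mu))
```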
Faridounnia, Maryam; Wienk, Hans; Kovačič, Lidija; Folkers, Gert E; Jaspers, Nicolaas G J; Kaptein, Robert; Hoeijmakers, Jan H J; Boelens, Rolf
2015-08-14
The ERCC1-XPF heterodimer, a structure-specific DNA endonuclease, is best known for its function in the nucleotide excision repair (NER) pathway. The ERCC1 point mutation F231L, located at the hydrophobic interaction interface of ERCC1 (excision repair cross-complementation group 1) and XPF (xeroderma pigmentosum complementation group F), leads to severe NER pathway deficiencies. Here, we analyze biophysical properties and report the NMR structure of the complex of the C-terminal tandem helix-hairpin-helix domains of ERCC1-XPF that contains this mutation. The structures of wild type and the F231L mutant are very similar. The F231L mutation results in only a small disturbance of the ERCC1-XPF interface, where, in contrast to Phe(231), Leu(231) lacks interactions stabilizing the ERCC1-XPF complex. One of the two anchor points is severely distorted, and this results in a more dynamic complex, causing reduced stability and an increased dissociation rate of the mutant complex as compared with wild type. These data provide a biophysical explanation for the severe NER deficiencies caused by this mutation. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
Haghshenas, Maryam; Akbari, Mohammad Taghi; Karizi, Shohreh Zare; Deilamani, Faravareh Khordadpoor; Nafissi, Shahriar; Salehi, Zivar
2016-06-01
Duchenne and Becker muscular dystrophies (DMD and BMD) are X-linked neuromuscular diseases characterized by progressive muscular weakness and degeneration of skeletal muscles. Approximately two-thirds of the patients have large deletions or duplications in the dystrophin gene and the remaining one-third have point mutations. This study was performed to evaluate point mutations in Iranian DMD/BMD male patients. A total of 29 DNA samples from patients who did not show any large deletion/duplication mutations following multiplex polymerase chain reaction (PCR) and multiplex ligation-dependent probe amplification (MLPA) screening were sequenced for detection of point mutations in exons 50-79. Also exon 44 was sequenced in one sample in which a false positive deletion was detected by MLPA method. Cycle sequencing revealed four nonsense, one frameshift and two splice site mutations as well as two missense variants.
Using machine learning tools to model complex toxic interactions with limited sampling regimes.
Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W
2013-03-19
A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by one or a few stressors in natural systems. Thus, linking laboratory experiments, which are limited by practical considerations to a few stressors and a few levels of each, to real-world conditions is constrained. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
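A hedged sketch of the proposed two-step process: random sampling of the stressor hyperspace to define experimental conditions, followed by fitting an artificial neural network to the measured end points. The response function, sample sizes and network settings below are illustrative assumptions.

```python
# Hedged sketch: (1) sample the n-dimensional stressor hyperspace at random,
# (2) fit a neural network to the end points to recover the interaction
# surface. The stand-in response function is an assumption for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_stressors, n_experiments = 4, 300

# Step 1: random sampling of the stressor hyperspace [0, 1]^n.
X = rng.uniform(0.0, 1.0, (n_experiments, n_stressors))

# Stand-in biological end point with a nonlinear two-stressor interaction.
y = np.tanh(3 * X[:, 0] * X[:, 1]) + 0.3 * X[:, 2] \
    + rng.normal(0, 0.02, n_experiments)

# Step 2: extract a model of the complex interactions with an ANN.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                     random_state=0).fit(X, y)
print("training R^2:", round(model.score(X, y), 3))
```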
McKenzie, Brittney A.
2017-01-01
Measuring the temperature of a sample is a fundamental need in many biological and chemical processes. When the volume of the sample is on the microliter or nanoliter scale (e.g., cells, microorganisms, precious samples, or samples in microfluidic devices), accurate measurement of the sample temperature becomes challenging. In this work, we demonstrate a technique for accurately determining the temperature of microliter volumes using a simple 3D-printed microfluidic chip. We accomplish this by first filling “microfluidic thermometer” channels on the chip with substances with precisely known freezing/melting points. We then use a thermoelectric cooler to create a stable and linear temperature gradient along these channels within a measurement region on the chip. A custom software tool (available as online Supporting Information) is then used to find the locations of solid-liquid interfaces in the thermometer channels; these locations have known temperatures equal to the freezing/melting points of the substances in the channels. The software then uses the locations of these interfaces to calculate the temperature at any desired point within the measurement region. Using this approach, the temperature of any microliter-scale on-chip sample can be measured with an uncertainty of about a quarter of a degree Celsius. As a proof-of-concept, we use this technique to measure the unknown freezing point of a 50 microliter volume of solution and demonstrate its feasibility on a 400 nanoliter sample. Additionally, this technique can be used to measure the temperature of any on-chip sample, not just near-zero-Celsius freezing points. We demonstrate this by using an oil that solidifies near room temperature (coconut oil) in a microfluidic thermometer to measure on-chip temperatures well above zero Celsius. By providing a low-cost and simple way to accurately measure temperatures in small volumes, this technique should find applications in both research and educational laboratories. PMID:29284028
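Once the thermometer channels define a linear gradient, reading off an unknown temperature is a one-line interpolation between the two solid-liquid interfaces. A minimal Python sketch, with assumed interface positions and melting points:

```python
# Minimal sketch of the thermometer-channel calculation: two channels hold
# substances with known melting points; the positions of their solid-liquid
# interfaces define a linear gradient from which the temperature at any point
# in the measurement region is read off. All values are illustrative.
def temperature_at(x, interfaces):
    """interfaces: two (position_mm, melting_point_C) pairs on the gradient."""
    (x1, t1), (x2, t2) = interfaces
    slope = (t2 - t1) / (x2 - x1)          # degrees C per mm along the gradient
    return t1 + slope * (x - x1)

# Assumed example: water (0.0 C) and a -20 C melting-point standard.
interfaces = [(4.2, 0.0), (1.1, -20.0)]
print(temperature_at(3.0, interfaces))     # temperature at a sample location
```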
Circular motion geometry using minimal data.
Jiang, Guang; Quan, Long; Tsui, Hung-Tat
2004-06-01
Circular motion or single axis motion is widely used in computer vision and graphics for 3D model acquisition. This paper describes a new and simple method for recovering the geometry of uncalibrated circular motion from a minimal set of only two points in four images. This problem has previously been solved using nonminimal data, either by computing the fundamental matrix and trifocal tensor in three images or by fitting conics to tracked points in five or more images. It is first established that two sets of tracked points in different images under circular motion for two distinct space points are related by a homography. Then, we compute a plane homography from a minimal two points in four images. After that, we show that the unique pair of complex conjugate eigenvectors of this homography are the image of the circular points of the parallel planes of the circular motion. Subsequently, all other motion and structure parameters are computed from this homography in a straightforward manner. The experiments on real image sequences demonstrate the simplicity, accuracy, and robustness of the new method.
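The key computational step, extracting the complex conjugate eigenvectors of the recovered homography, takes only a few lines. In the sketch below the homography H is a hand-built stand-in (a plane similarity, whose fixed complex points are indeed the circular points); in the method itself H would be computed from the two tracked points in four images.

```python
# Hedged sketch: the unique complex-conjugate eigenvector pair of the plane
# homography gives the image of the circular points of the motion plane.
# H here is an assumed example, not a homography recovered from real data.
import numpy as np

H = np.array([[0.9, -0.4, 120.0],
              [0.4,  0.9,  35.0],
              [0.0,  0.0,   1.0]])   # stand-in plane similarity

w, v = np.linalg.eig(H)
pair = np.where(np.abs(w.imag) > 1e-9)[0]   # the conjugate eigenvalue pair
circular_points = v[:, pair].T              # images of the circular points I, J

# These fix the metric structure on the motion plane, from which the
# remaining motion and structure parameters follow.
print(circular_points)
```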
Determination of geostatistically representative sampling locations in Porsuk Dam Reservoir (Turkey)
NASA Astrophysics Data System (ADS)
Aksoy, A.; Yenilmez, F.; Duzgun, S.
2013-12-01
Several factors such as wind action, bathymetry and shape of a lake/reservoir, inflows, outflows, and point and diffuse pollution sources result in spatial and temporal variations in the water quality of lakes and reservoirs. The guides by the United Nations Environment Programme and the World Health Organization for designing and implementing water quality monitoring programs suggest that even a single monitoring station near the center or at the deepest part of a lake will be sufficient to observe long-term trends if there is good horizontal mixing; in stratified water bodies, several samples can be required. According to the guide for sampling and analysis under the Turkish Water Pollution Control Regulation, a minimum of five sampling locations should be employed to characterize the water quality in a reservoir or a lake. The European Union Water Framework Directive (2000/60/EC) requires the selection of a sufficient number of monitoring sites to assess the magnitude and impact of point and diffuse sources and hydromorphological pressures when designing a monitoring program. Although existing regulations and guidelines include frameworks for the determination of sampling locations in surface waters, most of them do not specify a procedure for establishing monitoring aims with representative sampling locations in lakes and reservoirs. In this study, geostatistical tools are used to determine representative sampling locations in the Porsuk Dam Reservoir (PDR). Kernel density estimation and kriging were used in combination to select the representative sampling locations. Dissolved oxygen and specific conductivity were measured at 81 points, sixteen of which were used for validation. In selecting the representative sampling locations, care was taken to preserve the spatial structure of the measured parameter distributions, and a procedure was proposed for that purpose. Results indicated that the spatial structure was lost below 30 sampling points, as a result of varying water quality in the reservoir due to inflows, point and diffuse inputs, and reservoir hydromorphology. Moreover, hot spots were determined based on kriging and standard error maps. Locations of the minimum number of sampling points that represent the actual spatial structure of the DO distribution in the Porsuk Dam Reservoir
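The screening idea, refitting the spatial model on candidate subsets and comparing the predicted surface and its standard errors against the full survey, can be sketched with a generic Gaussian-process stand-in for kriging. The coordinates, the stand-in DO field, the kernel and the subset size below are all illustrative assumptions, not the study's data or variogram model.

```python
# Hedged sketch of kriging-based screening of reduced sampling designs,
# using scikit-learn's Gaussian process as a stand-in for geostatistical
# kriging. All data and model choices are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, (81, 2))                    # sampling coordinates (m)
do = 8.0 + 0.002 * xy[:, 0] + rng.normal(0, 0.2, 81)  # stand-in DO field (mg/L)

def kriging_surface(xy, z):
    gp = GaussianProcessRegressor(kernel=RBF(200.0) + WhiteKernel(0.04),
                                  normalize_y=True).fit(xy, z)
    grid = np.array([[x, y] for x in np.linspace(0, 1000, 25)
                             for y in np.linspace(0, 1000, 25)])
    return gp.predict(grid, return_std=True)

full_mean, _ = kriging_surface(xy, do)
subset = rng.choice(81, size=30, replace=False)       # candidate reduced design
sub_mean, sub_std = kriging_surface(xy[subset], do[subset])
print("RMS surface difference:", np.sqrt(np.mean((full_mean - sub_mean) ** 2)))
print("max standard error (hot spot candidate):", sub_std.max())
```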
Proteins evolve on the edge of supramolecular self-assembly.
Garcia-Seisdedos, Hector; Empereur-Mot, Charly; Elad, Nadav; Levy, Emmanuel D
2017-08-10
The self-association of proteins into symmetric complexes is ubiquitous in all kingdoms of life. Symmetric complexes possess unique geometric and functional properties, but their internal symmetry can pose a risk. In sickle-cell disease, the symmetry of haemoglobin exacerbates the effect of a mutation, triggering assembly into harmful fibrils. Here we examine the universality of this mechanism and its relation to protein structure geometry. We introduced point mutations solely designed to increase surface hydrophobicity among 12 distinct symmetric complexes from Escherichia coli. Notably, all responded by forming supramolecular assemblies in vitro, as well as in vivo upon heterologous expression in Saccharomyces cerevisiae. Remarkably, in four cases, micrometre-long fibrils formed in vivo in response to a single point mutation. Biophysical measurements and electron microscopy revealed that mutants self-assembled in their folded states and so were not amyloid-like. Structural examination of 73 mutants identified supramolecular assembly hot spots predictable by geometry. A subsequent structural analysis of 7,471 symmetric complexes showed that geometric hot spots were buffered chemically by hydrophilic residues, suggesting a mechanism preventing mis-assembly of these regions. Thus, point mutations can frequently trigger folded proteins to self-assemble into higher-order structures. This potential is counterbalanced by negative selection and can be exploited to design nanomaterials in living cells.
An efficient sampling technique for sums of bandpass functions
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1982-01-01
A well known sampling theorem states that a bandlimited function can be completely determined by its values at a uniformly placed set of points whose density is at least twice the highest frequency component of the function (Nyquist rate). A less familiar but important sampling theorem states that a bandlimited narrowband function can be completely determined by its values at a properly chosen, nonuniformly placed set of points whose density is at least twice the passband width. This allows for efficient digital demodulation of narrowband signals, which are common in sonar, radar and radio interferometry, without the side effect of signal group delay from an analog demodulator. This theorem was extended by developing a technique which allows a finite sum of bandlimited narrowband functions to be determined by its values at a properly chosen, nonuniformly placed set of points whose density can be made arbitrarily close to the sum of the passband widths.
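As a numerical illustration of the single-band, uniform-rate special case of this idea (the paper's nonuniform, multi-band technique is not reproduced here; the band placement and tone frequencies are invented):

    # Sketch: a 90-100 kHz narrowband signal sampled at only 20 kHz,
    # i.e. twice the 10 kHz passband width, far below the ~200 kHz Nyquist rate.
    import numpy as np

    fs_fast = 1.0e6                      # dense "analog" simulation rate (Hz)
    t = np.arange(0, 0.02, 1 / fs_fast)
    tones = [91e3, 95e3, 99e3]           # hypothetical passband content
    x = sum(np.cos(2 * np.pi * f * t) for f in tones)

    fs_slow = 20e3                       # 2x the passband width
    xs = x[::int(fs_fast / fs_slow)]     # "sample" by decimation

    spec = np.abs(np.fft.rfft(xs))
    freqs = np.fft.rfftfreq(len(xs), 1 / fs_slow)
    print(sorted(freqs[np.argsort(spec)[-3:]]))   # folded tones at 1, 5, 9 kHz

The three tones reappear at 9, 5 and 1 kHz after spectral folding, still distinct from one another, so the 20 kHz rate loses none of the band's content even though the Nyquist rate for the 99 kHz tone would be nearly ten times higher.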
Crumb Rubber-Concrete Panels Under Blast Loads
2010-05-01
... and the samples were labeled. Samples were picked up with an overhead crane and a form spreader connected to two points on the sample, each outside ... uniform loading. Shortly after the test started, 8 to 9 cracks developed within the quarter points and 2 cracks developed through the pick points where the form spreader ...
Minet, E P; Goodhue, R; Meier-Augenstein, W; Kalin, R M; Fenton, O; Richards, K G; Coxon, C E
2017-11-01
Excessive nitrate (NO₃⁻) concentration in groundwater raises health and environmental issues that must be addressed by all European Union (EU) member states under the Nitrates Directive and the Water Framework Directive. The identification of NO₃⁻ sources is critical to efficiently control or reverse NO₃⁻ contamination that affects many aquifers. In that respect, the use of stable isotope ratios ¹⁵N/¹⁴N and ¹⁸O/¹⁶O in NO₃⁻ (expressed as δ¹⁵N-NO₃⁻ and δ¹⁸O-NO₃⁻, respectively) has long shown its value. However, limitations exist in complex environments where multiple nitrogen (N) sources coexist. This two-year study explores a method for improved NO₃⁻ source investigation in a shallow unconfined aquifer with mixed N inputs and a long established NO₃⁻ problem. In this tillage-dominated area of free-draining soil and subsoil, suspected NO₃⁻ sources were diffuse applications of artificial fertiliser and organic point sources (septic tanks and farmyards). Bearing in mind that artificial diffuse sources were ubiquitous, groundwater samples were first classified according to a combination of two indicators relevant to point source contamination: presence/absence of organic point sources (i.e. septic tank and/or farmyard) near sampling wells and exceedance/non-exceedance of a contamination threshold value for sodium (Na⁺) in groundwater. This classification identified three contamination groups: agricultural diffuse source but no point source (D+P-), agricultural diffuse and point source (D+P+), and agricultural diffuse but point source occurrence ambiguous (D+P±). Thereafter δ¹⁵N-NO₃⁻ and δ¹⁸O-NO₃⁻ data were superimposed on the classification. As δ¹⁵N-NO₃⁻ was plotted against δ¹⁸O-NO₃⁻, comparisons were made between the different contamination groups. Overall, both δ variables were significantly and positively correlated (p < 0.0001, rₛ = 0.599, slope of 0.5), which was indicative of denitrification. An inspection of the contamination groups revealed that denitrification did not occur in the absence of point source contamination (group D+P-). In fact, strong significant denitrification lines occurred only in the D+P+ and D+P± groups (p < 0.0001, rₛ > 0.6, 0.53 ≤ slope ≤ 0.76), i.e. where point source contamination was characterised or suspected. These lines originated from the 2-6‰ range for δ¹⁵N-NO₃⁻, which suggests that i) NO₃⁻ contamination was dominated by an agricultural diffuse N source (most likely the large organic matter pool that has incorporated ¹⁵N-depleted nitrogen from artificial fertiliser in agricultural soils and whose nitrification is stimulated by ploughing and fertilisation) rather than point sources and ii) denitrification was possibly favoured by high dissolved organic content (DOC) from point sources. Combining contamination indicators and a large stable isotope dataset collected over a large study area could therefore improve our understanding of the NO₃⁻ contamination processes in groundwater for better land use management. We hypothesise that in future research, additional contamination indicators (e.g. pharmaceutical molecules) could also be combined to disentangle NO₃⁻ contamination from animal and human wastes. Copyright © 2017 Elsevier Ltd. All rights reserved.
1. GENERAL VIEW OF WEST FACE OF ENTRY CONTROL POINT ...
1. GENERAL VIEW OF WEST FACE OF ENTRY CONTROL POINT (BLDG. 768) SHOWING RELATIVE POSITION TO TECHNICAL SUPPORT BUILDING (BLDG. 762/762A) AND SLC-3 AIR FORCE BUILDING (BLDG. 761) - Vandenberg Air Force Base, Space Launch Complex 3, Entry Control Point, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
"I'm Ambivalent about It": The Dilemmas of PowerPoint
ERIC Educational Resources Information Center
Hill, Andrea; Arford, Tammi; Lubitow, Amy; Smollin, Leandra M.
2012-01-01
The increasing ubiquity of PowerPoint in the university classroom raises complex questions about pedagogy and the creation of dynamic and effective learning environments. Though much of the sociological teaching literature has focused on engagement and active learning, very little of this work has addressed the presence of PowerPoint in sociology…
Morphological Effects in Auditory Word Recognition: Evidence from Danish
ERIC Educational Resources Information Center
Balling, Laura Winther; Baayen, R. Harald
2008-01-01
In this study, we investigate the processing of morphologically complex words in Danish using auditory lexical decision. We document a second critical point in auditory comprehension in addition to the Uniqueness Point (UP), namely the point at which competing morphological continuation forms of the base cease to be compatible with the input,…
The Impact of Soil Sampling Errors on Variable Rate Fertilization
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. L. Hoskinson; R C. Rope; L G. Blackwood
2004-07-01
Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analysis or better sampling methods may need to be done, or more samples collected, to ensure that the soil measurements are truly representative of the field’s spatial variability.
Everolimus-Eluting Stents or Bypass Surgery for Left Main Coronary Artery Disease.
Stone, Gregg W; Sabik, Joseph F; Serruys, Patrick W; Simonton, Charles A; Généreux, Philippe; Puskas, John; Kandzari, David E; Morice, Marie-Claude; Lembo, Nicholas; Brown, W Morris; Taggart, David P; Banning, Adrian; Merkely, Béla; Horkay, Ferenc; Boonstra, Piet W; van Boven, Ad J; Ungi, Imre; Bogáts, Gabor; Mansour, Samer; Noiseux, Nicolas; Sabaté, Manel; Pomar, José; Hickey, Mark; Gershlick, Anthony; Buszman, Pawel; Bochenek, Andrzej; Schampaert, Erick; Pagé, Pierre; Dressler, Ovidiu; Kosmidou, Ioanna; Mehran, Roxana; Pocock, Stuart J; Kappetein, A Pieter
2016-12-08
Patients with obstructive left main coronary artery disease are usually treated with coronary-artery bypass grafting (CABG). Randomized trials have suggested that drug-eluting stents may be an acceptable alternative to CABG in selected patients with left main coronary disease. We randomly assigned 1905 eligible patients with left main coronary artery disease of low or intermediate anatomical complexity to undergo either percutaneous coronary intervention (PCI) with fluoropolymer-based cobalt-chromium everolimus-eluting stents (PCI group, 948 patients) or CABG (CABG group, 957 patients). Anatomic complexity was assessed at the sites and defined by a Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery (SYNTAX) score of 32 or lower (the SYNTAX score reflects a comprehensive angiographic assessment of the coronary vasculature, with 0 as the lowest score and higher scores [no upper limit] indicating more complex coronary anatomy). The primary end point was the rate of a composite of death from any cause, stroke, or myocardial infarction at 3 years, and the trial was powered for noninferiority testing of the primary end point (noninferiority margin, 4.2 percentage points). Major secondary end points included the rate of a composite of death from any cause, stroke, or myocardial infarction at 30 days and the rate of a composite of death, stroke, myocardial infarction, or ischemia-driven revascularization at 3 years. Event rates were based on Kaplan-Meier estimates in time-to-first-event analyses. At 3 years, a primary end-point event had occurred in 15.4% of the patients in the PCI group and in 14.7% of the patients in the CABG group (difference, 0.7 percentage points; upper 97.5% confidence limit, 4.0 percentage points; P=0.02 for noninferiority; hazard ratio, 1.00; 95% confidence interval, 0.79 to 1.26; P=0.98 for superiority). The secondary end-point event of death, stroke, or myocardial infarction at 30 days occurred in 4.9% of the patients in the PCI group and in 7.9% in the CABG group (P<0.001 for noninferiority, P=0.008 for superiority). The secondary end-point event of death, stroke, myocardial infarction, or ischemia-driven revascularization at 3 years occurred in 23.1% of the patients in the PCI group and in 19.1% in the CABG group (P=0.01 for noninferiority, P=0.10 for superiority). In patients with left main coronary artery disease and low or intermediate SYNTAX scores by site assessment, PCI with everolimus-eluting stents was noninferior to CABG with respect to the rate of the composite end point of death, stroke, or myocardial infarction at 3 years. (Funded by Abbott Vascular; EXCEL ClinicalTrials.gov number, NCT01205776.)
Tribological behaviour and statistical experimental design of sintered iron-copper based composites
NASA Astrophysics Data System (ADS)
Popescu, Ileana Nicoleta; Ghiţă, Constantin; Bratu, Vasile; Palacios Navarro, Guillermo
2013-11-01
The sintered iron-copper based composites for automotive brake pads have a complex composition and should have good physical, mechanical and tribological characteristics. In this paper, we obtained frictional composites by the Powder Metallurgy (P/M) technique and characterized them from the microstructural and tribological points of view. The morphology of the raw powders was determined by SEM, and the surfaces of the obtained sintered friction materials were analyzed by ESEM, EDS elemental and compo-image analyses. One lot of samples was tested on a "pin-on-disc" type wear machine under dry sliding conditions, at applied loads between 3.5 and 11.5 × 10⁻¹ MPa and relative speeds at the braking point between 12.5 and 16.9 m/s, at constant temperature. The other lot of samples was tested on an inertial test stand according to a methodology simulating the real conditions of dry friction, at a contact pressure of 2.5-3 MPa, at 300-1200 rpm. The most important characteristic required of sintered friction materials is a high and stable friction coefficient during braking; for high durability in service, they must also have low wear, high corrosion resistance, high thermal conductivity, mechanical resistance and thermal stability at elevated temperature. Because of the importance of the tribological characteristics (wear rate and friction coefficient) of sintered iron-copper based composites, we predicted the tribological behaviour through statistical analysis. For the first lot of samples, the response variables Yi (the wear rate and friction coefficient) were correlated with x1 and x2 (the coded values of applied load and relative speed at the braking point, respectively) using a linear factorial design approach. We obtained brake friction materials with improved wear resistance and high, stable friction coefficients. The experimental data and the obtained linear regression equations show that the wear rate of the sintered composites increases with increasing applied load and relative speed, while under the same conditions the friction coefficients slowly decrease.
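The linear factorial fit described above can be sketched in a few lines; the coded 2x2 design is standard, but the wear values below are invented for illustration:

    # Sketch: linear factorial fit Y = b0 + b1*x1 + b2*x2 for coded factors
    # (applied load x1, relative speed x2). Data values are hypothetical.
    import numpy as np

    x1 = np.array([-1, -1,  1,  1], float)   # coded applied load
    x2 = np.array([-1,  1, -1,  1], float)   # coded relative speed
    wear = np.array([2.1, 2.6, 3.0, 3.8])    # invented wear rates

    X = np.column_stack([np.ones(4), x1, x2])
    b, *_ = np.linalg.lstsq(X, wear, rcond=None)
    print(b)   # positive b1, b2: wear rate rises with load and speed, as reported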
Pan, Feng; Tao, Guohua
2013-03-07
Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.
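The abstract does not give the sampling weights, but the core idea of drawing the second phase point correlated with the first (rather than independently) for the bath DOF can be sketched generically; the Gaussian form and the correlation coefficient rho are assumptions:

    # Sketch: two correlated initial phase-space draws per bath DOF,
    # instead of two independent draws (generic illustration, not the
    # paper's exact importance-sampling scheme).
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_pair(n_dof, rho=0.9):
        # returns two Gaussian phase points with correlation rho per DOF
        z1 = rng.standard_normal(n_dof)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_dof)
        return z1, z2

    q1, q2 = sample_pair(21)   # e.g. a 21-DOF system-bath benchmark

Because the correlated pair concentrates samples where the two-path integrand is large, far fewer Monte Carlo points are needed for the same variance, which is the performance gain the abstract reports.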
Xu, Hongyi; Barbic, Jernej
2017-01-01
We present an algorithm for fast continuous collision detection between points and signed distance fields, and demonstrate how to robustly use it for 6-DoF haptic rendering of contact between objects with complex geometry. Continuous collision detection is often needed in computer animation, haptics, and virtual reality applications, but has so far only been investigated for polygon (triangular) geometry representations. We demonstrate how to robustly and continuously detect intersections between points and level sets of the signed distance field. We suggest using an octree subdivision of the distance field for fast traversal of distance field cells. We also give a method to resolve continuous collisions between point clouds organized into a tree hierarchy and a signed distance field, enabling rendering of contact between rigid objects with complex geometry. We investigate and compare two 6-DoF haptic rendering methods now applicable to point-versus-distance field contact for the first time: continuous integration of penalty forces, and a constraint-based method. An experimental comparison to discrete collision detection demonstrates that the continuous method is more robust and can correctly resolve collisions even under high velocities and during complex contact.
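A minimal sketch of one way to detect a point's continuous collision with the zero level set of a signed distance field, using the SDF's Lipschitz property to take safe steps along the motion segment (sphere tracing); the paper's octree traversal, point-cloud hierarchy and haptic force computation are omitted:

    # Sketch: first crossing of sdf = 0 by a point moving from p0 to p1.
    # Since |grad(sdf)| <= 1 for a true SDF, a step of d/|segment| along
    # the segment can never jump past the surface.
    import numpy as np

    def sdf_sphere(p, center=np.zeros(3), radius=1.0):
        return np.linalg.norm(p - center) - radius

    def first_hit(p0, p1, sdf, eps=1e-6):
        # returns t in [0, 1] of the first surface crossing, or None
        seg = p1 - p0
        seg_len = np.linalg.norm(seg)
        t = 0.0
        while t <= 1.0:
            d = sdf(p0 + t * seg)
            if d < eps:
                return t              # contact found
            t += d / seg_len          # conservative advance
        return None

    t = first_hit(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 3.0]), sdf_sphere)
    print(t)   # ~1/3: the point reaches the unit sphere at z = -1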
Remote temperature-set-point controller
Burke, W.F.; Winiecki, A.L.
1984-10-17
An instrument is described for carrying out mechanical strain tests on metallic samples with the addition of means for varying the temperature with strain. The instrument includes opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
Remote temperature-set-point controller
Burke, William F.; Winiecki, Alan L.
1986-01-01
An instrument for carrying out mechanical strain tests on metallic samples with the addition of an electrical system for varying the temperature with strain, the instrument including opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
Does Maltreatment in Childhood Affect Sexual Orientation in Adulthood?
Roberts, Andrea L.; Glymour, M. Maria; Koenen, Karestan C.
2012-01-01
Epidemiological studies find a positive association between physical and sexual abuse, neglect, and witnessing violence in childhood and same-sex sexuality in adulthood, but studies directly assessing the association between these diverse types of maltreatment and sexuality cannot disentangle the causal direction because the sequencing of maltreatment and emerging sexuality is difficult to ascertain. Nascent same-sex orientation may increase risk of maltreatment; alternatively, maltreatment may shape sexual orientation. Our study used instrumental variable models based on family characteristics that predict maltreatment but are not plausibly influenced by sexual orientation (e.g., having a stepparent) as natural experiments to investigate whether maltreatment might increase the likelihood of same-sex sexuality in a nationally representative sample (n = 34,653). In instrumental variable models, history of sexual abuse predicted increased prevalence of same-sex attraction by 2.0 percentage points (95% confidence interval [CI] = 1.4, 2.5), any same-sex partners by 1.4 percentage points (95% CI = 1.0, 1.9), and same-sex identity by 0.7 percentage points (95% CI = 0.4, 0.9). Effects of sexual abuse on men’s sexual orientation were substantially larger than on women’s. Effects of non-sexual maltreatment were significant only for men and women’s sexual identity and women’s same-sex partners. While point estimates suggest much of the association between maltreatment and sexual orientation may be due to the effects of maltreatment on sexual orientation, confidence intervals were wide. Our results suggest that causal relationships driving the association between sexual orientation and childhood abuse may be bidirectional, may differ by type of abuse, and may differ by sex. Better understanding of this potentially complex causal structure is critical to developing targeted strategies to reduce sexual orientation disparities in exposure to abuse. PMID:22976519
NASA Astrophysics Data System (ADS)
Yun, Wanying; Lu, Zhenzhou; Jiang, Xian
2018-06-01
To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then proposed on this basis. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme. The maximum length of the subintervals decreases as the number of sample points of the model input variables increases, which guarantees the convergence condition of the space-partition approach. Furthermore, a new interpretation of the partition idea is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
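A minimal sketch of the partition idea behind such estimators: bin the outputs by one input at a time and apply the law of total variance, so a single sample set yields every main effect. The bin count and the Ishigami test function are illustrative choices, not the paper's:

    # Sketch: first-order sensitivity indices S_i ~ Var(E[Y|X_i]) / Var(Y)
    # from one Monte Carlo sample, by partitioning on each input in turn.
    import numpy as np

    rng = np.random.default_rng(1)
    n, bins = 100_000, 50
    X = rng.uniform(-np.pi, np.pi, size=(n, 3))

    # Ishigami function, a standard sensitivity-analysis benchmark
    Y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1])**2 + 0.1 * X[:, 2]**4 * np.sin(X[:, 0])

    edges = np.linspace(-np.pi, np.pi, bins + 1)
    for i in range(3):
        idx = np.clip(np.digitize(X[:, i], edges) - 1, 0, bins - 1)
        cond_means = np.array([Y[idx == k].mean() for k in range(bins)])
        counts = np.bincount(idx, minlength=bins)
        var_cond = np.average((cond_means - Y.mean())**2, weights=counts)
        print(f"S{i+1} ~ {var_cond / Y.var():.3f}")   # ~0.31, 0.44, 0.00 analytically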
A program code generator for multiphysics biological simulation using markup languages.
Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi
2012-01-01
To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, the simulation software itself becomes complex in these environments, and thus it is difficult to modify the simulation conditions, target computation resources or calculation methods. In complex biological function simulation software there are 1) model equations, 2) boundary conditions and 3) calculation schemes. A model description file is useful for the first point and partly for the second, but the third is difficult to handle because various calculation schemes are required for simulation models constructed from two or more elementary models. We introduce a simulation software generation system that uses a description-language-based specification of the coupling calculation scheme together with a cell model description file. Using this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is shown.
Silvey, Garry M.; Macri, Jennifer M.; Lee, Paul P.; Lobach, David F.
2005-01-01
New mobile computing devices including personal digital assistants (PDAs) and tablet computers have emerged to facilitate data collection at the point of care. Unfortunately, little research has been reported regarding which device is optimal for a given care setting. In this study we created and compared functionally identical applications on a Palm operating system-based PDA and a Windows-based tablet computer for point-of-care documentation of clinical observations by eye care professionals when caring for patients with diabetes. Eye-care professionals compared the devices through focus group sessions and through validated usability surveys. We found that the application on the tablet computer was preferred over the PDA for documenting the complex data related to eye care. Our findings suggest that the selection of a mobile computing platform depends on the amount and complexity of the data to be entered; the tablet computer functions better for high volume, complex data entry, and the PDA, for low volume, simple data entry. PMID:16779128
From Invention to Innovation: Risk Analysis to Integrate One Health Technology in the Dairy Farm
Lombardo, Andrea; Boselli, Carlo; Amatiste, Simonetta; Ninci, Simone; Frazzoli, Chiara; Dragone, Roberto; De Rossi, Alberto; Grasso, Gerardo; Mantovani, Alberto; Brajon, Giovanni
2017-01-01
Current Hazard Analysis Critical Control Points (HACCP) approaches mainly fit the food industry, while their application in primary food production is still rudimentary. The European food safety framework calls for science-based support to the primary producers’ mandate for legal, scientific, and ethical responsibility in food supply. The multidisciplinary and interdisciplinary project ALERT pivots on the development of the technological invention (BEST platform) and application of its measurable (bio)markers—as well as scientific advances in risk analysis—at strategic points of the milk chain for time and cost-effective early identification of unwanted and/or unexpected events of both microbiological and toxicological nature. Health-oriented innovation is complex and subject to multiple variables. Through field activities in a dairy farm in central Italy, we explored individual components of the dairy farm system to overcome concrete challenges for the application of translational science in real life and (veterinary) public health. Based on an HACCP-like approach in animal production, the farm characterization focused on points of particular attention (POPAs) and critical control points to draw a farm management decision tree under the One Health view (environment, animal health, food safety). The analysis was based on the integrated use of checklists (environment; agricultural and zootechnical practices; animal health and welfare) and laboratory analyses of well water, feed and silage, individual fecal samples, and bulk milk. The understanding of complex systems is a condition to accomplish true innovation through new technologies. BEST is a detection and monitoring system in support of production security, quality and safety: a grid of its (bio)markers can find direct application in critical points for early identification of potential hazards or anomalies. The HACCP-like self-monitoring in primary production is feasible, as well as the biomonitoring of live food producing animals as sentinel population for One Health. PMID:29218304
From Invention to Innovation: Risk Analysis to Integrate One Health Technology in the Dairy Farm.
Lombardo, Andrea; Boselli, Carlo; Amatiste, Simonetta; Ninci, Simone; Frazzoli, Chiara; Dragone, Roberto; De Rossi, Alberto; Grasso, Gerardo; Mantovani, Alberto; Brajon, Giovanni
2017-01-01
Current Hazard Analysis Critical Control Points (HACCP) approaches mainly fit the food industry, while their application in primary food production is still rudimentary. The European food safety framework calls for science-based support to the primary producers' mandate for legal, scientific, and ethical responsibility in food supply. The multidisciplinary and interdisciplinary project ALERT pivots on the development of the technological invention (BEST platform) and application of its measurable (bio)markers-as well as scientific advances in risk analysis-at strategic points of the milk chain for time and cost-effective early identification of unwanted and/or unexpected events of both microbiological and toxicological nature. Health-oriented innovation is complex and subject to multiple variables. Through field activities in a dairy farm in central Italy, we explored individual components of the dairy farm system to overcome concrete challenges for the application of translational science in real life and (veterinary) public health. Based on an HACCP-like approach in animal production, the farm characterization focused on points of particular attention (POPAs) and critical control points to draw a farm management decision tree under the One Health view (environment, animal health, food safety). The analysis was based on the integrated use of checklists (environment; agricultural and zootechnical practices; animal health and welfare) and laboratory analyses of well water, feed and silage, individual fecal samples, and bulk milk. The understanding of complex systems is a condition to accomplish true innovation through new technologies. BEST is a detection and monitoring system in support of production security, quality and safety: a grid of its (bio)markers can find direct application in critical points for early identification of potential hazards or anomalies. The HACCP-like self-monitoring in primary production is feasible, as well as the biomonitoring of live food producing animals as sentinel population for One Health.
NASA Astrophysics Data System (ADS)
Wan, L. G.; Lin, Q.; Bian, D. J.; Ren, Q. K.; Xiao, Y. B.; Lu, W. X.
2018-02-01
In order to reveal spatial differences in the bacterial community structure in the Micro-pressure Air-lift Loop Reactor, the activated sludge bacteria at five representative sites in the reactor were studied by denaturing gradient gel electrophoresis (DGGE). The DGGE results showed that differences in environmental conditions (such as substrate concentration, dissolved oxygen and pH) resulted in different diversity and similarity of the microbial flora at different spatial locations. The Shannon-Wiener diversity index of the total bacterial samples from the five sludge samples varied from 0.92 to 1.28; the biodiversity index was smallest at point 5 and highest at point 2. The similarity of the flora among points 2, 3 and 4 was 80% or more. The similarity between point 5 and the other samples was below 70%, and its similarity to point 2 was only 59.2%. Because different strains contribute differently to the removal of pollutants, these results can be used to exploit the synergistic effect of bacterial degradation of pollutants and further improve the efficiency of sewage treatment.
NASA Astrophysics Data System (ADS)
Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai
2008-04-01
Surface reconstruction is an important task in the fields of 3D GIS, computer aided design and computer graphics (CAD & CG), virtual simulation, and so on. Building on existing incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud using curvature extrema and a minimum spanning tree. By projecting local sample points onto fitted tangent planes and using the extracted features to guide and constrain local triangulation and surface propagation, the topological relationships among the sample points are established. For the constructed models, a process named consistent normal adjustment and regularization is applied to adjust the normal of each face so that a correct surface model is obtained. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods while avoiding improper propagation of normals across sharp edges, which greatly improves the applicability of incremental surface reconstruction. Moreover, an appropriate k-neighborhood helps to recognize insufficiently sampled areas and boundary parts, and the presented approach can be used to reconstruct both open and closed surfaces without additional interference.
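The local projection step this approach relies on can be sketched as a PCA fit of the k-neighborhood tangent plane (feature extraction and surface propagation are omitted; k and the test cloud are arbitrary):

    # Sketch: local tangent plane at a sample point from its k nearest
    # neighbors, via PCA (the projection step before local triangulation).
    import numpy as np

    def tangent_plane(points, i, k=8):
        # returns (centroid, unit normal) of the k-NN tangent plane at point i
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[:k]]
        c = nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs - c)
        return c, vt[-1]              # normal = direction of least variance

    pts = np.random.default_rng(2).normal(size=(200, 3))
    pts[:, 2] *= 0.01                 # noisy, nearly planar test cloud
    c, n = tangent_plane(pts, 0)
    print(n)                          # approximately (0, 0, +-1)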
Frequency-Dependence of Relative Permeability in Steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowler, N.
2006-03-06
A study to characterize metal plates by means of a model-based, broadband, four-point potential drop measurement technique has shown that the relative permeability of alloy 1018 low-carbon steel is complex and a function of frequency. A magnetic relaxation is observed at approximately 5 kHz. The relaxation can be described in terms of a parametric (Cole-Cole) model. Factors which influence the frequency, amplitude and breadth of the relaxation, such as applied current amplitude, sample geometry and disorder (e.g. percent carbon content and surface condition), are considered.
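For reference, a Cole-Cole parametric relaxation for a complex, frequency-dependent relative permeability can be written as mu(f) = mu_inf + (mu_0 - mu_inf) / (1 + (j f / f_r)^(1 - alpha)). The sketch below evaluates it with invented parameter values; only the roughly 5 kHz relaxation frequency echoes the abstract:

    # Sketch: Cole-Cole relaxation of complex relative permeability.
    # Parameter values are assumptions, not the paper's fitted values.
    import numpy as np

    def cole_cole(f, mu0=120.0, mu_inf=60.0, f_r=5e3, alpha=0.2):
        return mu_inf + (mu0 - mu_inf) / (1 + (1j * f / f_r)**(1 - alpha))

    f = np.logspace(2, 6, 9)       # 100 Hz .. 1 MHz
    mu = cole_cole(f)              # complex; loss peak near f_r ~ 5 kHz
    print(f[np.abs(mu.imag).argmax()])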
GKI chloride in water, analysis method. GKI boron in water, analysis method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morriss, L.L.
1979-05-01
Procedures for the chemical analysis of chlorides and boron in water are presented. Chlorides can be titrated with mercuric nitrate to form mercuric chloride. At pH 2.3 to 2.8, diphenylcarbazone indicates the end point of this titration by formation of a purple complex with mercury ions. When a sample of water containing boron is acidified and evaporated in the presence of curcumin, a red colored product called rosocyanine is formed. This is dissolved and can be measured photometrically or visually. (DMC)
Stough, Con; King, Rebecca; Papafotiou, Katherine; Swann, Phillip; Ogden, Edward; Wesnes, Keith; Downey, Luke A
2012-04-01
This study investigated the acute (3-h) and 24-h post-dose cognitive effects of oral 3,4-methylenedioxymethamphetamine (MDMA), d-methamphetamine, and placebo in a within-subject double-blind laboratory-based study in order to compare the effect of these two commonly used illicit drugs on a large number of recreational drug users. Sixty-one abstinent recreational users of illicit drugs comprised the participant sample, with 33 females and 28 males, mean age 25.45 years. The three testing sessions involved oral consumption of 100 mg MDMA, 0.42 mg/kg d-methamphetamine, or a matching placebo. The drug administration was counter-balanced, double-blind, and medically supervised. Cognitive performance was assessed during drug peak (3 h) and at 24 h post-dosing time-points. Blood samples were also taken to quantify the levels of drug present at the cognitive testing time-points. Blood concentrations of both methamphetamine and MDMA at drug peak samples were consistent with levels observed in previous studies. The major findings concern poorer performance in the MDMA condition at peak concentration for the trail-making measures and an index of working memory (trend level), and more accurate performance on a choice reaction task within the methamphetamine condition. Most of the differences in performance between the MDMA, methamphetamine, and placebo treatments diminished by the 24-h testing time-point, although some performance improvements subsisted for choice reaction time for the methamphetamine condition. Further research into the acute effects of amphetamine preparations is necessary to further quantify the acute disruption of aspects of human functioning crucial to complex activities such as attention, selective memory, and psychomotor performance.
Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range
NASA Technical Reports Server (NTRS)
Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We prove mathematically that, in order to avoid point-optimization at the sampled design points in multipoint airfoil optimization, the number of design points must be greater than the number of free design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free design variables. A comparison with other airfoil optimization methods is also included.
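The counting argument in the first sentence has a simple least-squares analogue: with no more sample (design) points than free variables, the sampled objective can be matched exactly while nothing constrains off-sample behavior. A sketch with invented dimensions:

    # Sketch: "point-optimization" as underdetermined least squares.
    # With 5 design points and 20 free variables, the sampled residual
    # can be driven to ~0 regardless of off-sample performance.
    import numpy as np

    rng = np.random.default_rng(3)
    n_vars, m_pts = 20, 5
    A = rng.normal(size=(m_pts, n_vars))    # sensitivities at sampled conditions
    target = rng.normal(size=m_pts)

    x, *_ = np.linalg.lstsq(A, target, rcond=None)
    print(np.linalg.norm(A @ x - target))   # ~0: perfect fit at the samples only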
Long-term purity assessment in succinonitrile
NASA Astrophysics Data System (ADS)
Rubinstein, E. R.; Tirmizi, S. H.; Glicksman, M. E.
1990-11-01
Container materials for crystal growth chambers must be carefully selected in order to prevent sample contamination. To address the issue of contamination, high purity SCN was exposed to a variety of potential chamber construction materials, e.g., metal alloys, soldering materials, and sealants, at a temperature approximately 25 K above the melting point of SCN (58°C), over periods of up to one year. Acceptability, or lack thereof, of candidate chamber materials was determined by performing periodic melting point checks of the exposed samples. Those materials which did not measurably affect the melting point of SCN over a one-year period were considered to be chemically compatible and therefore eligible for use in constructing the flight chamber. A growth chamber constructed from compatible materials (304 SS and borosilicate glass) was filled with pure SCN. A thermistor probe placed within the chamber permitted in situ measurement of the melting point and, indirectly, of the purity of the SCN. Melting point plateaus were then determined, to assess the actual chamber performance.
LeBlanc, Denis R.
2003-01-01
Diffusion samplers and temporary drive points were used to test for ordnance-related compounds in ground water discharging to Snake Pond near Camp Edwards at the Massachusetts Military Reservation, Cape Cod, MA. The contamination resulted from artillery use and weapons testing at various ranges upgradient of the pond. The diffusion samplers were constructed with a high-grade cellulose membrane that allowed diffusion of explosive compounds, such as RDX (Hexahydro-1,3,5-trinitro-1,3,5-triazine) and HMX (Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine), into deionized water inside the samplers. Laboratory tests confirmed that the cellulose membrane was permeable to RDX and HMX. One transect of 22 diffusion samplers was installed and retrieved in August-September 2001, and 12 transects with a total of 108 samplers were installed and retrieved in September-October 2001. The diffusion samplers were buried about 0.5 feet into the pond-bottom sediments by scuba divers and allowed to equilibrate with the ground water beneath the pond bottom for 13 to 27 days before retrieval. Water samples were collected from temporary well points driven about 2-4 feet into the pond bottom at 21 sites in December 2001 and March 2002 for analysis of explosives and perchlorate to confirm the diffusion-sampling results. The water samples from the diffusion samplers exhibited numerous chromatographic peaks, but evaluation of the photo-diode-array spectra indicated that most of the peaks did not represent the target compounds. The peaks probably are associated with natural organic compounds present in the soft, organically enriched pond-bottom sediments. The presence of four explosive compounds at five widely spaced sites was confirmed by the photo-diode-array analysis, but the compounds are not generally found in contaminated ground water near the ranges. No explosives were detected in water samples obtained from the drive points. Perchlorate was detected at less than 1 microgram per liter in two drive-point samples collected at the same site on two dates about 3 months apart. The source of the perchlorate in the samples could not be related directly to other contamination from Camp Edwards with the available information. The results from the diffusion and drive-point sampling do not indicate an area of ground-water discharge with concentrations of the ordnance-related compounds that are sufficiently elevated to be detected by these sampling methods. The diffusion and drive-point sampling data cannot be interpreted further without additional information concerning the pattern of ground-water flow at Snake Pond and the distributions of RDX, HMX, and perchlorate in ground water in the aquifer near the pond.
CCOMP: An efficient algorithm for complex roots computation of determinantal equations
NASA Astrophysics Data System (ADS)
Zouros, Grigorios P.
2018-01-01
In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the presented method is the efficient determination of the candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points that guarantee the existence of minima, and finally, the computation of minima via bound-constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated on a variety of microwave applications.
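The minimization stage can be sketched as follows: treat the minimum-modulus eigenvalue of the system matrix as a function of (Re z, Im z) and minimize it from a candidate point. The toy matrix and starting point are invented; CCOMP's candidate detection and bound constraints are not reproduced:

    # Sketch: a root of det(A(z)) = 0 found by minimizing the minimum-modulus
    # eigenvalue of A(z) over the real and imaginary parts of z.
    import numpy as np
    from scipy.optimize import minimize

    def A(z):
        return np.array([[z - (1.0 + 2.0j), 0.3],
                         [0.2,              z + 1.5j]])

    def min_mod_eig(xy):
        z = complex(xy[0], xy[1])
        return np.abs(np.linalg.eigvals(A(z))).min()

    res = minimize(min_mod_eig, x0=[0.8, 1.6], method="Nelder-Mead")
    print(complex(res.x[0], res.x[1]))   # converges to a root of det A(z)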
Contamination source review for Building E3162, Edgewood Area, Aberdeen Proving Ground, Maryland
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, G.A.; Draugelis, A.K.; Rueda, J.
1995-09-01
This report was prepared by Argonne National Laboratory (ANL) to document the results of a contamination source review for Building E3162 at the Aberdeen Proving Ground (APG) in Maryland. The report may be used to assist the US Army in planning for the future use or disposition of this building. The review included a historical records search, physical inspection, photographic documentation, geophysical investigation, and collection of air samples. The field investigations were performed by ANL during 1994 and 1995. Building E3162 (APG designation) is part of the Medical Research Laboratories Building E3160 Complex. This research laboratory complex is located west of Kings Creek, east of the airfield and Ricketts Point Road, and south of Kings Creek Road in the Edgewood Area of APG. The original structures in the E3160 Complex were constructed during World War 2. The complex was originally used as a medical research laboratory. Much of the research involved wound assessment with chemical warfare agents. Building E3162 was used as a holding and study area for animals involved in non-agent burns. The building was constructed in 1952, placed on inactive status in 1983, and remains unoccupied. Analytical results from these air samples revealed no distinguishable difference in hydrocarbon and chlorinated solvent levels between the two background samples and the sample taken inside Building E3162.
Rodrigues, Valdemir; Estrany, Joan; Ranzini, Mauricio; de Cicco, Valdir; Martín-Benito, José Mª Tarjuelo; Hedo, Javier; Lucas-Borja, Manuel E
2018-05-01
Stream water quality is controlled by the interaction of natural and anthropogenic factors over a range of temporal and spatial scales. Among the anthropogenic factors, land cover changes at the catchment scale can affect stream water quality. This work aims to evaluate the influence of land use and seasonality on stream water quality in a representative tropical headwater catchment, the Córrego Água Limpa (São Paulo, Brazil), which is highly influenced by intensive agricultural activities and urban areas. Two systematic sampling campaigns were carried out, with six sampling points along the stream of the headwater catchment, to evaluate water quality during the rainy and dry seasons. Three replicates were collected at each sampling point in 2011. Electrical conductivity, nitrates, nitrites, sodium superoxide, Chemical Oxygen Demand (DQO), colour, turbidity, suspended solids, soluble solids and total solids were measured. Water quality parameters differed among sampling points, being lower at the headwater sampling point (0 m above sea level) and progressively higher towards the last downstream sampling point (2,500 m above sea level). For the dry season (April to September), the mean discharge was 39.5 l s⁻¹, whereas the rainy season (October to March) averaged 113.0 l s⁻¹. In addition, significant temporal and spatial differences (P<0.05) were observed for the fourteen parameters during the rainy and dry periods. The study reveals significant relationships between land use and water quality and their temporal effects, showing seasonal differences in the land use-water quality connection and highlighting the importance of multiple spatial and temporal scales for understanding the impacts of human activities on catchment ecosystem services. Copyright © 2017 Elsevier B.V. All rights reserved.
Round-robin study of arsenic implant dose measurement in silicon by SIMS
NASA Astrophysics Data System (ADS)
Simons, D.; Kim, K.; Benbalagh, R.; Bennett, J.; Chew, A.; Gehre, D.; Hasegawa, T.; Hitzman, C.; Ko, J.; Lindstrom, R.; MacDonald, B.; Magee, C.; Montgomery, N.; Peres, P.; Ronsheim, P.; Yoshikawa, S.; Schuhmacher, M.; Stockwell, W.; Sykes, D.; Tomita, M.; Toujou, F.; Won, J.
2006-07-01
An international round-robin study was undertaken under the auspices of ISO TC201/SC6 to determine the best analytical conditions and the level of interlaboratory agreement for the determination of the implantation dose of arsenic in silicon by secondary ion mass spectrometry (SIMS). Fifteen SIMS laboratories, as well as two laboratories that performed low energy electron-induced X-ray emission spectrometry (LEXES) and one that made measurements by instrumental neutron activation analysis (INAA), were asked to determine the implanted arsenic doses in three unknown samples using NIST Standard Reference Material® 2134 as a comparator. The use of a common reference material by all laboratories resulted in better interlaboratory agreement than was seen in a previous round-robin that lacked a common comparator. The relative standard deviation among laboratories was less than 4% for the medium-dose sample, but several percent larger for the low- and high-dose samples. The high-dose sample showed a significant difference between point-by-point and average matrix normalization because the matrix signal decreased in the vicinity of the implant peak, as observed in a previous study. The dose from point-by-point normalization was in close agreement with that determined by INAA. No clear difference in measurement repeatability was seen when comparing Si₂⁻ and Si₃⁻ as matrix references with AsSi⁻.
Van Zijl, Magdalena Catherina; Aneck-Hahn, Natalie Hildegard; Swart, Pieter; Hayward, Stefan; Genthe, Bettina; De Jager, Christiaan
2017-11-01
Endocrine disrupting chemicals (EDCs) are ubiquitous in the environment and have been detected in drinking water from various countries. Although various water treatment processes can remove EDCs, chemicals can also migrate from pipes that transport water and contaminate drinking water. This study investigated the estrogenic activity in drinking water from various distribution points in Pretoria (City of Tshwane) (n = 40) and Cape Town (n = 40), South Africa, using the recombinant yeast estrogen screen (YES) and the T47D-KBluc reporter gene assay. The samples were collected seasonally over four sampling periods. The samples were also analysed for bisphenol A (BPA), nonylphenol (NP), di(2-ethylhexyl) adipate (DEHA), dibutyl phthalate (DBP), di(2-ethylhexyl) phthalate (DEHP), diisononyl phthalate (DINP), 17β-estradiol (E₂), estrone (E₁) and ethynylestradiol (EE₂) using ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). This was followed by a scenario-based health risk assessment to assess the carcinogenic and toxic human health risks associated with the consumption of distribution point water. None of the water extracts from the distribution points was above the detection limit in the YES bioassay, but the EEq values ranged from 0.002 to 0.114 ng/L in the T47D-KBluc bioassay. BPA, DEHA, DBP, DEHP, DINP, E₁, E₂ and EE₂ were detected in distribution point water samples. NP was below the detection limit in all samples. The estrogenic activity and levels of target chemicals were comparable to those found in other countries. Overall, the health risk assessment revealed acceptable health and carcinogenic risks associated with the consumption of distribution point water. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fermi Level Control of Point Defects During Growth of Mg-Doped GaN
NASA Astrophysics Data System (ADS)
Bryan, Zachary; Hoffmann, Marc; Tweedie, James; Kirste, Ronny; Callsen, Gordon; Bryan, Isaac; Rice, Anthony; Bobea, Milena; Mita, Seiji; Xie, Jinqiao; Sitar, Zlatko; Collazo, Ramón
2013-05-01
In this study, Fermi level control of point defects during metalorganic chemical vapor deposition (MOCVD) of Mg-doped GaN has been demonstrated by above-bandgap illumination. Resistivity and photoluminescence (PL) measurements are used to investigate the Mg dopant activation of samples with Mg concentration of 2 × 10¹⁹ cm⁻³ grown with and without exposure to ultraviolet (UV) illumination. Samples grown under UV illumination have five orders of magnitude lower resistivity values compared with typical unannealed GaN:Mg samples. The PL spectra of samples grown with UV exposure are similar to the spectra of those grown without UV exposure that were subsequently annealed, indicating a different incorporation of compensating defects during growth. Based on PL and resistivity measurements we show that Fermi level control of point defects during growth of III-nitrides is feasible.
Human performance on the traveling salesman problem.
MacGregor, J N; Ormerod, T
1996-05-01
Two experiments on performance on the traveling salesman problem (TSP) are reported. The TSP consists of finding the shortest path through a set of points, returning to the origin. It appears to be an intransigent mathematical problem, and heuristics have been developed to find approximate solutions. The first experiment used 10-point, the second, 20-point problems. The experiments tested the hypothesis that complexity of TSPs is a function of number of nonboundary points, not total number of points. Both experiments supported the hypothesis. The experiments provided information on the quality of subjects' solutions. Their solutions clustered close to the best known solutions, were an order of magnitude better than solutions produced by three well-known heuristics, and on average fell beyond the 99.9th percentile in the distribution of random solutions. The solution process appeared to be perceptually based.
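For concreteness, one classical construction heuristic of the kind such human solutions are compared against is nearest-neighbor (whether it was among the study's three heuristics is not stated in the abstract):

    # Sketch: nearest-neighbor TSP heuristic on a random 10-point problem.
    import numpy as np

    def nearest_neighbor_tour(pts):
        unvisited = list(range(1, len(pts)))
        tour = [0]
        while unvisited:
            last = pts[tour[-1]]
            nxt = min(unvisited, key=lambda j: np.linalg.norm(pts[j] - last))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    def tour_length(pts, tour):
        # closed tour: the i = 0 term links the last point back to the origin
        return sum(np.linalg.norm(pts[tour[i - 1]] - pts[tour[i]])
                   for i in range(len(tour)))

    pts = np.random.default_rng(4).random((10, 2))
    t = nearest_neighbor_tour(pts)
    print(t, tour_length(pts, t))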
A method of PSF generation for 3D brightfield deconvolution.
Tadrous, P J
2010-02-01
This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.
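The Fourier core of such an extraction step can be sketched as a regularized deconvolution of the measured Z-stack by the idealized PSF; the Tikhonov regularization, array sizes and Gaussian ideal PSF below are assumptions for illustration, not the paper's pipeline:

    # Sketch: estimate a PSF by deconvolving a thin-sample Z-stack with an
    # idealized PSF (Tikhonov-regularized Fourier division).
    import numpy as np

    def extract_psf(zstack, ideal_psf, reg=1e-3):
        Z = np.fft.fftn(zstack)
        H = np.fft.fftn(np.fft.ifftshift(ideal_psf), s=zstack.shape)
        return np.real(np.fft.ifftn(np.conj(H) * Z / (np.abs(H)**2 + reg)))

    rng = np.random.default_rng(5)
    stack = rng.random((32, 64, 64))     # stand-in for a measured thin-sample stack
    zz, yy, xx = np.meshgrid(*[np.arange(n) - n // 2 for n in (32, 64, 64)],
                             indexing="ij")
    ideal = np.exp(-(zz**2 / 18 + yy**2 / 8 + xx**2 / 8))   # assumed ideal PSF
    ideal /= ideal.sum()
    psf_est = extract_psf(stack, ideal)  # higher-SNR, system-tailored PSF estimate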
Quadtree of TIN: a new algorithm of dynamic LOD
NASA Astrophysics Data System (ADS)
Zhang, Junfeng; Fei, Lifan; Chen, Zhen
2009-10-01
Currently, real-time visualization of large-scale digital elevation models mainly employs either the regular GRID structure based on a quadtree or triangle simplification methods based on a triangulated irregular network (TIN). Compared with GRID, TIN is a refined means of representing the terrain surface in the computer, but its data structure is complex, making rapid view-dependent level-of-detail (LOD) representation difficult. GRID is a simple way to realize terrain LOD but yields a higher triangle count. A new algorithm that takes full advantage of the merits of both methods is presented in this paper. The algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, preserving detail according to viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieves dynamic, visual multi-resolution performance for large-scale terrain in real time.
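The view-dependent refinement test underlying this kind of LOD control is commonly a screen-space error criterion: subdivide a quadtree node while its geometric error divided by the viewpoint distance exceeds a tolerance. A minimal sketch with an assumed tolerance:

    # Sketch: view-dependent quadtree refinement test (generic criterion;
    # the paper's exact error metric and threshold are not given).
    import numpy as np

    def refine(node_center, node_error, viewpoint, tau=0.005):
        dist = np.linalg.norm(np.asarray(node_center) - np.asarray(viewpoint))
        return node_error / max(dist, 1e-9) > tau   # subdivide if too coarse

    print(refine((100.0, 200.0, 15.0), node_error=2.0,
                 viewpoint=(90.0, 195.0, 300.0)))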
Capturing rogue waves by multi-point statistics
NASA Astrophysics Data System (ADS)
Hadjihosseini, A.; Wächter, Matthias; Hoffmann, N. P.; Peinke, J.
2016-01-01
As an example of a complex system with extreme events, we investigate ocean wave states exhibiting rogue waves. We present a statistical method of data analysis based on multi-point statistics which, for the first time, allows extreme rogue wave events to be captured in a statistically satisfactory manner. The key to the success of the approach is mapping the complexity of multi-point data onto the statistics of hierarchically ordered height increments for different time scales, for which we can show that a stochastic cascade process with Markov properties is governed by a Fokker-Planck equation. Conditional probabilities as well as the Fokker-Planck equation itself can be estimated directly from the available observational data. With this stochastic description, surrogate data sets can in turn be generated, which makes it possible to work out arbitrary statistical features of the complex sea state in general, and extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting the occurrence of individual rogue waves based on precursory dynamics.
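Estimating a Fokker-Planck description directly from data can be sketched via conditional moments of the increments (Kramers-Moyal coefficients); the synthetic Ornstein-Uhlenbeck series below stands in for the wave-height increment data, which the paper handles scale by scale:

    # Sketch: drift D1 and diffusion D2 estimated from conditional moments
    # of increments, on synthetic Ornstein-Uhlenbeck data (D1 = -x, D2 = 0.5).
    import numpy as np

    rng = np.random.default_rng(6)
    dt, n = 1e-3, 500_000
    x = np.zeros(n)
    for i in range(1, n):                 # dx = -x dt + dW
        x[i] = x[i-1] - x[i-1] * dt + np.sqrt(dt) * rng.standard_normal()

    bins = np.linspace(-2, 2, 41)
    idx = np.digitize(x[:-1], bins)
    dx = np.diff(x)
    for k in [10, 20, 30]:                # a few bins around the mean
        sel = idx == k
        D1 = dx[sel].mean() / dt
        D2 = (dx[sel]**2).mean() / (2 * dt)
        print(f"x~{bins[k-1]:+.2f}: D1~{D1:+.2f} (expect -x), D2~{D2:.2f} (expect 0.5)")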
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in the seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
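A hedged sketch of the logistic power-curve idea: given simulated power estimates at a few candidate sample sizes (the numbers below are made up), fit a logistic curve in log sample size and invert it for a target power:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical simulated power to detect an SCR reduction at candidate sample
# sizes (these values are made up for illustration).
n_grid = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
power = np.array([0.31, 0.52, 0.74, 0.90, 0.97])

def logistic(log_n, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * log_n)))

(a, b), _ = curve_fit(logistic, np.log(n_grid), power, p0=(-8.0, 1.5))

target = 0.80                                   # desired power
logit = np.log(target / (1.0 - target))
n_required = np.exp((logit - a) / b)            # invert the fitted curve
print(f"approximate sample size for {target:.0%} power: {n_required:.0f}")
```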
Lungu, Claudiu N; Diudea, Mircea V
2018-01-01
Lipid II, a peptidoglycan precursor, takes part in bacterial cell-wall synthesis. It has both hydrophilic and lipophilic properties. The molecule translocates across the bacterial membrane to deliver and incorporate disaccharide-pentapeptide "building blocks" into the peptidoglycan wall. Lipid II is a valid antibiotic target. A receptor binding pocket may be occupied by a ligand in various plausible conformations, among which only a few are energetically related to a biological activity in the physiological efficiency domain. This paper reports the mapping of the conformational space of Lipid II in its interaction with Teixobactin and other Lipid II ligands. In order to study the complex between Lipid II and its ligands computationally, a docking study was first carried out; the docking site was retrieved from the literature. After docking, 5 ligand conformations and a further 5 complexes (denoted 00 to 04) for each molecule were taken into account. For each structure, conformational studies were performed. Statistical analysis, conformational analysis and molecular dynamics based clustering were used to predict the potency of these compounds, and a score for potency prediction was developed. Applying a classification according to Lipid II conformational energy, a conformation of Teixobactin proved to be energetically favorable, followed by Oritavancin, Dalbavancin, Telavancin, Teicoplanin and Vancomycin, respectively. Scoring of molecules according to cluster band and PCA produced the same result. Molecules classified according to standard deviations showed Dalbavancin as the most favorable conformation, followed by Teicoplanin, Telavancin, Teixobactin, Oritavancin and Vancomycin, respectively. The total score, reflecting the best energetic efficiency of complex formation, shows Teixobactin to have the best conformation (a score of 15 points), followed by Dalbavancin (14 points), Oritavancin (12 points), Telavancin (10 points), Teicoplanin (9 points) and Vancomycin (3 points). Statistical analysis of conformations can be used to predict the efficiency of ligand-target interaction and, consequently, to gain insight into ligand potency and to postulate favorable conformations of the ligand and binding site. In this study it was shown that Teixobactin is more efficient in binding Lipid II than Vancomycin, a result confirmed by experimental data reported in the literature. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging
Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin
2018-01-01
Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling scheme may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated, covering the sampling starting point, sampling sparsity, and sampling uniformity. In the investigation of the influence of the sampling starting point, we further distinguish two cases according to whether the timing sequence between probe injection and the start of sampling is retained or discarded. Results show that the mean value of BP exhibits an obvious growth trend with increasing delay of the sampling starting point, and has a strong correlation with the sampling sparsity; the growth trend is much more obvious if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity, and independent of the sampling uniformity and of the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results with simpler operation. PMID:29675325
Ye, Xin; Xu, Jin; Lu, Lijuan; Li, Xinxin; Fang, Xueen; Kong, Jilie
2018-08-14
The use of paper-based methods for clinical diagnostics is a rapidly expanding research topic attracting a great deal of interest. Some groups have attempted to realize an integrated nucleic acid test on a single microfluidic paper chip, including extraction, amplification, and readout functions. However, these studies were not able to overcome complex modification and fabrication requirements, long turn-around times, or the need for sophisticated equipment such as pumps, thermal cyclers, or centrifuges. Here, we report an extremely simple paper-based test for the point-of-care diagnosis of rotavirus A, one of the most common pathogens causing pediatric gastroenteritis. The paper-based test performs nucleic acid extraction within 5 min and amplifies the target sequence in 25 min; the result is visible to the naked eye immediately afterward or can be quantified by UV-Vis absorbance. This low-cost method does not require extra equipment and is easy to use either in a lab or at the point-of-care. The detection limit for rotavirus A was found to be 1 × 10^3 copies/mL. In addition, 100% sensitivity and specificity were achieved when testing 48 clinical stool samples. In conclusion, the present paper-based test fulfills the main requirements for a point-of-care diagnostic tool, and has the potential to be applied to disease prevention, control, and precision diagnosis. Copyright © 2018 Elsevier B.V. All rights reserved.
Electric field effects on a near-critical fluid in microgravity
NASA Technical Reports Server (NTRS)
Zimmerli, G.; Wilkinson, R. A.; Ferrell, R. A.; Hao, H.; Moldover, M. R.
1994-01-01
The effects of an electric field on a sample of SF6 fluid in the vicinity of the liquid-vapor critical point are studied. The isothermal increase of the density of a near-critical sample as a function of the applied electric field was measured. In agreement with theory, this electrostriction effect diverges near the critical point as the isothermal compressibility diverges. Also as expected, turning on the electric field in the presence of density gradients can induce flow within the fluid, in a way analogous to turning on gravity. These effects were observed in a microgravity environment by using the Critical Point Facility, which flew onboard the Space Shuttle Columbia in July 1994 as part of the Second International Microgravity Laboratory Mission. Both visual and interferometric images of two separate sample cells were obtained by means of video downlink. The interferometric images provided quantitative information about the density distribution throughout the sample. The electric field was generated by applying 500 Volts to a fine wire passing through the critical fluid.
Field programmable gate array-assigned complex-valued computation and its limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard-Schwarz, Maria; Zwick, Wolfgang
We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure simulation accuracy is maintained.
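An illustrative sketch, in software rather than on an FPGA, of the accuracy/word-length trade-off the abstract examines: complex multiplication in fixed-point arithmetic at several fractional bit widths compared against floating point:

```python
import numpy as np

def quantize(x, f):
    """Signed fixed point with f fractional bits, stored in int64."""
    return np.round(x * (1 << f)).astype(np.int64)

def fixed_cmul(a, b, f):
    """Complex multiply carried out in integer arithmetic, returned as floats."""
    ar, ai = quantize(a.real, f), quantize(a.imag, f)
    br, bi = quantize(b.real, f), quantize(b.imag, f)
    rr = (ar * br - ai * bi) >> f        # products carry 2f fractional bits;
    ri = (ar * bi + ai * br) >> f        # one shift returns them to f bits
    return (rr + 1j * ri) / (1 << f)

rng = np.random.default_rng(1)
a = rng.normal(size=10_000) + 1j * rng.normal(size=10_000)
b = rng.normal(size=10_000) + 1j * rng.normal(size=10_000)
for f in (8, 16, 24):                    # candidate fixed-point word formats
    err = np.abs(fixed_cmul(a, b, f) - a * b).max()
    print(f"{f:2d} fractional bits: max abs error {err:.3e}")
```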
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces, with multiple radiation sources having specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
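A toy illustration of importance sampling as used in such transport codes (a generic rare-event example, not FASTER-III's estimator): the probability that a particle's free path exceeds a slab's optical thickness, estimated both analogously and with a biased path-length distribution plus weights:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, n = 8.0, 100_000                     # slab optical thickness, sample count

# Analogue sampling: free paths ~ Exp(1); score 1 when the path crosses the slab
s = rng.exponential(1.0, n)
analogue = (s > mu).mean()

# Importance sampling: stretch free paths with a biased rate b, then reweight
b = 1.0 / mu                             # biased rate (a choice of this sketch)
s_b = rng.exponential(1.0 / b, n)
weights = np.exp(-s_b) / (b * np.exp(-b * s_b))   # true pdf / biased pdf
biased = (weights * (s_b > mu)).mean()

print(analogue, biased, np.exp(-mu))     # both estimate exp(-8) ≈ 3.4e-4; the
                                         # weighted estimator has far less variance
```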
Determination of arsenic species in rice samples using CPE and ETAAS.
Costa, Bruno Elias Dos Santos; Coelho, Nívia Maria Melo; Coelho, Luciana Melo
2015-07-01
A highly sensitive and selective procedure for the determination of arsenate and total arsenic in food by electrothermal atomic absorption spectrometry after cloud point extraction (ETAAS/CPE) was developed. The procedure is based on the formation of a complex of As(V) ions with molybdate in the presence of 50.0 mmol L⁻¹ sulfuric acid. The complex was extracted into the surfactant-rich phase of 0.06% (w/v) Triton X-114. The variables affecting the complex formation, extraction and phase separation were optimized using factorial designs. Under the optimal conditions, the calibration graph was linear in the range of 0.05-10.0 μg L⁻¹. The detection and quantification limits were 10 and 33 ng L⁻¹, respectively, and the corresponding value for the relative standard deviation for 10 replicates was below 5%. Recovery values of between 90.8% and 113.1% were obtained for spiked samples. The accuracy of the method was evaluated by comparison with the results obtained for the analysis of a rice flour sample (certified material IRMM-804) and no significant difference at the 95% confidence level was observed. The method was successfully applied to the determination of As(V) and total arsenic in rice samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
On constraining pilot point calibration with regularization in PEST
Fienen, M.N.; Muffels, C.T.; Hunt, R.J.
2009-01-01
Ground water model calibration has made great advances in recent years, with practical tools such as PEST being instrumental in making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, the additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into the underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
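A minimal linear sketch of Tikhonov-regularized pilot point estimation (illustrative, not PEST itself): the data misfit is traded off against departure from a preferred condition, which stabilizes the otherwise under-determined problem:

```python
import numpy as np

# min ||J p - d||^2 + lam^2 ||L (p - p0)||^2 with more pilot points than
# observations; without the regularization term this system is under-determined.
rng = np.random.default_rng(3)
n_obs, n_pp = 20, 50
J = rng.normal(size=(n_obs, n_pp))       # sensitivity (Jacobian) matrix
p_true = np.sin(np.linspace(0, 3 * np.pi, n_pp))
d = J @ p_true + 0.05 * rng.normal(size=n_obs)

p0 = np.zeros(n_pp)                      # preferred value (e.g., prior log-K)
L = np.eye(n_pp)                         # regularization operator (identity here)
lam = 1.0                                # regularization weight

# Solve the stacked least-squares system [J; lam*L] p = [d; lam*L p0]
A = np.vstack([J, lam * L])
b = np.concatenate([d, lam * (L @ p0)])
p_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```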
Mancini, John S; Bowman, Joel M
2013-03-28
We report a global, full-dimensional, ab initio potential energy surface describing the HCl-H2O dimer. The potential is constructed from a permutationally invariant fit, using Morse-like variables, to over 44,000 CCSD(T)-F12b/aug-cc-pVTZ energies. The surface describes the complex and dissociated monomers with a total RMS fitting error of 24 cm⁻¹. The normal modes of the minima, low-energy saddle point and separated monomers, the double-minimum isomerization pathway and the electronic dissociation energy are accurately described by the surface. Rigorous quantum mechanical diffusion Monte Carlo (DMC) calculations are performed to determine the zero-point energy and wavefunction of the complex and the separated fragments. The calculated zero-point energies, together with a De value calculated from CCSD(T) with a complete basis set extrapolation, give a D0 value of 1348 ± 3 cm⁻¹, in good agreement with the recent experimentally reported value of 1334 ± 10 cm⁻¹ [B. E. Casterline, A. K. Mollner, L. C. Ch'ng, and H. Reisler, J. Phys. Chem. A 114, 9774 (2010)]. Examination of the DMC wavefunction allows for confident characterization of the zero-point geometry as dominant at the C2v double-well saddle point and not the Cs global minimum. Additional support for the delocalized zero-point geometry is given by numerical solutions to the 1D Schrödinger equation along the imaginary-frequency out-of-plane bending mode, where the zero-point energy is calculated to be 52 cm⁻¹ above the isomerization barrier. The D0 of the fully deuterated isotopologue is calculated to be 1476 ± 3 cm⁻¹, which we hope will stand as a benchmark for future experimental work.
Research on infrared dim-point target detection and tracking under sea-sky-line complex background
NASA Astrophysics Data System (ADS)
Dong, Yu-xing; Li, Yan; Zhang, Hai-bo
2011-08-01
Target detection and tracking in infrared imagery is an important part of modern military defense systems. Detection and recognition of infrared dim-point targets under complex backgrounds is a difficult, strategically important and challenging research topic. The main objects detected by a carrier-borne infrared vigilance system are sea-skimming aircraft and missiles. Owing to the wide field of view of a vigilance system, the target usually lies within sea clutter, which greatly complicates detection and recognition. Traditional point-target detection algorithms, such as adaptive background prediction, are useful when the background has a dispersion-decreasing structure; but when the background contains large grey-level gradients, such as the sea-sky line or sea waves, they yield high false-alarm rates in these local areas and cannot obtain satisfactory results. Because a dim-point target itself has no obvious geometric or texture features, from a mathematical perspective the detection of dim-point targets in an image amounts to singular-function analysis, and from an image-processing perspective the key problem is the judgement of isolated singularities in the image. The essence of dim-point target detection is therefore the separation of target and background by their different singularity characteristics. The image from an infrared sensor is usually accompanied by various kinds of noise, caused by the complicated background or by the sensor itself, which may affect target detection and tracking. The purpose of the image preprocessing is therefore to reduce the effects of noise, raise the signal-to-noise ratio of the image, and increase the contrast between target and background. According to the characteristics of low sea-skimming infrared small targets, a median filter is used to eliminate noise and improve the signal-to-noise ratio; then a multi-point, multi-storey vertical Sobel operator is used to detect the sea-sky line, so that sea and sky can be segmented in the image. Finally, a centroid tracking method is used to capture and trace the target. This method has been successfully used to trace targets under complex sea-sky backgrounds.
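A simplified sketch of the preprocessing and sea-sky-line step (a single vertical Sobel operator stands in for the paper's multi-point, multi-storey scheme; the frame below is synthetic):

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def detect_sea_sky_line(img):
    """Denoise, then locate the sea-sky line as the row of maximum vertical
    gradient energy."""
    den = median_filter(img, size=3)          # suppress impulsive sensor noise
    gy = sobel(den.astype(float), axis=0)     # vertical intensity gradient
    row_energy = np.abs(gy).sum(axis=1)       # strong horizontal edge -> large sum
    return int(np.argmax(row_energy))         # row index of the sea-sky line

# Synthetic frame: dark sky above, brighter sea below, plus noise
rng = np.random.default_rng(4)
frame = np.vstack([np.full((40, 128), 30.0), np.full((88, 128), 90.0)])
frame += rng.normal(0, 5, frame.shape)
print(detect_sea_sky_line(frame))             # ≈ 40
```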
A QRS Detection and R Point Recognition Method for Wearable Single-Lead ECG Devices.
Chen, Chieh-Li; Chuang, Chun-Te
2017-08-26
In new-generation wearable electrocardiogram (ECG) systems, signal processing with low power consumption is required to transmit data when dangerous rhythms are detected and to record signals when abnormal rhythms are detected. The QRS complex is a combination of three of the graphic deflections seen on a typical ECG. This study proposes a real-time QRS detection and R point recognition method with low computational complexity that maintains high accuracy. The enhancement of QRS segments and the suppression of P and T waves are carried out by the proposed ECG signal transformation, which also eliminates baseline wander. In this study, the QRS fiducial point is determined based on the detected crests and troughs of the transformed signal. Subsequently, the R point can be recognized based on four QRS waveform templates, and preliminary heart rhythm classification can be achieved at the same time. The performance of the proposed approach is demonstrated using the benchmark MIT-BIH Arrhythmia Database, where the QRS detection sensitivity (Se) and positive predictivity (+P) are 99.82% and 99.81%, respectively. The results reveal the approach's advantage of low computational complexity, as well as the feasibility of real-time application on a mobile phone or an embedded system.
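A minimal illustrative detector in the same spirit (band-pass enhancement of the QRS band followed by peak picking); the paper's transformation and template matching differ:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    """Band-pass to the QRS band (suppresses baseline wander and P/T waves),
    square to emphasize QRS energy, then peak-pick with a refractory period."""
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    enhanced = filtfilt(b, a, ecg) ** 2
    peaks, _ = find_peaks(enhanced, distance=int(0.25 * fs),
                          height=0.3 * enhanced.max())
    return peaks

# Toy ECG: sharp 'R spikes' once per second riding on slow baseline drift
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = 0.1 * np.sin(2 * np.pi * 0.3 * t)
ecg[::fs] += 1.0
print(detect_r_peaks(ecg, fs))           # ≈ one detection per second
```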
Dennis M. May
1988-01-01
This report presents the procedures by which the Southern Forest Inventory and Analysis unit estimates forest growth from permanent horizontal point samples. Inventory data from the 1977-87 survey of Mississippi's north unit were used to demonstrate how trees on the horizontal point samples are classified into one of eight components of growth and, in turn, how...
NASA Astrophysics Data System (ADS)
Wang, Hongrui; Fang, Wei; Li, Huiduan
2015-04-01
Solar driving of the Earth's climate has been a controversial problem for centuries. Investigations of the solar driving mechanism require long-term records of solar activity, such as the Total Solar Irradiance (TSI) record. Three Total Solar Irradiance Monitors (TSIM) have been developed by the Changchun Institute of Optics, Fine Mechanics and Physics for the China Meteorological Administration to maintain the continuity of the TSI data series, which has lasted for nearly four decades. The newest TSIM has recorded TSI daily with accurate solar pointing on the FY-3C meteorological satellite since October 2013. TSIM/FY-3C has a pointing system for automatic solar tracking, onboard a satellite designed mainly for Earth observation. Most payloads of FY-3C are developed for observation of land, ocean and atmosphere; consequently, FY-3C is a nadir-pointing spacecraft with its z axis pointed at the center of the Earth. Previous TSIMs onboard the FY-3A and FY-3B satellites had no pointing system, so solar observations were performed only when the Sun swept through the field of view of the instruments, and the TSI measurements were inevitably influenced by solar pointing errors, whose correction was complex. This problem is now removed by TSIM/FY-3C, which follows the Sun accurately by itself using a pointing system based on a visual servo control scheme. The pointing system consists of a radiometer package, two motors for solar tracking, a sun sensor, etc. TSIM/FY-3C has made daily observations of TSI for more than one year with nearly zero solar pointing error. Short time-scale variations in TSI detected by TSIM/FY-3C are nearly the same as those from VIRGO/SOHO and TIM/SORCE. Instrument details and primary results of the solar pointing control and solar observations will be given in the presentation.
Binary Colloidal Alloy Test-3 and 4: Critical Point
NASA Technical Reports Server (NTRS)
Weitz, David A.; Lu, Peter J.
2007-01-01
Binary Colloidal Alloy Test - 3 and 4: Critical Point (BCAT-3-4-CP) will determine phase separation rates and add needed points to the phase diagram of a model critical fluid system. Crewmembers photograph samples of polymer and colloidal particles (tiny nanoscale spheres suspended in liquid) that model liquid/gas phase changes. Results will help scientists develop fundamental physics concepts previously cloaked by the effects of gravity.
Standard Samples and Reference Standards Issued by the National Bureau of Standards
1954-08-31
[OCR fragment of a standards catalogue: standard samples support the precision and accuracy of control testing in the rubber industry (melting point, density, index of refraction, heat of combustion, color, gloss, pH), including melting-point standards such as 44d aluminum (659.70 C), and tables for calculating the best frequencies for radio communication between any two points in the world at any time during a given month.]
Adaptive 4d Psi-Based Change Detection
NASA Astrophysics Data System (ADS)
Yang, Chia-Hsiang; Soergel, Uwe
2018-04-01
In a previous work, we proposed a PSI-based 4D change detection method to detect disappearing and emerging PS points (3D) along with their occurrence dates (1D). Such change points are usually caused by anthropic events, e.g., building construction in cities. The method first divides an entire SAR image stack into several subsets by a set of break dates. The PS points, which are selected based on their temporal coherences before or after a break date, are regarded as change candidates. Change points are then extracted from these candidates according to their change indices, which are modelled from the temporal coherences of the divided image subsets. Finally, we check the evolution of the change indices for each change point to detect the break date at which the change occurred. The experiment validated both the feasibility and the applicability of our method. However, two questions remain. First, the selection of the temporal coherence threshold involves a trade-off between the quality and the quantity of PS points; this selection is also crucial for the number of change points in a more complex way. Second, heuristic selection of change index thresholds is fragile and causes loss of change points. In this study, we adapt our approach to identify change points based on the statistical characteristics of the change indices rather than on thresholding. The experiment validates this adaptive approach and shows an increase in detected change points compared with the previous version. In addition, we explore and discuss the optimal selection of the temporal coherence threshold.
NASA Astrophysics Data System (ADS)
Ma, W.; Jafarpour, B.
2017-12-01
We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in the facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
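A hedged sketch of the score-map idea: combine the three information sources into a score per grid cell and place pilot points greedily with a spacing constraint (the weighting and placement rule here are this sketch's assumptions, not the authors' exact formulas):

```python
import numpy as np

def pilot_point_scores(facies_prob, sensitivity, data_mismatch, w=(1.0, 1.0, 1.0)):
    """Combine the three information sources into a placement score map.
    Inputs are 2D arrays scaled to [0, 1]; the weighting is illustrative."""
    uncertainty = 1.0 - 2.0 * np.abs(facies_prob - 0.5)   # peaks at p = 0.5
    score = w[0] * uncertainty + w[1] * sensitivity + w[2] * data_mismatch
    return score / score.max()

def place_pilot_points(score, n_points, min_dist):
    """Greedy placement: take the best cell, then suppress its neighbourhood
    so the selected pilot points stay spread out."""
    s = score.astype(float).copy()
    ii, jj = np.ogrid[:s.shape[0], :s.shape[1]]
    points = []
    for _ in range(n_points):
        i, j = np.unravel_index(np.argmax(s), s.shape)
        points.append((i, j))
        s[(ii - i) ** 2 + (jj - j) ** 2 < min_dist ** 2] = -np.inf
    return points
```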
Leaps and lulls in the developmental transcriptome of Dictyostelium discoideum.
Rosengarten, Rafael David; Santhanam, Balaji; Fuller, Danny; Katoh-Kurasawa, Mariko; Loomis, William F; Zupan, Blaz; Shaulsky, Gad
2015-04-13
Development of the soil amoeba Dictyostelium discoideum is triggered by starvation. When placed on a solid substrate, the starving solitary amoebae cease growth, communicate via extracellular cAMP, aggregate by tens of thousands and develop into multicellular organisms. Early phases of the developmental program are often studied in cells starved in suspension while cAMP is provided exogenously. Previous studies revealed massive shifts in the transcriptome under both developmental conditions and a close relationship between gene expression and morphogenesis, but were limited by the sampling frequency and the resolution of the methods. Here, we combine the superior depth and specificity of RNA-seq-based analysis of mRNA abundance with high-frequency sampling during filter development and cAMP pulsing in suspension. We found that the developmental transcriptome exhibits mostly gradual changes interspersed by a few instances of large shifts. For each time point we treated the entire transcriptome as a single phenotype, and were able to characterize development as groups of similar time points separated by gaps. The grouped time points represented gradual changes in mRNA abundance, or molecular phenotype, and the gaps represented times during which many genes are differentially expressed rapidly, and thus the phenotype changes dramatically. Comparing developmental experiments revealed that gene expression in filter-developed cells lagged behind that in cells treated with exogenous cAMP in suspension. The high sampling frequency revealed many genes whose regulation is reproducibly more complex than indicated by previous studies. Gene Ontology enrichment analysis suggested that the transition to multicellularity coincided with rapid accumulation of transcripts associated with DNA processes and mitosis. Later development included the up-regulation of organic signaling molecules and co-factor biosynthesis. Our analysis also demonstrated a high level of synchrony among the developing structures throughout development. Our data describe D. discoideum development as a series of coordinated cellular and multicellular activities. Coordination occurred within fields of aggregating cells and among multicellular bodies, such as mounds or migratory slugs, that experience both cell-cell contact and various soluble signaling regimes. These time courses, sampled at the highest temporal resolution to date in this system, provide a comprehensive resource for studies of developmental gene expression.
Mapping Urban Tree Canopy Cover Using Fused Airborne LIDAR and Satellite Imagery Data
NASA Astrophysics Data System (ADS)
Parmehr, Ebadat G.; Amati, Marco; Fraser, Clive S.
2016-06-01
Urban green spaces, particularly urban trees, play a key role in enhancing the liveability of cities. The availability of accurate and up-to-date maps of tree canopy cover is important for sustainable development of urban green spaces. LiDAR point clouds are widely used for the mapping of buildings and trees, and several LiDAR point cloud classification techniques have been proposed for automatic mapping. However, the effectiveness of point cloud classification techniques for automated tree extraction from LiDAR data can be impacted to the point of failure by the complexity of tree canopy shapes in urban areas. Multispectral imagery, which provides complementary information to LiDAR data, can improve point cloud classification quality. This paper proposes a reliable method for the extraction of tree canopy cover from fused LiDAR point cloud and multispectral satellite imagery data. The proposed method initially associates each LiDAR point with spectral information from the co-registered satellite imagery data. It calculates the normalised difference vegetation index (NDVI) value for each LiDAR point and corrects tree points which have been misclassified as buildings. Then, region growing of tree points, taking the NDVI value into account, is applied. Finally, the LiDAR points classified as tree points are utilised to generate a canopy cover map. The performance of the proposed tree canopy cover mapping method is experimentally evaluated on a data set of airborne LiDAR and WorldView 2 imagery covering a suburb in Melbourne, Australia.
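A minimal sketch of the NDVI-based correction step, assuming LiDAR points already registered to image pixel coordinates (the label conventions are hypothetical; real data needs georeferencing, and the paper's region-growing step is omitted):

```python
import numpy as np

def ndvi(nir, red):
    """Normalised difference vegetation index; vegetation typically > ~0.3."""
    return (nir - red) / np.maximum(nir + red, 1e-9)

def reclassify_points(points_xy, labels, nir_band, red_band, ndvi_thresh=0.3):
    """Attach NDVI from co-registered imagery to each LiDAR point and flip
    'building' labels back to 'tree' where NDVI indicates vegetation.
    points_xy: (n, 2) pixel coordinates; labels: 0=ground, 1=building, 2=tree
    (an assumed convention for this sketch)."""
    cols = points_xy[:, 0].astype(int)
    rows = points_xy[:, 1].astype(int)
    point_ndvi = ndvi(nir_band[rows, cols], red_band[rows, cols])
    out = labels.copy()
    out[(labels == 1) & (point_ndvi > ndvi_thresh)] = 2   # misclassified trees
    return out, point_ndvi
```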
Heuristic-driven graph wavelet modeling of complex terrain
NASA Astrophysics Data System (ADS)
Cioacǎ, Teodor; Dumitrescu, Bogdan; Stupariu, Mihai-Sorin; Pǎtru-Stupariu, Ileana; Nǎpǎrus, Magdalena; Stoicescu, Ioana; Peringer, Alexander; Buttler, Alexandre; Golay, François
2015-03-01
We present a novel method for building a multi-resolution representation of large digital surface models. The surface points coincide with the nodes of a planar graph which can be processed using a critically sampled, invertible lifting scheme. To drive the lazy wavelet node partitioning, we employ an attribute aware cost function based on the generalized quadric error metric. The resulting algorithm can be applied to multivariate data by storing additional attributes at the graph's nodes. We discuss how the cost computation mechanism can be coupled with the lifting scheme and examine the results by evaluating the root mean square error. The algorithm is experimentally tested using two multivariate LiDAR sets representing terrain surface and vegetation structure with different sampling densities.
Fleming, Denise H; Mathew, Binu S; Prasanna, Samuel; Annapandian, Vellaichamy M; John, George T
2011-04-01
Enteric-coated mycophenolate sodium (EC-MPS) is widely used in renal transplantation. Because of its delayed absorption profile, it has not been possible to develop limited sampling strategies for estimating the area under the curve (mycophenolic acid [MPA] AUC₀₋₁₂) that use few time points and are completed within 2 hours. We developed and validated simplified strategies to estimate MPA AUC₀₋₁₂ in an Indian renal transplant population prescribed EC-MPS together with prednisolone and tacrolimus. Intensive pharmacokinetic sampling (17 samples each) was performed in 18 patients to measure MPA AUC₀₋₁₂. The profiles at 1 month were used to develop the simplified strategies and those at 5.5 months to validate them. We followed two approaches. In the first, the AUC was calculated using the trapezoidal rule with fewer time points, followed by an extrapolation. In the second, models with different time points were identified by stepwise multiple regression analysis and linear regression analysis was performed. Using the trapezoidal rule, two equations were developed with six time points and sampling to 6 or 8 hours (8hrAUC[₀₋₁₂exp]) after the EC-MPS dose. On validation, the 8hrAUC(₀₋₁₂exp) compared with the total measured AUC₀₋₁₂ had a coefficient of correlation (r²) of 0.872 with a bias and precision (95% confidence interval) of 0.54% (-6.07-7.15) and 9.73% (5.37-14.09), respectively. Second, limited sampling strategies were developed with four, five, six, seven, and eight time points and completion within 2, 4, 6, and 8 hours after the EC-MPS dose. On validation, the six, seven, and eight time point equations, all with sampling to 8 hours, had an acceptable correlation with the total measured MPA AUC₀₋₁₂ (0.817-0.927). For the six, seven, and eight time points, the bias (95% confidence interval) was 3.00% (-4.59 to 10.59), 0.29% (-5.4 to 5.97), and -0.72% (-5.34 to 3.89) and the precision (95% confidence interval) was 10.59% (5.06-16.13), 8.33% (4.55-12.1), and 6.92% (3.94-9.90), respectively. Of the eight simplified approaches, inclusion of seven or eight time points improved the accuracy of the predicted AUC compared with the actual AUC and can be advocated according to the priorities of the user.
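A small sketch of the first approach, the trapezoidal rule over a limited set of time points followed by an extrapolation to 12 h (the concentrations and the flat extrapolation below are illustrative assumptions, not the paper's fitted equations):

```python
import numpy as np

def auc_trapezoid(times_h, conc, extrapolate_to=12.0):
    """AUC by the trapezoidal rule over sampled time points, with a simple
    flat extrapolation from the last sample to 12 h."""
    auc = np.trapz(conc, times_h)
    auc += conc[-1] * (extrapolate_to - times_h[-1])
    return auc

# Hypothetical 6-point MPA profile sampled out to 8 h after an EC-MPS dose
t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])      # hours
c = np.array([1.2, 3.5, 8.0, 4.2, 2.5, 1.8])      # mg/L (made-up values)
print(f"estimated MPA AUC0-12: {auc_trapezoid(t, c):.1f} mg*h/L")
```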
Detecting recurrence domains of dynamical systems by symbolic dynamics.
beim Graben, Peter; Hutt, Axel
2013-04-12
We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that could be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
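A minimal sketch of the recurrence structure underlying the method: a binary recurrence matrix from pairwise phase-space distances (the paper's partition-building grammar and maximum-entropy ball sizing are not reproduced):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 where states i and j lie within
    eps of each other in phase space (x: n_samples x dim)."""
    return (squareform(pdist(x)) < eps).astype(int)

# Toy phase-space trajectory: a noisy circle (the orbit of a periodic signal)
rng = np.random.default_rng(5)
theta = np.linspace(0, 6 * np.pi, 300)
traj = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.normal(size=(300, 2))
R = recurrence_matrix(traj, eps=0.2)     # diagonal stripes signal periodicity
```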
Non-uniform sampling: post-Fourier era of NMR data collection and processing.
Kazimierczuk, Krzysztof; Orekhov, Vladislav
2015-11-01
The invention of multidimensional techniques in the 1970s revolutionized NMR, making it the general tool of structural analysis of molecules and materials. In the most straightforward approach, the signal sampling in the indirect dimensions of a multidimensional experiment is performed in the same manner as in the direct dimension, i.e. with a grid of equally spaced points. This results in lengthy experiments with a resolution often far from optimum. To circumvent this problem, numerous sparse-sampling techniques have been developed in the last three decades, including two traditionally distinct approaches: the radial sampling and non-uniform sampling. This mini review discusses the sparse signal sampling and reconstruction techniques from the point of view of an underdetermined linear algebra problem that arises when a full, equally spaced set of sampled points is replaced with sparse sampling. Additional assumptions that are introduced to solve the problem, as well as the shape of the undersampled Fourier transform operator (visualized as so-called point spread function), are shown to be the main differences between various sparse-sampling methods. Copyright © 2015 John Wiley & Sons, Ltd.
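The "point spread function" of an undersampled Fourier operator mentioned above can be computed directly as the inverse FFT of the sampling mask; a small sketch comparing full and sparse sampling:

```python
import numpy as np

# Dense uniform sampling gives a delta-like PSF; sparse sampling spreads
# energy into noise-like sidelobes, which is what reconstruction methods
# must suppress.
n = 256
rng = np.random.default_rng(6)

mask_full = np.ones(n)
mask_sparse = np.zeros(n)
mask_sparse[rng.choice(n, size=n // 4, replace=False)] = 1   # keep 25% of points

for name, mask in [("full", mask_full), ("sparse", mask_sparse)]:
    psf = np.abs(np.fft.ifft(mask))
    psf /= psf.max()
    sidelobe = np.sort(psf)[-2]          # largest value away from the main peak
    print(f"{name}: peak sidelobe level {sidelobe:.3f}")
```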
Influence of Hydrogen Bond on Thermal and Phase Transitions of Binary Complex Liquid Crystals
NASA Astrophysics Data System (ADS)
Vijayakumar, V. N.; Rajasekaran, T. R.; Baskar, K.
2017-12-01
A novel supramolecular liquid crystal (LC) is synthesized from the binary complex of 4-decyloxybenzoic acid and cholesteryl acetate. Fourier transform infrared (FTIR) spectroscopy confirms the formation of an intermolecular hydrogen bond between the mesogens. The various mesophases of the complex and the corresponding textural changes are observed by polarizing optical microscopy (POM) and compared with those of its constituents. The thermal stability factor of the smectic phase of the complex is calculated. An interesting observation of the present work is the extended thermal span of the mesomorphic phases, the decreased enthalpy, and a nematic phase with a high clearing point and a low melting point, attributed to the rearrangement of molecular orientations and the development of new associations through hydrogen bonding. The optical tilt angle for the smectic C phase is determined and fitted to a power law.
Spayd, Steven E.; Robson, Mark G.; Buckley, Brian T.
2014-01-01
A comparison of the effectiveness of whole house (point-of-entry) and point-of-use arsenic water treatment systems in reducing arsenic exposure from well water was conducted. The non-randomized observational study recruited 49 subjects having elevated arsenic in their residential home well water in New Jersey. The subjects obtained either point-of-entry or point-of-use arsenic water treatment. Prior ingestion exposure to arsenic in well water was calculated by measuring arsenic concentrations in the well water and obtaining water-use histories for each subject, including years of residence with the current well and amount of water consumed from the well per day. A series of urine samples were collected from the subjects, some starting before water treatment was installed and continuing for at least nine months after treatment had begun. Urine samples were analyzed and speciated for inorganic-related arsenic concentrations. A two-phase clearance of inorganic-related arsenic from urine and the likelihood of a significant body burden from chronic exposure to arsenic in drinking water were identified. After nine months of water treatment the adjusted mean of the urinary inorganic-related arsenic concentrations were significantly lower (p < 0.0005) in the point-of-entry treatment group (2.5 μg/g creatinine) than in the point-of-use treatment group (7.2 μg/g creatinine). The results suggest that whole house arsenic water treatment systems provide a more effective reduction of arsenic exposure from well water than that obtained by point-of-use treatment. PMID:24975493
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas
2017-12-01
Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, important compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
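A minimal sketch of the generation step the abstract describes, drawing latent vectors from an uncorrelated standard normal and decoding them to binary facies realizations; the decoder below is an untrained placeholder with made-up sizes, not the authors' trained network:

```python
import torch
import torch.nn as nn

latent_dim, H, W = 20, 64, 64            # placeholder dimensions

decoder = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, H * W), nn.Sigmoid(),  # per-pixel facies probability
)

z = torch.randn(8, latent_dim)            # 8 random draws ~ N(0, I)
probs = decoder(z).reshape(8, H, W)
realizations = (probs > 0.5).int()        # threshold to binary facies models
# In a real use, the decoder is first trained (jointly with an encoder) on
# tens of thousands of prior realizations drawn from the training image.
```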
NASA Astrophysics Data System (ADS)
Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo
2018-06-01
The precision of measurement of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces uses reference or probe beams with angle modulation, as in the tilted-wave interferometer (TWI). Measurement efficiency can be improved by obtaining the optimum point source array for each workpiece before TWI measurement. To form a point source array based on the gradients of the different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. To achieve a flexible point source array adapted to the gradient of the test surface, a novel interference setup using a fiber array is proposed, in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, an experiment measuring an off-axis ellipsoidal surface proved the validity of the proposed interference system.
Ter Haar, C Cato; Man, Sum-Che; Maan, Arie C; Schalij, Martin J; Swenne, Cees A
2016-01-01
When triaging a patient with acute chest pain at first medical contact, an electrocardiogram (ECG) is routinely made and inspected for signs of myocardial ischemia. The guidelines recommend comparison of the acute and an earlier-made ECG, when available. No concrete recommendations for this comparison exist, nor is it known how to handle J-point identification difficulties. Here we present a J-point independent method for such a comparison. After conversion to vectorcardiograms, baseline and acute ischemic ECGs after 3 minutes of balloon occlusion during elective PCI were compared in 81 patients of the STAFF III ECG database. Baseline vectorcardiograms were subtracted from ischemic vectorcardiograms using either the QRS onsets or the J points as synchronization instants, yielding vector magnitude difference signals, ΔH. Output variables for the J-point synchronized differences were ΔH at the actual J point and at 20, 40, 60 and 80 ms thereafter. Output variables for the onset-QRS synchronized differences were ΔH at 80, 100, 120, 140 and 160 ms after onset QRS. Finally, linear regressions of all combinations of ΔHJ+… versus ΔHQRS+… were made, and the best combination was identified. The highest correlation, 0.93 (p < 0.01), was found between ΔH 40 ms after the J point and 160 ms after the onset of the QRS complex. With a ΔH ischemia threshold of 0.05 mV, 66/81 (J-point synchronized differences) and 68/81 (onset-QRS synchronized differences) subjects were above the ischemia threshold, corresponding to sensitivities of 81% and 84%, respectively. Our current study opens an alternative way to detect cardiac ischemia without the need for human expertise in determining the J point, by measuring the difference vector magnitude at 160 ms after the onset of the QRS complex. Copyright © 2016 Elsevier Inc. All rights reserved.
Mathematical construction and perturbation analysis of Zernike discrete orthogonal points.
Shi, Zhenguang; Sui, Yongxin; Liu, Zhenyu; Peng, Ji; Yang, Huaijiang
2012-06-20
Zernike functions are orthogonal within the unit circle, but they are not over the discrete points such as CCD arrays or finite element grids. This will result in reconstruction errors for loss of orthogonality. By using roots of Legendre polynomials, a set of points within the unit circle can be constructed so that Zernike functions over the set are discretely orthogonal. Besides that, the location tolerances of the points are studied by perturbation analysis, and the requirements of the positioning precision are not very strict. Computer simulations show that this approach provides a very accurate wavefront reconstruction with the proposed sampling set.
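A quick numerical check of the motivating fact, evaluating a few low-order Zernike modes on a uniform Cartesian grid inside the unit circle; the Gram matrix deviates from the identity, which is the loss of orthogonality described above (the paper's Legendre-root sampling set is not constructed here):

```python
import numpy as np

n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
r, t = np.hypot(x, y), np.arctan2(y, x)
inside = r <= 1.0

zernikes = [
    np.ones_like(r),              # Z0: piston
    2 * r * np.cos(t),            # Z1: tilt x
    2 * r * np.sin(t),            # Z2: tilt y
    np.sqrt(3) * (2 * r**2 - 1),  # Z3: defocus
]
Z = np.stack([z[inside] for z in zernikes])   # each row: one mode on the grid
gram = Z @ Z.T / inside.sum()                 # ≈ identity for exact orthonormality
print(np.round(gram, 3))                      # off-diagonal entries are nonzero
```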
Habitat Complexity Metrics to Guide Restoration of Large Rivers
NASA Astrophysics Data System (ADS)
Jacobson, R. B.; McElroy, B. J.; Elliott, C.; DeLonay, A.
2011-12-01
Restoration strategies on large, channelized rivers typically strive to recover lost habitat complexity, based on the assumption that complexity and biophysical capacity are directly related. Although the links between complexity and biotic responses can be tenuous, complexity metrics have appeal because of their potential utility in quantifying habitat quality, defining reference conditions and design criteria, and measuring restoration progress. Hydroacoustic instruments provide many ways to measure complexity on large rivers, yet substantive questions remain about the variables and scales of complexity that are meaningful to biota, and how complexity can be measured and monitored cost-effectively. We explore these issues on the Missouri River, using the example of channel re-engineering projects that are intended to aid in the recovery of the pallid sturgeon, an endangered benthic fish. We are refining understanding of what habitat complexity means for adult fish by combining hydroacoustic habitat assessments with acoustic telemetry to map locations during reproductive migrations and spawning. These data indicate that migrating sturgeon select points with relatively low velocity but adjacent to areas of high velocity (that is, with high velocity gradients); the integration of points defines pathways which minimize energy expenditure during upstream migrations of tens to hundreds of kilometers. Complexity metrics that efficiently quantify migration potential at the reach scale are therefore directly relevant to channel restoration strategies. We are also exploring complexity as it relates to larval sturgeon dispersal. Larvae may drift for as many as 17 days (hundreds of kilometers at mean velocities) before using up their yolk sac, after which they "settle" into habitats where they initiate feeding. An assumption underlying channel re-engineering is that additional channel complexity, specifically increased shallow, slow water, is necessary for early feeding and refugia. Development of complexity metrics is complicated by the fact that characteristics of channel morphology may increase complexity scores without necessarily increasing biophysical capacity for target species. For example, a cross section that samples depths and velocities across the thalweg (navigation channel) and into lentic habitat may score high on most measures of hydraulic or geomorphic complexity, but does not necessarily provide habitats beneficial to native species. Complexity measures need to be bounded by best estimates of native species requirements. In the absence of specific information, creation of habitat complexity for the sake of complexity may lead to unintended consequences, for example, lentic habitats that increase a complexity score but support invasive species. An additional practical constraint on complexity measures is the need to develop metrics that can be deployed cost-effectively in an operational monitoring program. Design of a monitoring program requires informed choices of measurement variables, definition of reference sites, and design of sampling effort to capture spatial and temporal variability.
The Effect of Applied Tensile Stress on Localized Corrosion in Sensitized AA5083
2015-09-01
[List-of-figures fragment: photographs of a stainless steel 4-point bending rig used to apply elastic stress to aluminum plate samples per ASTM G39, with stress-strain data based on displacement, from [25] and [8].]
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Billen, R.
2017-08-01
Reasoning from information extracted by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud obtained from both terrestrial laser scanning and dense image matching. Using 18 features, including sensor bias data, each tessera in the high-density point cloud of the 3D-captured complex mosaics of Germigny-des-Prés (France) is segmented via a colour-based multi-scale abstraction exploiting connectivity. A 2D surface and an outline polygon of each tessera are generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
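A generic sketch of the RANSAC plane extraction and convex hull fitting steps applied to one tessera's points (the parameters and projection scheme are this sketch's choices, not the paper's pipeline):

```python
import numpy as np
from scipy.spatial import ConvexHull

def ransac_plane(pts, n_iter=200, tol=0.002, rng=None):
    """Fit a plane to 3D points by RANSAC: fit to 3 random points repeatedly
    and keep the candidate with the most inliers within tol."""
    rng = rng or np.random.default_rng()
    best = np.zeros(len(pts), bool)
    normal, origin = np.array([0.0, 0.0, 1.0]), pts[0]
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        inliers = np.abs((pts - p0) @ n) < tol
        if inliers.sum() > best.sum():
            best, normal, origin = inliers, n, p0
    return best, normal, origin

def outline_polygon(pts, normal, origin):
    """Project plane inliers to 2D and return their convex hull outline."""
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-9:          # plane is nearly horizontal
        u = np.array([1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    uv = np.c_[(pts - origin) @ u, (pts - origin) @ v]
    return uv[ConvexHull(uv).vertices]
```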
RandomSpot: A web-based tool for systematic random sampling of virtual slides.
Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E
2015-01-01
This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
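A minimal sketch of systematic random sampling over a rectangular region: a regular grid with a single random offset, which is the principle RandomSpot applies to virtual slides (the tool's own placement logic is not reproduced):

```python
import numpy as np

def srs_points(x0, y0, width, height, n_points, rng=None):
    """Equidistant grid of roughly n_points points with one common random
    offset, as in a microscope graticule."""
    rng = rng or np.random.default_rng()
    spacing = np.sqrt(width * height / n_points)     # grid step for target count
    ox, oy = rng.uniform(0, spacing, size=2)         # single random phase
    xs = np.arange(x0 + ox, x0 + width, spacing)
    ys = np.arange(y0 + oy, y0 + height, spacing)
    return [(x, y) for y in ys for x in xs]

pts = srs_points(0, 0, 20000, 15000, n_points=300)   # virtual-slide pixel coords
print(len(pts), pts[:3])
```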
Pharmaceutical applications using NIR technology in the cloud
NASA Astrophysics Data System (ADS)
Grossmann, Luiz; Borges, Marco A.
2017-05-01
NIR technology has been available for a long time, certainly more than 50 years. It has undoubtedly found many niche applications, especially in the pharmaceutical, food, agriculture and other industries, owing to its flexibility. NIR offers a number of advantages over other analytical technologies: virtually no sample preparation; usually no sample destruction and subsequent discard; fast results; little operator training; and small operating costs. The key point about NIR technology, however, is that it is more related to statistics than to chemistry; in other words, we are more concerned with analyzing and distinguishing features within the data than with looking deeply into the chemical entities themselves. A single scan in the NIR range typically yields a huge inflow of data points. Usually we decompose the signals into hundreds of predictor variables and use complex algorithms to predict classes or quantify specific content. NIR is all about math, especially converting chemical information into numbers. Easier said than done: the NIR signal is very complex. The responses are usually not specific to a particular material; rather, each group's responses add up, giving a spectral reading low specificity. This paper proposes a simple and efficient method to analyze and compare NIR spectra for the purpose of identifying the presence of active pharmaceutical ingredients in finished products, using low-cost NIR scanning devices connected to the internet cloud.
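A sketch of the statistical workflow the abstract describes, with hundreds of correlated absorbance predictors regressed onto an analyte content by partial least squares; the spectra below are synthetic stand-ins, and this is not the authors' cloud method:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_samples, n_wavelengths = 120, 400
api_content = rng.uniform(0, 10, n_samples)               # % API (hypothetical)
pure_spectrum = np.exp(-((np.arange(n_wavelengths) - 180) / 40.0) ** 2)
X = (np.outer(api_content, pure_spectrum)                 # analyte contribution
     + rng.normal(0, 0.5, (n_samples, n_wavelengths)))    # matrix + noise

X_tr, X_te, y_tr, y_te = train_test_split(X, api_content, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
print(f"held-out R^2: {pls.score(X_te, y_te):.3f}")
```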
Sisk, Matthew L.; Shea, John J.
2011-01-01
Despite a body of literature focusing on the functionality of modern and stylistically distinct projectile points, comparatively little attention has been paid to quantifying the functionality of the early stages of projectile use. Previous work identified a simple ballistics measure, the Tip Cross-Sectional Area, as a way of determining whether a given class of stone points could have served as effective projectile armatures. Here we use this in combination with an alternative measure, the Tip Cross-Sectional Perimeter, a more accurate proxy of the force needed to penetrate a target to a lethal depth. The current study discusses this measure and uses it to analyze a collection of measurements from African Middle Stone Age pointed stone artifacts. Several point types that were rejected in previous studies are statistically indistinguishable from ethnographic projectile points using this new measure. The ramifications of this finding for a Middle Stone Age origin of complex projectile technology are discussed. PMID:21755048
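For concreteness, both measures can be computed from a point's maximum width and thickness, assuming the rhomboid cross-section convention common in this literature (that convention, and the resulting formulas, are assumptions of this sketch):

```python
import numpy as np

def tcsa(width_mm, thickness_mm):
    """Tip Cross-Sectional Area for a rhomboid section: 0.5 * w * t."""
    return 0.5 * width_mm * thickness_mm

def tcsp(width_mm, thickness_mm):
    """Tip Cross-Sectional Perimeter, treating the section as a rhombus with
    diagonals w and t: 4 * sqrt((w/2)^2 + (t/2)^2) = 2 * sqrt(w^2 + t^2)."""
    return 2.0 * np.sqrt(width_mm**2 + thickness_mm**2)

# A made-up point measurement (maximum width and thickness, in mm)
w, t = 18.0, 5.0
print(f"TCSA = {tcsa(w, t):.1f} mm^2, TCSP = {tcsp(w, t):.1f} mm")
```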
Rohr, U; Mueller, C; Wilhelm, M; Muhr, G; Gatermann, S
2003-08-01
The object of this study was to investigate the efficacy of methicillin-resistant Staphylococcus aureus (MRSA) multisite carriage decolonization in 32 hospitalized carriers--25 from surgical and seven from medical wards. Twenty-four of the patients had wounds (e.g. chronic ulcers, surgical sites) and 17 were spinal cord injury patients. Decolonization was performed by intranasal application of mupirocin, combined with an octenidine dihydrochloride body wash over a period of five days. Samples from the nose, forehead, neck, axilla and groin were taken 24-48 h before beginning decolonization (sample point I, N=32) and 24-48 h afterwards (sample point II, N=32). Further samples were taken seven to nine days after the procedure (sample point III, N=25). Contact sheep blood agar plates (24 cm2) were used to quantify MRSA colonies on the forehead and neck. MRSA from other sample sites was determined semi-quantitatively. All patients were proven to be MRSA positive at one or more extranasal site(s); 18.8% did not have nasal carriage. The overall decolonization rate for all sites was 53.1% (sample point II) and 64% (sample point III), respectively. The reduction was significant for every site, with a rate of 88.5% for the nose (II, III) and of 56.3% (II) and 68% (III) for all extranasal sites together. For the 32 patients, a median of 6.5 cfu MRSA/24 cm2 was obtained from the forehead before decolonization and 0.5 cfu MRSA/24 cm2 from the neck. A significant reduction (0 cfu MRSA/24 cm2) at both sites was shown after treatment. Before the decolonization procedure, median MRSA levels for the nose, groin and axilla were 55, 6 and 0 cfu/swab. After treatment, MRSA at each of these sites was significantly reduced. We conclude that nasal mupirocin combined with octenidine dihydrochloride whole-body wash is effective in eradicating MRSA from patients with variable site colonization.
Latent Computational Complexity of Symmetry-Protected Topological Order with Fractional Symmetry.
Miller, Jacob; Miyake, Akimasa
2018-04-27
An emerging insight is that ground states of symmetry-protected topological orders (SPTOs) possess latent computational complexity in terms of their many-body entanglement. By introducing a fractional symmetry of SPTO, which requires the invariance under 3-colorable symmetries of a lattice, we prove that every renormalization fixed-point state of 2D (Z_{2})^{m} SPTO with fractional symmetry can be utilized for universal quantum computation using only Pauli measurements, as long as it belongs to a nontrivial 2D SPTO phase. Our infinite family of fixed-point states may serve as a base model to demonstrate the idea of a "quantum computational phase" of matter, whose states share universal computational complexity ubiquitously.
Lasercom system architecture with reduced complexity
NASA Technical Reports Server (NTRS)
Lesh, James R. (Inventor); Chen, Chien-Chung (Inventor); Ansari, Homayoon (Inventor)
1994-01-01
Spatial acquisition and precision beam pointing functions are critical to spaceborne laser communication systems. In the present invention, a single high bandwidth CCD detector is used to perform both spatial acquisition and tracking functions. Compared to previous lasercom hardware designs, the array tracking concept offers reduced system complexity by reducing the number of optical elements in the design. Specifically, the design requires only one detector and one beam steering mechanism. It also provides the means to optically close the point-ahead control loop. The technology required for high bandwidth array tracking was examined and shown to be consistent with the current state of the art. The single detector design can lead to a significantly reduced system complexity and a lower system cost.
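The core of CCD array tracking is the per-frame estimate of the received beam's spot position on the detector, which drives the single beam-steering mechanism. A minimal, hypothetical sketch of that error-signal computation (ours, not taken from the patent):

```python
import numpy as np

def spot_centroid(frame, threshold=0.0):
    """Intensity-weighted centroid of a CCD frame (rows, cols) -> (x, y) pixels.
    Array tracking feeds this error signal to the beam-steering mirror loop."""
    img = np.clip(frame - threshold, 0, None)      # suppress background
    total = img.sum()
    if total == 0:
        raise ValueError("no signal above threshold")  # acquisition not achieved
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total
```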
1983-10-01
types such as the Alberta, Plainview, Scottsbluff, Eden Valley and Hell Gap (Plano Complex). A private collector from Sheyenne, North Dakota--on the...Grafton) (Michlovic 1979). An apparently early point type of the Plano Complex (an Alberta point) was found near the Manitoba community of Manitou (Pettipas...with the DL-S Burial Complex include miniature, smooth mortuary vessels, sometimes decorated with incised thunderbird designs and/or raised lizards or
A Robust False Matching Points Detection Method for Remote Sensing Image Registration
NASA Astrophysics Data System (ADS)
Shan, X. J.; Tang, P.
2015-04-01
Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Therefore, false matching point detection is an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used to detect false matching points, but it cannot detect all of them in some remote sensing images. Therefore, a robust false matching point detection method based on the K-nearest-neighbour (K-NN) graph (KGD) is proposed in this paper to obtain robust, high-accuracy results. The KGD method starts by constructing the K-NN graph in one image, linking each matching point to its K nearest matching points. A local transformation model for each matching point is then estimated from its K nearest matching points, and the error of each matching point is computed using its local model. Finally, the L matching points with the largest errors are identified as false matches and removed. This process is repeated until all errors are smaller than a given threshold. In addition, the KGD method can be used in combination with other methods, such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiments. We evaluate the performance of the KGD method, the RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
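As an illustration of the loop described above, here is a minimal sketch with our own naming, in which a least-squares affine model stands in for the unspecified local transformation model: build the K-NN graph among the matched points of one image, fit a local transform per point from its K neighbours' correspondences, and iteratively drop the L worst-fitting matches.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_affine(src, dst):
    """Least-squares affine transform mapping src (k,2) onto dst (k,2)."""
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3,2) affine matrix
    return M

def kgd_filter(pts1, pts2, K=8, L=2, tol=1.0):
    """Iteratively remove the L matches with the largest local-model residual
    until every residual is below tol (in pixels of image 2)."""
    keep = np.arange(len(pts1))
    while len(keep) > K + 1:
        p1, p2 = pts1[keep], pts2[keep]
        tree = cKDTree(p1)                   # K-NN graph over image-1 points
        _, nn = tree.query(p1, k=K + 1)      # neighbour 0 is the point itself
        err = np.empty(len(p1))
        for i, idx in enumerate(nn[:, 1:]):  # one local model per point
            M = local_affine(p1[idx], p2[idx])
            pred = np.append(p1[i], 1.0) @ M
            err[i] = np.linalg.norm(pred - p2[i])
        if err.max() <= tol:
            break                            # all remaining matches consistent
        worst = np.argsort(err)[-L:]         # positions of the L largest errors
        keep = np.delete(keep, worst)
    return keep                              # indices of retained matches
```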
Ucar, Zennure; Bettinger, Pete; Merry, Krista; Siry, Jacek; Bowker, J. M.
2016-01-01
Two different sampling approaches for estimating urban tree canopy cover were applied to two medium-sized cities in the United States, in conjunction with two freely available remotely sensed imagery products. A random point-based sampling approach, which involved 1000 sample points, was compared against a plot/grid sampling (cluster sampling) approach that involved a...
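The random point-based approach reduces to classifying each sample point against the imagery and reporting a proportion with its binomial standard error. A toy sketch with a synthetic raster standing in for the classified imagery (no real data or products from the study):

```python
import numpy as np

# Random point-based canopy estimate (sketch): classify n random points
# against a binary canopy raster and report proportion +/- standard error.
rng = np.random.default_rng(1)
canopy = rng.random((1000, 1000)) < 0.3          # placeholder raster, ~30% cover
n = 1000                                         # sample size used in the study
rows = rng.integers(0, canopy.shape[0], n)
cols = rng.integers(0, canopy.shape[1], n)
p = canopy[rows, cols].mean()                    # estimated canopy cover
se = np.sqrt(p * (1 - p) / n)                    # binomial standard error
print(f"canopy cover ~ {p:.3f} +/- {se:.3f}")
```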
NASA Astrophysics Data System (ADS)
Rerikh, K. V.
1998-02-01
Using classic results of algebraic geometry for birational mappings of the plane CP^2, we present a general approach to the algebraic integrability of autonomous dynamical systems in C^2 with discrete time, and of systems of two autonomous functional equations for meromorphic functions of one complex variable defined by birational maps in C^2. General theorems defining the invariant curves and the dynamics of a birational mapping, and a general theorem on necessary and sufficient conditions for the integrability of birational plane mappings, are proved on the basis of a new idea: a decomposition of the orbit set of indeterminacy points of the direct map relative to the action of the inverse mapping. A general method for generating integrable mappings and their rational integrals (invariants) I is proposed. We introduce numerical characteristics N_k of the intersections of the orbits Φ_n^{-k}O_i of the fundamental (indeterminacy) points O_i ∈ O ∩ S of the mapping Φ_n, where O = {O_i} is the set of indeterminacy points of Φ_n and S is the analogous set for the invariant I, with the corresponding set O' ∩ S, where O' = {O'_i} is the set of indeterminacy points of the inverse mapping Φ_n^{-1}. Using the proposed method we obtain all nine integrable multiparameter quadratic birational reversible mappings with zero fixed point and linear projective symmetry S = CΛC^{-1}, Λ = diag(±1), whose rational invariants are generated by invariant straight lines and conics. For the integrable mappings obtained, the relations between the numbers N_k and such numerical characteristics of discrete dynamical systems as the Arnold complexity and their integrability are established, and the Arnold complexities are determined. The main results are presented in Theorems 2-5, in Tables 1 and 2, and in Appendix A.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, Sean Gregory; Berryman, Judy; Shackley, M. Steven
Eden projectile points associated with the Cody complex are underrepresented in the late Paleoindian record of the American Southwest. EDXRF analysis of an obsidian Eden point from a site in Sierra County, New Mexico demonstrates that this artifact derives from the Cerro del Medio (Valles Rhyolite) source in the Jemez Mountains. Finally, we contextualize our results by examining variability in obsidian procurement practices beyond the Cody heartland in south-central New Mexico.
Robust and efficient overset grid assembly for partitioned unstructured meshes
NASA Astrophysics Data System (ADS)
Roget, Beatrice; Sitaraman, Jayanarayanan
2014-03-01
This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.
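Donor search is typically accelerated by testing only cells near the query point. The following is a simplified sketch of that idea (ours, not the paper's algorithm, which additionally handles mesh partitions and parallel communication): a KD-tree over tetrahedral-cell centroids proposes candidate donors, and a barycentric test confirms containment.

```python
import numpy as np
from scipy.spatial import cKDTree

def contains(tet, p, eps=1e-12):
    """Barycentric containment test: tetrahedron tet (4,3), point p (3,)."""
    T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    try:
        lam = np.linalg.solve(T, p - tet[0])   # barycentric coordinates
    except np.linalg.LinAlgError:
        return False                           # degenerate cell
    return lam.min() >= -eps and lam.sum() <= 1 + eps

def find_donor(query_pt, cells, vertices, tree, n_candidates=8):
    """Return the index of the cell containing query_pt, or -1 (no donor).
    `tree` is a cKDTree over cell centroids; only nearby cells are tested."""
    _, cand = tree.query(query_pt, k=n_candidates)
    for c in np.atleast_1d(cand):
        if contains(vertices[cells[c]], query_pt):
            return c
    return -1

# Usage sketch: tree = cKDTree(vertices[cells].mean(axis=1))
```

In OGA terms, a point with no donor in any overlapping mesh would be flagged a hole (or orphan) point, while a point whose donor lies in a better-resolved mesh becomes a receptor.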
Wytrążek, Marcin; Huber, Juliusz; Lisiński, Przemysław
Spine-related muscle pain can affect muscle strength and motor unit activity. This study was undertaken to investigate whether surface electromyographic (sEMG) recordings performed during relaxation and maximal contraction reveal differences in the activity of muscles with or without trigger points (TRPs). We also analyzed the possible coexistence of characteristic spontaneous activity in needle electromyographic (eEMG) recordings with the presence of TRPs. Thirty patients with non-specific cervical and back pain were evaluated using clinical, neuroimaging and electroneurographic examinations. Muscle pain was measured using a visual analog scale (VAS), and strength using Lovett’s scale; trigger points were detected by palpation. EMG was used to examine motor unit activity. Trigger points were found in thirteen patients, mainly in the trapezius muscles. Their presence was accompanied by increased pain intensity, decreased muscle strength, increased resting sEMG amplitude, and decreased sEMG amplitude during muscle contraction. eEMG revealed characteristic asynchronous discharges at TRPs. The results of the EMG examinations point to a complexity of muscle pain that depends on the progression of the myofascial syndrome. PMID:22152435
Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data
NASA Astrophysics Data System (ADS)
Du, L.; Zhong, R.; Sun, H.; Wu, Q.
2017-09-01
An automated method for tunnel deformation monitoring using high-density point cloud data is presented. First, the 3D point cloud is projected onto the XOY plane to form a two-dimensional surface, and the projection of the central axis onto the XOY plane, denoted Uxoy, is calculated by combining the Alpha Shape algorithm with RANSAC (Random Sample Consensus). The projection of the central axis onto the YOZ plane, denoted Uyoz, is then obtained from the highest and lowest points extracted by intersecting the tunnel point cloud with straight lines that pass through each point of Uxoy perpendicular to the two-dimensional surface; Uxoy and Uyoz together form the 3D central axis. Second, the buffer of each cross-section is computed with a K-nearest-neighbour algorithm, and the initial cross-sectional point set is quickly constructed by projection. Finally, the cross-sections are denoised and the section lines are fitted by iterative ellipse fitting. To improve cross-section accuracy, a fine adjustment method is proposed that rotates the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is applied to a Shanghai subway tunnel, and the deformation of each section is calculated in the 0 to 360 degree directions. The results show that the cross-sections have flattened from regular circles owing to the great pressure at the top of the tunnel.
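The iterative ellipse-fitting step can be sketched as fit-trim-refit: fit a conic to a cross-section's points, discard the worst residuals (noise, cables, fixtures), and refit. The sketch below (ours, under the assumption of a plain algebraic least-squares conic fit; residuals are algebraic, not true geometric distances) illustrates the loop:

```python
import numpy as np

def fit_conic(xy):
    """Algebraic conic fit a*x^2 + b*xy + c*y^2 + d*x + e*y = 1 (least squares)."""
    x, y = xy[:, 0], xy[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(D, np.ones(len(xy)), rcond=None)
    return coef

def iterative_ellipse_fit(xy, n_iter=5, keep=0.9):
    """Fit, discard the worst-fitting fraction of points, refit."""
    pts = xy.copy()
    for _ in range(n_iter):
        coef = fit_conic(pts)
        D = np.column_stack([pts[:, 0]**2, pts[:, 0] * pts[:, 1], pts[:, 1]**2,
                             pts[:, 0], pts[:, 1]])
        r = np.abs(D @ coef - 1.0)        # algebraic residuals
        cut = np.quantile(r, keep)        # trim the worst 10% each pass
        pts = pts[r <= cut]
    return coef, pts
```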
Peeters, David; Chu, Mingyuan; Holler, Judith; Hagoort, Peter; Özyürek, Aslı
2015-12-01
In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odano, I.; Takahashi, N.; Ohkubo, M.
1994-05-01
We developed a new method for quantitative measurement of rCBF with Iodine-123-IMP based on the microsphere model; it is accurate, simpler, and less invasive than the continuous withdrawal method. IMP is assumed to behave as a chemical microsphere in the brain, so regional CBF can be measured from continuous withdrawal of arterial blood via the microsphere model: F = Cb(t)/(∫Ca(t)dt × N), where F is rCBF (ml/100 g/min), Cb(t) is the brain activity concentration, ∫Ca(t)dt is the total activity of the arterial whole blood withdrawn, and N is the fraction of ∫Ca(t)dt that is true tracer activity. We analyzed 14 patients. A dose of 222 MBq of IMP was injected i.v. over 1 min, arterial blood was withdrawn continuously from 0 to 5 min (∫Ca(t)dt), and single arterial blood samples (one-point Ca(t)) were then obtained at 5, 6, 7, 8, 9 and 10 min. The integral ∫Ca(t)dt was mathematically inferred from the one-point Ca(t) value. Examining the correlation between ∫Ca(t)dt × N and one-point Ca(t), and the % error of one-point Ca(t) relative to ∫Ca(t)dt × N, the minimum % error was 8.1% and the maximum correlation coefficient was 0.943, both obtained at 6 min. We concluded that 6 min is the best time to take the arterial blood sample when estimating ∫Ca(t)dt × N by the one-point sampling method. IMP SPECT studies were performed with a ring-type SPECT scanner; rCBF measured by this method correlated significantly with rCBF measured by the Xe-133 method (r=0.773). The one-point Ca(t) method allows quick and easy measurement of rCBF without indwelling arterial catheters and without octanol treatment of arterial blood.
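In code, the essence of the one-point method is a linear calibration between the single 6-min arterial sample and the measured 0-5 min integral, fitted on a reference cohort and then used in the microsphere quotient. A sketch with synthetic stand-in numbers (the calibration constants, units and data are illustrative only, not the study's):

```python
import numpy as np

# Synthetic reference cohort: one-point Ca(6 min) vs. measured integral*N
rng = np.random.default_rng(0)
ca_one_point = rng.uniform(50, 150, 14)                      # arbitrary units
integral_ca_n = 4.8 * ca_one_point + rng.normal(0, 20, 14)   # assumed relation

# Fit the linear calibration once on the reference cohort
slope, intercept = np.polyfit(ca_one_point, integral_ca_n, 1)

def rcbf(cb, ca_6min):
    """Microsphere-model flow F = Cb/(integral Ca * N), with the integral
    inferred from the single 6-min arterial sample (units omitted)."""
    return cb / (slope * ca_6min + intercept)
```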
Higher order approximation to the Hill problem dynamics about the libration points
NASA Astrophysics Data System (ADS)
Lara, Martin; Pérez, Iván L.; López, Rosario
2018-06-01
An analytical solution to the Hill problem Hamiltonian expanded about the libration points has been obtained by means of perturbation techniques. To compute the higher orders of the perturbation solution needed to capture, within reasonable accuracy, all the relevant periodic orbits originating from the libration points, the normalization is carried out in complex variables. The validity of the solution extends to energy values considerably far from that of the libration points, so it can be used in the computation of Halo orbits as an alternative to the classical Lindstedt-Poincaré approach. Furthermore, the theory correctly predicts the existence of the two-lane bridge of periodic orbits linking the families of planar and vertical Lyapunov orbits.
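The starting point of any such expansion is the location of the libration points themselves. In Hill-problem units they sit on the x-axis where the net axial force 3x − x/|x|³ vanishes, i.e. at x = ±3^(−1/3). A quick numerical check of that textbook result (ours, not the paper's normalization machinery):

```python
from scipy.optimize import brentq

# Collinear libration points of the Hill problem: on the x-axis (y = z = 0)
# equilibrium requires 3x - x/|x|**3 = 0, giving x = +/- 3**(-1/3).
f = lambda x: 3 * x - x / abs(x) ** 3
xL2 = brentq(f, 0.1, 2.0)        # root-find on the positive axis
print(xL2, 3 ** (-1 / 3))        # both ~0.69336
```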
NASA Astrophysics Data System (ADS)
Voitovich, A. P.; Kalinov, V. S.; Stupak, A. P.; Runets, L. P.
2015-03-01
Isosbestic and isoemission points are recorded in the combined absorption and luminescence spectra of two types of radiation defects involved in complex processes consisting of several simultaneous parallel and sequential reactions. These points are observed if a constant sum of two terms is conserved, each term being the product of the concentration of the corresponding defect and a characteristic integral coefficient associated with it. The complicated processes involved in the transformation of radiation defects in lithium fluoride are studied using these points. It is found that the ratio of the changes in the concentrations of one of the components and of the reaction product remains constant over the course of several simultaneous reactions.
Stanescu, T; Jaffray, D
2018-05-25
Magnetic resonance imaging is expected to play a greater role in radiation therapy given recent developments in MR-guided technologies. MR images must consistently show high spatial accuracy to support RT-specific tasks such as treatment planning and in-room guidance. The present study investigates a new harmonic analysis method for characterizing complex 3D fields derived from MR images affected by system-related distortions. An interior Dirichlet problem, based on solving the Laplace equation with boundary conditions (BCs), was formulated for the case of a 3D distortion field. The second-order boundary value problem (BVP) was solved using a finite element method (FEM) for several quadratic geometries: sphere, cylinder, cuboid, D-shaped, and ellipsoid. To stress-test and generalize the method, the BVP was also solved for more complex surfaces, such as a Reuleaux 9-gon and the MR imaging volume of a scanner featuring a high degree of surface irregularity. The BCs were constructed from reference experimental data collected with a linearity phantom featuring a volumetric grid structure. The method was validated by comparing the harmonic analysis results with the corresponding experimental reference fields. The harmonic fields were in good agreement with the baseline experimental data for all geometries investigated. For the quadratic domains, the percentages of sampling points with residual values larger than 1 mm were 0.5% and 0.2% for the axial components and the vector magnitude, respectively. For the general case of a domain defined by the available MR imaging field of view, the reference data showed a peak distortion of about 12 mm, and 79% of the sampling points carried a distortion magnitude larger than 1 mm (the tolerance intrinsic to the experimental data); the residuals after comparison with the harmonic fields showed a maximum of 1.4 mm and a mean of 0.25 mm, with only 1.5% of sampling points exceeding 1 mm. A novel harmonic analysis approach relying on finite element methods was thus introduced and validated for multiple volumes with surface shape functions ranging from simple to highly complex. Because a boundary value problem is solved, the method requires input data only from the surface of the desired domain of interest. The harmonic method should facilitate (a) the design of new phantoms dedicated to quantifying MR image distortions over large volumes and (b) the combination of multiple radiotherapy-specific imaging tests into a single test object for routine imaging quality control.
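To make the interior Dirichlet formulation concrete: given one measured distortion component on the surface of a domain, the interior field is recovered by solving Laplace's equation. The sketch below uses simple Jacobi finite-difference relaxation on a rectangular grid in place of the paper's FEM solver on complex surfaces (a stand-in, not the authors' implementation):

```python
import numpy as np

def solve_laplace_dirichlet(boundary, interior_mask, n_iter=5000, tol=1e-6):
    """Jacobi relaxation for the interior Dirichlet problem on a regular grid.
    `boundary` holds fixed values on the domain surface (one distortion
    component, e.g. dx in mm); `interior_mask` is True at interior voxels."""
    u = boundary.copy()
    for _ in range(n_iter):
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1) +
              np.roll(u, 1, 2) + np.roll(u, -1, 2)) / 6.0
        new = np.where(interior_mask, nb, u)   # keep boundary values fixed
        if np.max(np.abs(new - u)) < tol:
            return new
        u = new
    return u

# Example: 32^3 grid with boundary values only on the outer faces
shape = (32, 32, 32)
u0 = np.zeros(shape)
u0[0], u0[-1] = 1.0, -1.0                      # e.g. measured dx (mm) on two faces
interior = np.zeros(shape, bool)
interior[1:-1, 1:-1, 1:-1] = True
field = solve_laplace_dirichlet(u0, interior)
```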